Science.gov

Sample records for highly specific algorithm

  1. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
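
    The consistency-testing idea described above can be illustrated with a minimal, hedged sketch: a context-specific reconstruction algorithm should return very similar models when fed noisy copies of the same expression data. The model representation (a set of reaction identifiers) and the build_model callable below are hypothetical placeholders, not the interface of any published tool.

    ```python
    # Minimal sketch of a consistency test for context-specific reconstruction:
    # models built from perturbed copies of the same expression data should be
    # highly similar. A "model" here is just a set of reaction IDs; `build_model`
    # is a hypothetical stand-in for any reconstruction algorithm.
    import random

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity between two reaction sets."""
        return len(a & b) / len(a | b) if (a | b) else 1.0

    def consistency_score(build_model, expression, noise=0.05, trials=10, seed=0):
        """Average pairwise Jaccard similarity of models built from noisy inputs."""
        rng = random.Random(seed)
        models = []
        for _ in range(trials):
            # drop a small random fraction of genes to mimic missing/noisy data
            noisy = {g: v for g, v in expression.items() if rng.random() > noise}
            models.append(build_model(noisy))
        pairs = [(i, j) for i in range(trials) for j in range(i + 1, trials)]
        return sum(jaccard(models[i], models[j]) for i, j in pairs) / len(pairs)
    ```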

  2. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms.

    PubMed

    Pacheco, Maria P; Pfau, Thomas; Sauter, Thomas

    2015-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms.

  3. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  4. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  5. High specificity of line-immunoassay based algorithms for recent HIV-1 infection independent of viral subtype and stage of disease

    PubMed Central

    2011-01-01

    Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. Methods: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection were tested blindly by Inno-Lia and classified as either incident (up to 12 m) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis we evaluated factors that might affect the specificity of these algorithms. Results: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities exhibited no significance. Results were similar among 190 untreated patients. Conclusions: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients. PMID:21943091

  6. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  7. Specific optimization of genetic algorithm on special algebras

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Novak, Vilem; Dyba, Martin; Schenk, Jiri

    2016-06-01

    Searching for complex finite algebras can be successfully done by means of a genetic algorithm, as we showed in former works. This genetic algorithm needs specific optimization of crossover and mutation. We present details about these optimizations, which are already implemented in the software application for this task, EQCreator.

  8. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  9. Recognition of plant parts with problem-specific algorithms

    NASA Astrophysics Data System (ADS)

    Schwanke, Joerg; Brendel, Thorsten; Jensch, Peter F.; Megnet, Roland

    1994-06-01

    Automatic micropropagation is necessary to produce high amounts of biomass cost-effectively. Juvenile plants are dissected in a clean-room environment at particular points on the stem or the leaves. A vision system detects possible cutting points and controls a specialized robot. This contribution is directed to the pattern-recognition algorithms used to detect structural parts of the plant.

  10. The High Dispersion Background Algorithm in NEWSIPS

    NASA Astrophysics Data System (ADS)

    Smith, M. A.; Grady, C. A.; O'Brien, P.; de la Pena, M.; Nichols, J.; Garhart, M.; Coulter, B.; Michalitsianos, A.

    1993-12-01

    A two-dimensional interpolating scheme, followed by modeling of the point spread function, is outlined for use in the final archiving NEWSIPS program in removing background fluxes of high dispersion IUE images. So far our tests have been limited mainly to SWP camera images. An integral facet of our background removal algorithm, basisiue, is its execution in a totally automated environment. Toward this end several conditioning steps are required before the background fluxes can be sampled. These include the removal of "wiggles" in the echelle orders, rotation of the camera format, removal of order "splaying", and avoidance of pixels with high fluxes due to permanent image blemishes and cosmic ray hits. Image-specific pixels with such pathologies are eliminated, along with on-order pixels, for a sample of pixels along 26 "swaths" (SWP camera) in the cross-dispersion direction. Smoothed, one-dimensional 7th-degree Chebyshev fits are then computed from the interpolated fluxes modified by a global point spread function determined from the interorder overlap pattern in an ensemble of science images. A second set of continuous Chebyshev functions, perpendicular to the first, is computed next along the positions of the IUE orders by interpolating across fluxes determined from the first set. Thus, this algorithm determines both the background fluxes at arbitrary locations on the image and the amount of interorder flux overlap among short-wavelength orders, which is necessary for the final extraction of spectral fluxes. This work has been supported under NASA Contract NAS5-31230 to the Computer Sciences Corporation.

  11. High specific heat superconducting composite

    DOEpatents

    Steyert, Jr., William A.

    1979-01-01

    A composite superconductor formed from a high specific heat ceramic such as gadolinium oxide or gadolinium-aluminum oxide and a conventional metal conductor such as copper or aluminum which are insolubly mixed together to provide adiabatic stability in a superconducting mode of operation. The addition of a few percent of insoluble gadolinium-aluminum oxide powder or gadolinium oxide powder to copper increases the measured specific heat of the composite by one to two orders of magnitude below the 5 K level while maintaining the high thermal and electrical conductivity of the conventional metal conductor.

  12. High Rate Pulse Processing Algorithms for Microcalorimeters

    NASA Astrophysics Data System (ADS)

    Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.

    2009-12-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge-sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being: a) simple enough to be implemented in the readout electronics; and, b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the predominantly used pulse processing algorithm in the cryogenic-detector community.

  13. Orientation estimation algorithm applied to high-spin projectiles

    NASA Astrophysics Data System (ADS)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.

  14. High specific activity silicon-32

    DOEpatents

    Phillips, Dennis R.; Brzezinski, Mark A.

    1996-01-01

    A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution to from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidization state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  15. High specific activity silicon-32

    DOEpatents

    Phillips, D.R.; Brzezinski, M.A.

    1996-06-11

    A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidation state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  16. THE HIGH ENERGY TRANSIENT EXPLORER TRIGGERING ALGORITHM

    SciTech Connect

    E. FENIMORE; M. GALASSI

    2001-05-01

    The High Energy Transient Explorer uses a triggering algorithm for gamma-ray bursts that can achieve sensitivity near the statistical limit by fitting to several background regions to remove trends. Dozens of trigger criteria run simultaneously, covering time scales from 80 msec to 10.5 sec or longer. Each criterion is controlled by about 25 constants, which gives the flexibility to search wide parameter spaces. On orbit, we have been able to operate at 6σ, a factor of two more sensitive than previous experiments.
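
    As a hedged illustration of the general scheme (fit the background regions to remove trends, then test the foreground interval at an n-sigma threshold), not the HETE flight code, a minimal sketch might look like this:

    ```python
    # Generic rate-trigger sketch (illustrative only, not the HETE-2 algorithm):
    # fit a linear trend to background count regions on either side of a candidate
    # interval, then test whether the foreground counts exceed the extrapolated
    # background by more than `n_sigma` (Poisson approximation).
    import numpy as np

    def triggers(times, counts, fg_slice, bg_slices, n_sigma=6.0):
        bg_t = np.concatenate([times[s] for s in bg_slices])
        bg_c = np.concatenate([counts[s] for s in bg_slices])
        slope, intercept = np.polyfit(bg_t, bg_c, 1)          # remove linear trend
        expected = np.sum(slope * times[fg_slice] + intercept)
        observed = np.sum(counts[fg_slice])
        significance = (observed - expected) / np.sqrt(max(expected, 1.0))
        return significance >= n_sigma, significance
    ```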

  17. Specification of Selected Performance Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas

    2006-10-06

    Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.

  18. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

  19. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system.
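
    Fern itself is a C++ library; purely as an illustration of the idea of hiding task distribution behind a generic grid algorithm, the hypothetical Python sketch below splits a raster into row blocks (with a halo sized to the filter radius) and runs a focal-mean filter on a thread pool. The names and the tiling strategy are assumptions for the example, not Fern's API.

    ```python
    # Sketch of the idea behind Fern-style generic algorithms (not Fern itself):
    # the caller asks for a focal-mean filter; distribution of the work over CPU
    # cores is handled internally by splitting the grid into row blocks.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np
    from scipy.ndimage import uniform_filter

    def focal_mean(grid: np.ndarray, size: int = 3, workers: int = 4) -> np.ndarray:
        halo = size // 2                                   # rows of overlap needed by the filter
        bounds = np.linspace(0, grid.shape[0], workers + 1, dtype=int)

        def run(i):
            lo, hi = bounds[i], bounds[i + 1]
            pad_lo, pad_hi = max(lo - halo, 0), min(hi + halo, grid.shape[0])
            block = uniform_filter(grid[pad_lo:pad_hi], size=size)
            return block[lo - pad_lo : block.shape[0] - (pad_hi - hi)]

        with ThreadPoolExecutor(max_workers=workers) as pool:
            return np.vstack(list(pool.map(run, range(workers))))
    ```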

  20. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system.

  1. High contrast laminography using iterative algorithms

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Jakubek, J.

    2011-01-01

    3D X-ray imaging of the internal structure of large flat objects is often complicated by limited access to all viewing angles or extremely high absorption in certain directions, so the standard method of computed tomography (CT) fails. This problem can be solved by the method of laminography. During a laminographic measurement the imaging detector is placed close to the sample while the X-ray source irradiates both sample and detector at different angles. The application of the state-of-the-art pixel detector Medipix in laminography, together with adapted tomographic iterative algorithms for 3D reconstruction of the sample structure, has been investigated. Iterative algorithms such as EM (Expectation Maximization) and OSEM (Ordered Subset Expectation Maximization) improve the quality of the reconstruction and allow the inclusion of more complex physical models. In this contribution, results and proposed future approaches that could be used for resolution enhancement are presented.
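
    As a hedged illustration of EM-type iterative reconstruction (not the Medipix laminography implementation), the sketch below applies the standard MLEM multiplicative update for a given system matrix A and measured counts y.

    ```python
    # Minimal MLEM (Expectation Maximization) reconstruction sketch, given a
    # system matrix A (projections = A @ image) and measured counts y.
    # Illustrative only; OSEM would apply the same update over subsets of rows.
    import numpy as np

    def mlem(A: np.ndarray, y: np.ndarray, n_iters: int = 50, eps: float = 1e-12):
        x = np.ones(A.shape[1])                  # start from a flat image
        sensitivity = A.sum(axis=0) + eps        # A^T 1, the normalisation term
        for _ in range(n_iters):
            forward = A @ x + eps                # predicted projections
            x *= (A.T @ (y / forward)) / sensitivity
        return x
    ```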

  2. Design specification for the whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.

    1974-01-01

    The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodin's model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating response to stresses from CO2 inhalation, hypoxia, thermal environment, exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).

  3. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) Provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) Provide early warning of tornadic activity, and (3) Accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. The GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.

  4. A Proposed India-Specific Algorithm for Management of Type 2 Diabetes.

    PubMed

    2016-06-01

    Several algorithms and guidelines have been proposed by countries and international professional bodies; however, no recent updated management algorithm is available for Asian Indians. Specifically, algorithms developed and validated in developed nations may not be relevant or applicable to patients in India because of several factors: early age of onset of diabetes, occurrence of diabetes in nonobese and sometimes lean people, differences in the relative contributions of insulin resistance and β-cell dysfunction, marked postprandial glycemia, frequent infections including tuberculosis, low access to healthcare and medications in people of low socioeconomic stratum, ethnic dietary practices (e.g., ingestion of high-carbohydrate diets), and inadequate education regarding hypoglycemia. All these factors should be considered to choose an appropriate therapeutic option in this population. The proposed algorithm is simple, suggests less expensive drugs, and tries to provide an effective and comprehensive framework for delivery of diabetes therapy in primary care in India. The proposed guidelines agree with international recommendations in favoring individualization of therapeutic targets as well as modalities of treatment in a flexible manner suitable to the Indian population. PMID:26909751

  5. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
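
    A minimal sketch of classical matching pursuit as summarized above (unit-norm dictionary atoms, best atom chosen by correlation, subtracted from the residual until a stopping criterion is met); the MPD++ refinements such as coarse-fine grids and multiple-atom extraction are not reproduced here.

    ```python
    # Classical matching pursuit sketch (not the MPD++ variant described above).
    # `dictionary` holds unit-norm atoms as columns; the best-correlated atom is
    # subtracted from the residual each iteration until the stopping criterion.
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=10, tol=1e-6):
        residual = signal.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            correlations = dictionary.T @ residual      # cross-correlation step
            k = np.argmax(np.abs(correlations))
            if np.abs(correlations[k]) < tol:           # stopping criterion
                break
            coeffs[k] += correlations[k]
            residual -= correlations[k] * dictionary[:, k]
        reconstruction = dictionary @ coeffs
        return coeffs, reconstruction, residual
    ```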

  6. Nonlinear algorithm for task-specific tomosynthetic image reconstruction

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Underhill, Hunter A.; Hemler, Paul F.; Lavery, John E.

    1999-05-01

    This investigation defines and tests a simple, nonlinear, task-specific method for rapid tomosynthetic reconstruction of radiographic images designed to allow an increase in specificity at the expense of sensitivity. Representative lumpectomy specimens containing cancer from human breasts were radiographed with a digital mammographic machine. Resulting projective data were processed to yield a series of tomosynthetic slices distributed throughout the breast. Five board-certified radiologists compared tomographic displays of these tissues processed both linearly (control) and nonlinearly (test) and ranked them in terms of their perceived interpretability. In another task, a different set of nine observers estimated the relative depths of six holes bored in a solid Lucite block as perceived when observed in three dimensions as a tomosynthesized series of test and control slices. All participants preferred the nonlinearly generated tomosynthetic mammograms to those produced conventionally, with or without subsequent deblurring by means of iterative deconvolution. The result was similar (p less than 0.015) when the hole-depth experiment was performed objectively. We therefore conclude that, for certain tasks that are unduly compromised by tomosynthetic blurring, the nonlinear tomosynthetic reconstruction method described here may improve diagnostic performance with a negligible increase in cost or complexity.

  7. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
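
    FLANN's OpenCV interface can be exercised in a few lines; the sketch below builds a randomized k-d forest index over float descriptors and applies a ratio test. The descriptors are random stand-ins, and the parameter values (trees, checks, the 0.7 ratio) are illustrative defaults, not recommendations from the paper.

    ```python
    # FLANN via OpenCV: approximate nearest-neighbour matching of float
    # descriptors with a randomized k-d forest index. Descriptors and parameter
    # values below are illustrative only.
    import cv2
    import numpy as np

    des1 = np.random.rand(500, 128).astype(np.float32)   # stand-in query descriptors
    des2 = np.random.rand(500, 128).astype(np.float32)   # stand-in train descriptors

    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)                       # more checks = more accurate, slower

    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe-style ratio test to keep only distinctive matches
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    ```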

  8. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.

  9. A predictor-corrector guidance algorithm for use in high-energy aerobraking system studies

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Powell, Richard W.

    1991-01-01

    A three-degree-of-freedom predictor-corrector guidance algorithm has been developed specifically for use in high-energy aerobraking performance evaluations. The present study reports on both the development of this guidance algorithm and its application to the design of manned Mars aerobraking vehicles. Atmospheric simulations are performed to demonstrate the applicability of this algorithm and to evaluate the effect of atmospheric uncertainties upon the mission requirements. The off-nominal conditions simulated result from atmospheric density and aerodynamic characteristic mispredictions. The guidance algorithm is also used to provide relief from the high deceleration levels typically encountered in a high-energy aerobraking mission profile. Through this analysis, bank-angle modulation is shown to be an effective means of providing deceleration relief. Furthermore, the capability of the guidance algorithm to manage off-nominal vehicle aerodynamic and atmospheric density variations is demonstrated.
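
    A toy sketch of the predictor-corrector idea (not the algorithm of this study): a simplified "predictor" maps a commanded bank angle to a predicted apoapsis error, and the "corrector" drives that error to zero by secant iteration. The model, numbers, and function names are invented for illustration.

    ```python
    # Toy predictor-corrector guidance sketch (illustrative only).
    import math

    def predict_apoapsis_error(bank_deg):
        """Hypothetical stand-in for a full trajectory propagation."""
        lift_fraction = math.cos(math.radians(bank_deg))   # lift-up at 0 deg bank
        predicted_apoapsis_km = 250.0 + 400.0 * lift_fraction
        return predicted_apoapsis_km - 500.0               # target apoapsis: 500 km

    def correct_bank_angle(b0=30.0, b1=60.0, tol=1e-3, max_iter=20):
        e0, e1 = predict_apoapsis_error(b0), predict_apoapsis_error(b1)
        for _ in range(max_iter):
            if abs(e1) < tol:
                break
            b0, b1 = b1, b1 - e1 * (b1 - b0) / (e1 - e0)   # secant update
            b1 = min(max(b1, 0.0), 180.0)                  # keep bank angle in range
            e0, e1 = e1, predict_apoapsis_error(b1)
        return b1

    print(correct_bank_angle())   # converges near cos(bank) = 0.625, about 51.3 deg
    ```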

  10. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that accounts for the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  11. High Flux Isotope Reactor technical specifications

    SciTech Connect

    Not Available

    1985-11-01

    This report gives technical specifications for the High Flux Isotope Reactor (HFIR) on the following: safety limits and limiting safety system settings; limiting conditions for operation; surveillance requirements; design features; and administrative controls.

  12. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crudely estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude transmission map into different areas and applies different guided filtering to those areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the one based on the dark channel prior and guided filtering. The average computation time of the new algorithm is around 40% of that algorithm's, and the detection ability for UAV images in fog and haze is improved effectively.
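
    For reference, a hedged sketch of the standard dark-channel-prior transmission estimate that such algorithms start from is shown below (the input is assumed to be a float color image scaled to [0, 1]); the paper's edge-driven region splitting and per-region guided filtering are not reproduced, and the atmospheric-light step is a common simplification.

    ```python
    # Standard dark-channel-prior transmission estimate (the crude map that the
    # improved algorithm refines). Illustrative only; not the paper's method.
    import cv2
    import numpy as np

    def dark_channel(img, patch=15):
        """Per-pixel channel minimum followed by a local minimum filter."""
        min_rgb = np.min(img, axis=2)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        return cv2.erode(min_rgb, kernel)

    def estimate_transmission(img, omega=0.95, patch=15):
        """Crude transmission map t = 1 - omega * dark_channel(I / A)."""
        dark = dark_channel(img, patch)
        n_bright = max(dark.size // 1000, 1)                # brightest 0.1% of dark channel
        idx = np.argsort(dark.ravel())[-n_bright:]
        atmosphere = img.reshape(-1, 3)[idx].max(axis=0)    # simplified atmospheric light A
        normalized = img / np.maximum(atmosphere, 1e-6)
        return 1.0 - omega * dark_channel(normalized, patch)

    # Usage (hypothetical input file):
    # img = cv2.imread("hazy.png").astype(np.float64) / 255.0
    # t = estimate_transmission(img)
    ```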

  13. A highly accurate heuristic algorithm for the haplotype assembly problem

    PubMed Central

    2013-01-01

    Background: Single nucleotide polymorphisms (SNPs) are the most common form of genetic variation in human DNA. The sequence of SNPs in each of the two copies of a given chromosome in a diploid organism is referred to as a haplotype. Haplotype information has many applications such as gene disease diagnoses, drug design, etc. The haplotype assembly problem is defined as follows: Given a set of fragments sequenced from the two copies of a chromosome of a single individual, and their locations in the chromosome, which can be pre-determined by aligning the fragments to a reference DNA sequence, the goal here is to reconstruct two haplotypes (h1, h2) from the input fragments. Existing algorithms do not work well when the error rate of fragments is high. Here we design an algorithm that can give accurate solutions, even if the error rate of fragments is high. Results: We first give a dynamic programming algorithm that can give exact solutions to the haplotype assembly problem. The time complexity of the algorithm is O(n × 2^t × t), where n is the number of SNPs, and t is the maximum coverage of a SNP site. The algorithm is slow when t is large. To solve the problem when t is large, we further propose a heuristic algorithm on the basis of the dynamic programming algorithm. Experiments show that our heuristic algorithm can give very accurate solutions. Conclusions: We have tested our algorithm on a set of benchmark datasets. Experiments show that our algorithm can give very accurate solutions. It outperforms most of the existing programs when the error rate of the input fragments is high. PMID:23445458
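
    The sketch below is not the paper's dynamic program; it only illustrates the objective most haplotype-assembly methods optimize, the minimum error correction (MEC) score: given a split of the fragments into two groups, count how many fragment alleles disagree with their group's consensus. The fragment and partition representations are assumptions made for the example.

    ```python
    # Hedged sketch of the haplotype-assembly objective (MEC score), not the
    # paper's O(n * 2^t * t) dynamic program. Fragments are dicts mapping
    # SNP index -> allele (0/1); partition[i] assigns fragment i to haplotype 0 or 1.
    from collections import Counter

    def consensus(fragments):
        """Majority allele at every SNP site covered by the given fragments."""
        votes = {}
        for frag in fragments:
            for site, allele in frag.items():
                votes.setdefault(site, Counter())[allele] += 1
        return {site: c.most_common(1)[0][0] for site, c in votes.items()}

    def mec_score(fragments, partition):
        """Number of fragment alleles that disagree with their group's consensus."""
        groups = ([f for f, p in zip(fragments, partition) if p == 0],
                  [f for f, p in zip(fragments, partition) if p == 1])
        haplotypes = [consensus(g) for g in groups]
        return sum(1
                   for g, h in zip(groups, haplotypes)
                   for frag in g
                   for site, allele in frag.items()
                   if h[site] != allele)
    ```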

  14. A fast directional algorithm for high-frequency electromagnetic scattering

    SciTech Connect

    Tsuji, Paul; Ying Lexing

    2011-06-20

    This paper is concerned with the fast solution of high-frequency electromagnetic scattering problems using the boundary integral formulation. We extend the O(N log N) directional multilevel algorithm previously proposed for the acoustic scattering case to the vector electromagnetic case. We also detail how to incorporate the curl operator of the magnetic field integral equation into the algorithm. When combined with a standard iterative method, this results in an almost linear complexity solver for the combined field integral equations. In addition, the butterfly algorithm is utilized to compute the far field pattern and radar cross section with O(N log N) complexity.

  15. Algorithms for high aspect ratio oriented triangulations

    NASA Technical Reports Server (NTRS)

    Posenau, Mary-Anne K.

    1995-01-01

    Grid generation plays an integral part in the solution of computational fluid dynamics problems for aerodynamics applications. A major difficulty with standard structured grid generation, which produces quadrilateral (or hexahedral) elements with implicit connectivity, has been the requirement for a great deal of human intervention in developing grids around complex configurations. This has led to investigations into unstructured grids with explicit connectivities, which are primarily composed of triangular (or tetrahedral) elements, although other subdivisions of convex cells may be used. The existence of large gradients in the solution of aerodynamic problems may be exploited to reduce the computational effort by using high aspect ratio elements in high gradient regions. However, the heuristic approaches currently in use do not adequately address this need for high aspect ratio unstructured grids. High aspect ratio triangulations very often produce the large angles that are to be avoided. Point generation techniques based on contour or front generation are judged to be the most promising in terms of being able to handle complicated multiple body objects, with this technique lending itself well to adaptivity. The eventual goal encompasses several phases: first, a partitioning phase, in which the Voronoi diagram of a set of points and line segments (the input set) will be generated to partition the input domain; second, a contour generation phase in which body-conforming contours are used to subdivide the partition further as well as introduce the foundation for aspect ratio control; and third, a Steiner triangulation phase in which points are added to the partition to enable triangulation while controlling angle bounds and aspect ratio. This provides a combination of the advancing front/contour techniques and refinement. By using a front, aspect ratio can be better controlled. By using refinement, bounds on angles can be maintained, while attempting to minimize

  16. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  17. A High Precision Terahertz Wave Image Reconstruction Algorithm.

    PubMed

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  18. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models.

  19. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm to estimate the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantage of this algorithm is its high degree of parallelization and the mathematical simplicity of the operations implemented. This algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
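
    A hedged sketch of gradient-descent unmixing under the linear mixing model Y ≈ E·A is given below, alternating gradient steps on the endmembers E and abundances A with non-negativity and sum-to-one enforced by a crude projection; the step size, initialization, and projection are illustrative choices, not taken from the paper.

    ```python
    # Hedged sketch of gradient-descent unmixing under the linear mixing model
    # Y ~ E @ A (Y: bands x pixels, E: bands x p endmembers, A: p x pixels).
    # Non-negativity and abundance sum-to-one are enforced by a crude projection.
    import numpy as np

    def unmix(Y, p, n_iters=500, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        bands, pixels = Y.shape
        E = Y[:, rng.choice(pixels, p, replace=False)].astype(float)  # init from pixels
        A = np.full((p, pixels), 1.0 / p)
        for _ in range(n_iters):
            R = E @ A - Y                                 # residual of the linear model
            E -= lr * (R @ A.T)                           # gradient step on endmembers
            A -= lr * (E.T @ R)                           # gradient step on abundances
            A = np.clip(A, 0.0, None)                     # non-negativity
            A /= A.sum(axis=0, keepdims=True) + 1e-12     # sum-to-one projection
        return E, A
    ```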

  20. Production of high specific activity silicon-32

    SciTech Connect

    Phillips, D.R.; Brzezinski, M.A.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development Project (LDRD) at Los Alamos National Laboratory (LANL). There were two primary objectives for the work performed under this project. The first was to take advantage of capabilities and facilities at Los Alamos to produce the radionuclide {sup 32}Si in unusually high specific activity. The second was to combine the radioanalytical expertise at Los Alamos with the expertise at the University of California to develop methods for the application of {sup 32}Si in biological oceanographic research related to global climate modeling. The first objective was met by developing targetry for proton spallation production of {sup 32}Si in KCl targets and chemistry for its recovery in very high specific activity. The second objective was met by developing a validated field-useable, radioanalytical technique, based upon gas-flow proportional counting, to measure the dynamics of silicon uptake by naturally occurring diatoms.

  1. Development of High Specific Strength Envelope Materials

    NASA Astrophysics Data System (ADS)

    Komatsu, Keiji; Sano, Masa-Aki; Kakuta, Yoshiaki

    Progress in materials technology has produced a much more durable synthetic fabric envelope for the non-rigid airship. Flexible materials are required to form airship envelopes, ballonets, load curtains, gas bags and coverings for rigid structures. Polybenzoxazole fiber (Zylon) and polyarylate fiber (Vectran) show high specific tensile strength, so we developed membranes using these high specific tensile strength fibers as load carriers. The main material developed is a Zylon or Vectran load carrier sealed internally with a polyurethane-bonded inner gas retention film (EVOH). The external surface provides weather protection with, for instance, a titanium oxide integrated polyurethane or Tedlar film. The mechanical test results show that a tensile strength of 1,000 N/cm is attained with a weight of less than 230 g/m2. In addition to the mechanical properties, the temperature dependence of the joint strength and the solar absorptivity and emissivity of the surface are measured.

  2. Electromagnetic properties of high specific surface minerals

    NASA Astrophysics Data System (ADS)

    Klein, Katherine Anne

    Interparticle electrical forces play a dominant role in the behaviour of high specific surface minerals, such as clays. This fact encourages the use of small electromagnetic perturbations to assess the microscale properties of these materials. Thus, this research focuses on using electromagnetic waves to understand fundamental particle-particle and particle-fluid interactions, and fabric formation in high specific surface mineral-fluid mixtures (particle size <~1 μm). Topics addressed in this study include: the role of specific surface and double layer phenomena in the engineering behaviour of clay-water-electrolyte mixtures; the interplay between surface conduction, double layer polarization, and interfacial polarization; the relationship between fabric, permittivity, shear wave velocity, and engineering properties in soft slurries; and the effect of ferromagnetic impurities on electromagnetic measurements. The critical role of specific surface on the engineering properties of fine-grained soils is demonstrated through fundamental principles and empirical correlations. Afterwards, the effect of specific surface on the electromagnetic properties of particulate materials is studied using simple microscale analyses of conduction and polarization phenomena in particle-fluid mixtures, and corroborated by experimentation. These results clarify the relative importance of specific surface, water content, electrolyte type, and ionic concentration on the electrical properties of particulate materials. The sensitivity of electromagnetic parameters to particle orientation is addressed in light of the potential assessment of anisotropy in engineering properties. It is shown that effective conductivity measurements provide a robust method to determine electrical anisotropy in particle-fluid mixtures. However, real relative dielectric measurements at frequencies below 1 MHz are unreliable due to electrode effects (especially in highly conductive mixtures). The relationship

  3. Wp specific methylation of highly proliferated LCLs.

    PubMed

    Park, Jung-Hoon; Jeon, Jae-Pil; Shim, Sung-Mi; Nam, Hye-Young; Kim, Joon-Woo; Han, Bok-Ghee; Lee, Suman

    2007-06-29

    The epigenetic regulation of viral genes may be important for the life cycle of EBV. We determined the methylation status of three viral promoters (Wp, Cp, Qp) from EBV B-lymphoblastoid cell lines (LCLs) by pyrosequencing. Our pyrosequencing data showed that the CpG region of Wp was methylated, but the others were not. Interestingly, Wp methylation increased with proliferation of the LCLs: it was as high as 74.9% in late-passage LCLs, but 25.6% in early-passage LCLs. Wp-specific hypermethylation (>80%) was also found in two Burkitt's lymphoma cell lines. Interestingly, the expression of the EBNA2 gene, which is located directly next to Wp, was associated with its methylation. Our data suggest that Wp-specific methylation may be an important indicator of the proliferation status of LCLs, and that the epigenetic regulation of the EBNA2 gene by Wp should be further defined, possibly in connection with other biological processes.

  4. Wp specific methylation of highly proliferated LCLs

    SciTech Connect

    Park, Jung-Hoon; Jeon, Jae-Pil; Shim, Sung-Mi; Nam, Hye-Young; Kim, Joon-Woo; Han, Bok-Ghee; Lee, Suman. E-mail: suman@cha.ac.kr

    2007-06-29

    The epigenetic regulation of viral genes may be important for the life cycle of EBV. We determined the methylation status of three viral promoters (Wp, Cp, Qp) from EBV B-lymphoblastoid cell lines (LCLs) by pyrosequencing. Our pyrosequencing data showed that the CpG region of Wp was methylated, but the others were not. Interestingly, Wp methylation increased with proliferation of the LCLs: it was as high as 74.9% in late-passage LCLs, but 25.6% in early-passage LCLs. Wp-specific hypermethylation (>80%) was also found in two Burkitt's lymphoma cell lines. Interestingly, the expression of the EBNA2 gene, which is located directly next to Wp, was associated with its methylation. Our data suggest that Wp-specific methylation may be an important indicator of the proliferation status of LCLs, and that the epigenetic regulation of the EBNA2 gene by Wp should be further defined, possibly in connection with other biological processes.

  5. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of the work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction approaches, and to design algorithms for evaluation of IUE High Dispersion spectra. It was concluded that use of the Re-Sampled Image (SIHI) file was acceptable. Since a Gaussian profile worked well for the core and a Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
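
    A Voigt profile is the convolution of a Gaussian (core) with a Lorentzian (wings), which matches the behavior described above. As a rough illustration only, not the NEWSIPS code, the sketch below evaluates a Voigt cross-dispersion profile with SciPy and turns it into normalized extraction weights; the sigma and gamma values here are placeholders, whereas the real masks vary across the SWP detector.

```python
import numpy as np
from scipy.special import wofz

def voigt_profile(x, sigma, gamma):
    """Voigt profile: Gaussian (std sigma) convolved with a Lorentzian
    (half-width gamma), evaluated via the Faddeeva function wofz."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Hypothetical cross-dispersion coordinates (pixels) and profile parameters.
x = np.linspace(-10.0, 10.0, 201)
profile = voigt_profile(x, sigma=1.2, gamma=0.8)
weights = profile / profile.sum()          # normalized extraction weights for one column
print(f"peak weight: {weights.max():.4f}")
```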

  6. Production Of High Specific Activity Copper-67

    DOEpatents

    Jamriska, Sr., David J.; Taylor, Wayne A.; Ott, Martin A.; Fowler, Malcolm; Heaton, Richard C.

    2003-10-28

    A process is disclosed for the selective production and isolation of high specific activity Cu-67 from a proton-irradiated, enriched Zn-70 target. The process comprises target fabrication, target irradiation with low-energy (<25 MeV) protons, chemical separation of the Cu-67 product from the target material and from radioactive impurities of gallium, cobalt, iron, and stable aluminum via electrochemical methods or ion exchange using both anion and cation organic ion exchangers, chemical recovery of the enriched Zn-70 target material, and fabrication of new targets for re-irradiation.

  7. Production Of High Specific Activity Copper-67

    DOEpatents

    Jamriska, Sr., David J.; Taylor, Wayne A.; Ott, Martin A.; Fowler, Malcolm; Heaton, Richard C.

    2002-12-03

    A process is disclosed for the selective production and isolation of high specific activity Cu-67 from a proton-irradiated, enriched Zn-70 target. The process comprises target fabrication, target irradiation with low-energy (<25 MeV) protons, chemical separation of the Cu-67 product from the target material and from radioactive impurities of gallium, cobalt, iron, and stable aluminum via electrochemical methods or ion exchange using both anion and cation organic ion exchangers, chemical recovery of the enriched Zn-70 target material, and fabrication of new targets for re-irradiation.

  8. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
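
    The evaluation step described above reduces to applying a segmentation mask to a dose map and comparing the mean and peak organ doses obtained with automated versus expert masks. The sketch below is a minimal illustration of that comparison on synthetic arrays; the dose map, masks, and error formula are assumptions, not the study's actual data or software.

```python
import numpy as np

def organ_dose(dose_map, mask):
    """Mean and peak dose within a binary organ mask."""
    vals = dose_map[mask]
    return vals.mean(), vals.max()

def percent_error(auto, expert):
    return 100.0 * abs(auto - expert) / expert

# Hypothetical 3-D dose map (Gy) and two segmentations of the same organ region.
rng = np.random.default_rng(0)
dose = rng.gamma(shape=2.0, scale=5.0, size=(40, 40, 40))
expert_mask = np.zeros(dose.shape, dtype=bool)
expert_mask[10:20, 10:20, 10:20] = True
auto_mask = np.zeros(dose.shape, dtype=bool)
auto_mask[11:21, 10:20, 10:19] = True        # slightly shifted, imperfect segmentation

mean_e, peak_e = organ_dose(dose, expert_mask)
mean_a, peak_a = organ_dose(dose, auto_mask)
print(f"mean-dose error: {percent_error(mean_a, mean_e):.1f}%  "
      f"peak-dose error: {percent_error(peak_a, peak_e):.1f}%")
```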

  9. Stride Search: a general algorithm for storm detection in high-resolution climate data

    NASA Astrophysics Data System (ADS)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-01

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
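
    The central geometric idea is that each search sector has a fixed physical size, so the longitudinal spacing between sector centers must widen as 1/cos(latitude). The sketch below generates such sector centers; the sector radius, latitude limit, and function names are illustrative assumptions, not the published implementation.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def stride_search_centers(sector_radius_km, lat_limit_deg=80.0):
    """Generate search-sector centers with constant physical spacing.
    The latitudinal stride is fixed; the longitudinal stride grows as
    1/cos(latitude) so every sector covers the same area on the sphere."""
    dlat = np.degrees(sector_radius_km / EARTH_RADIUS_KM)   # stride in latitude (deg)
    centers = []
    lat = -lat_limit_deg
    while lat <= lat_limit_deg:
        dlon = dlat / max(np.cos(np.radians(lat)), 1e-6)     # widen stride near the poles
        lon = 0.0
        while lon < 360.0:
            centers.append((lat, lon))
            lon += dlon
        lat += dlat
    return centers

centers = stride_search_centers(sector_radius_km=500.0)
print(len(centers), "sectors cover the globe up to +/-80 deg latitude")
```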

  10. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet made at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.

  11. A high performance hardware implementation image encryption with AES algorithm

    NASA Astrophysics Data System (ADS)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed, high-throughput algorithm for encrypting images. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline, a control unit based on logic gates, optimized multiplier blocks in the MixColumns phase, and simultaneous key and round generation. This procedure makes AES suitable for fast image encryption. A 128-bit AES was implemented on an Altera FPGA with the following results: a throughput of 6 Gbps at 471 MHz. The encryption time for a 32×32 test image is 1.15 ms.
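
    As a purely software illustration of the operation being accelerated (and assuming the pycryptodome package is available), the sketch below encrypts the raw bytes of a small image with 128-bit AES in CTR mode. It demonstrates the cipher, not the paper's pipelined FPGA datapath, and the image size and key are arbitrary.

```python
# Software illustration only (assumes the pycryptodome package); the paper's
# contribution is the pipelined FPGA datapath, which this sketch does not model.
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

image = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)   # 32x32 test image
key = get_random_bytes(16)                        # 128-bit AES key

cipher = AES.new(key, AES.MODE_CTR)               # CTR mode turns AES into a stream cipher
ciphertext = cipher.encrypt(image.tobytes())

# Decryption reuses the same key and nonce.
plain = AES.new(key, AES.MODE_CTR, nonce=cipher.nonce).decrypt(ciphertext)
assert np.array_equal(np.frombuffer(plain, dtype=np.uint8).reshape(32, 32), image)
print(f"encrypted {len(ciphertext)} bytes")
```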

  12. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASIC's to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  13. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.

  14. Production of high specific activity silicon-32

    DOEpatents

    Phillips, Dennis R.; Brzezinski, Mark A.

    1994-01-01

    A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution to from about 5.5 to about 7.5, admixing sufficient molybdate reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidation state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  15. Using patient-specific phantoms to evaluate deformable image registration algorithms for adaptive radiation therapy.

    PubMed

    Stanley, Nick; Glide-Hurst, Carri; Kim, Jinkoo; Adams, Jeffrey; Li, Shunshan; Wen, Ning; Chetty, Indrin J; Zhong, Hualiang

    2013-11-04

    The quality of adaptive treatment planning depends on the accuracy of its underlying deformable image registration (DIR). The purpose of this study is to evaluate the performance of two DIR algorithms, B-spline-based deformable multipass (DMP) and deformable demons (Demons), implemented in a commercial software package. Evaluations were conducted using both computational and physical deformable phantoms. Based on a finite element method (FEM), a total of 11 computational models were developed from a set of CT images acquired from four lung and one prostate cancer patients. FEM-generated displacement vector fields (DVF) were used to construct the lung and prostate image phantoms. Based on a fast Fourier transform technique, image noise power spectrum was incorporated into the prostate image phantoms to create simulated CBCT images. The FEM-DVF served as a gold standard for verification of the two registration algorithms performed on these phantoms. The registration algorithms were also evaluated at the homologous points quantified in the CT images of a physical lung phantom. The results indicated that the mean errors of the DMP algorithm were in the range of 1.0-3.1 mm for the computational phantoms and 1.9 mm for the physical lung phantom. For the computational prostate phantoms, the corresponding mean error was 1.0-1.9 mm in the prostate, 1.9-2.4 mm in the rectum, and 1.8-2.1 mm over the entire patient body. Sinusoidal errors induced by B-spline interpolations were observed in all the displacement profiles of the DMP registrations. Regions of large displacements were observed to have more registration errors. Patient-specific FEM models have been developed to evaluate the DIR algorithms implemented in the commercial software package. It has been found that the accuracy of these algorithms is patient dependent and related to various factors including tissue deformation magnitudes and image intensity gradients across the regions of interest. This may suggest that
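
    With the FEM-generated DVF as the gold standard, the reported errors amount to statistics of the per-voxel difference between two displacement fields inside a region of interest. The sketch below computes a mean and RMS error in that spirit on synthetic fields; the arrays, mask, and noise level are invented for illustration and are not the study's data.

```python
import numpy as np

def dvf_error(dvf_test, dvf_gold, mask=None):
    """Mean and RMS magnitude (mm) of the vector difference between a
    registration-produced DVF and a gold-standard (e.g., FEM) DVF."""
    diff = np.linalg.norm(dvf_test - dvf_gold, axis=-1)   # per-voxel error magnitude
    if mask is not None:
        diff = diff[mask]
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Hypothetical 3-component DVFs (mm) on a small grid, plus an organ mask.
rng = np.random.default_rng(1)
gold = rng.normal(0.0, 3.0, size=(20, 20, 20, 3))
test = gold + rng.normal(0.0, 1.5, size=gold.shape)        # imperfect registration
organ_mask = np.zeros((20, 20, 20), dtype=bool)
organ_mask[5:12, 5:12, 5:12] = True

mean_err, rms_err = dvf_error(test, gold, organ_mask)
print(f"mean error {mean_err:.1f} mm, RMS {rms_err:.1f} mm")
```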

  16. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  17. Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dinc, Ali

    2016-09-01

    In this study, an original code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) by employing an elitist genetic algorithm. First, preliminary sizing of the UAV and its turboprop engine was done by the code for a given mission profile. Secondly, single- and multi-objective optimizations were performed for selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In the first single-objective case, UAV loiter time was improved by 17.5% from baseline within the given bounds, or constraints, on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% from baseline. In the multi-objective case, where the previous two objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% from baseline, respectively, for the same constraints.
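
    As a minimal sketch of the elitist-GA idea (not the paper's engine model), the code below maximizes a made-up loiter-time surrogate over two bounded design variables, compressor pressure ratio and burner exit temperature. The surrogate objective, bounds, and GA settings are all assumptions chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed bounds on the two design variables:
# compressor pressure ratio and burner exit temperature [K].
BOUNDS = np.array([[6.0, 16.0], [1200.0, 1600.0]])

def loiter_surrogate(x):
    """Hypothetical smooth surrogate for loiter time (h); NOT the paper's
    engine model, just a stand-in objective with an interior optimum."""
    pr, t4 = x
    return -(pr - 12.0) ** 2 / 20.0 - (t4 - 1450.0) ** 2 / 4.0e4 + 10.0

def elitist_ga(objective, pop_size=40, generations=60, elite=2, mut_sigma=0.05):
    span = BOUNDS[:, 1] - BOUNDS[:, 0]
    pop = BOUNDS[:, 0] + rng.random((pop_size, 2)) * span      # random initial population
    for _ in range(generations):
        fit = np.array([objective(ind) for ind in pop])
        pop = pop[np.argsort(fit)[::-1]]                        # sort best-first
        children = [pop[i].copy() for i in range(elite)]        # elitism: keep best as-is
        while len(children) < pop_size:
            a, b = pop[rng.integers(0, pop_size // 2, size=2)]  # parents from the better half
            w = rng.random()
            child = w * a + (1.0 - w) * b                       # blend crossover
            child += rng.normal(0.0, mut_sigma * span)          # Gaussian mutation
            children.append(np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1]))
        pop = np.array(children)
    fit = np.array([objective(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

best_x, best_f = elitist_ga(loiter_surrogate)
print("best (PR, T4):", best_x, "surrogate loiter:", round(float(best_f), 2))
```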

  18. High pressure humidification columns: Design equations, algorithm, and computer code

    SciTech Connect

    Enick, R.M.; Klara, S.M.; Marano, J.J.

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.

  19. A fast general-purpose clustering algorithm based on FPGAs for high-throughput data processing

    NASA Astrophysics Data System (ADS)

    Annovi, A.; Beretta, M.

    2010-05-01

    We present a fast general-purpose algorithm for high-throughput clustering of data "with a two-dimensional organization". The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes this algorithm especially well suited for problems where the data have high density, e.g. in the case of tracking devices working under high-luminosity conditions such as those of LHC or super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of application. In fact, its core does not specifically rely on the kind of data or detector it is working for, while the second step can and should be tailored for a given application. For example, in the case of spatial measurements with silicon pixel detectors, the second step performs a center-of-charge calculation. Applications can thus be foreseen for other detectors and other scientific fields ranging from HEP calorimeters to medical imaging. An additional advantage of this two-step approach is that the typical clustering-related calculations (second step) are separated from the combinatorial complications of clustering. This separation simplifies the design of the second step and enables it to perform sophisticated calculations achieving offline quality in online applications. The algorithm is general purpose in the sense that only minimal assumptions on the kind of clustering to be performed are made.
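
    In software terms, the two-step structure can be mimicked by first grouping adjacent hits into clusters and then running an application-specific analysis, such as a center-of-charge calculation, on each cluster. The sketch below does exactly that on a handful of invented pixel hits; it illustrates the logic only, not the FPGA pipeline.

```python
from collections import deque

# Hypothetical pixel hits: (row, col) -> deposited charge
hits = {(2, 3): 5.0, (2, 4): 7.0, (3, 4): 2.0, (7, 7): 4.0, (7, 8): 1.0}

def cluster_hits(hits):
    """Step 1 (core): group 4-connected hits into clusters via flood fill."""
    unvisited = set(hits)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], deque([seed])
        while queue:
            r, c = queue.popleft()
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.append(nb)
                    queue.append(nb)
        clusters.append(cluster)
    return clusters

def center_of_charge(cluster, hits):
    """Step 2 (application-specific): charge-weighted centroid of one cluster."""
    q = sum(hits[p] for p in cluster)
    r = sum(hits[p] * p[0] for p in cluster) / q
    c = sum(hits[p] * p[1] for p in cluster) / q
    return r, c, q

for cl in cluster_hits(hits):
    print(center_of_charge(cl, hits))
```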

  20. A moving frame algorithm for high Mach number hydrodynamics

    NASA Astrophysics Data System (ADS)

    Trac, Hy; Pen, Ue-Li

    2004-07-01

    We present a new approach to Eulerian computational fluid dynamics that is designed to work at high Mach numbers encountered in astrophysical hydrodynamic simulations. Standard Eulerian schemes that strictly conserve total energy suffer from the high Mach number problem and proposed solutions to additionally solve the entropy or thermal energy still have their limitations. In our approach, the Eulerian conservation equations are solved in an adaptive frame moving with the fluid where Mach numbers are minimized. The moving frame approach uses a velocity decomposition technique to define local kinetic variables while storing the bulk kinetic components in a smoothed background velocity field that is associated with the grid velocity. Gravitationally induced accelerations are added to the grid, thereby minimizing the spurious heating problem encountered in cold gas flows. Separately tracking local and bulk flow components allows thermodynamic variables to be accurately calculated in both subsonic and supersonic regions. A main feature of the algorithm, that is not possible in previous Eulerian implementations, is the ability to resolve shocks and prevent spurious heating where both the pre-shock and post-shock fluid are supersonic. The hybrid algorithm combines the high-resolution shock capturing ability of the second-order accurate Eulerian TVD scheme with a low-diffusion Lagrangian advection scheme. We have implemented a cosmological code where the hydrodynamic evolution of the baryons is captured using the moving frame algorithm while the gravitational evolution of the collisionless dark matter is tracked using a particle-mesh N-body algorithm. Hydrodynamic and cosmological tests are described and results presented. The current code is fast, memory-friendly, and parallelized for shared-memory machines.

  1. International multidimensional authenticity specification (IMAS) algorithm for detection of commercial pomegranate juice adulteration.

    PubMed

    Zhang, Yanjun; Krueger, Dana; Durst, Robert; Lee, Rupo; Wang, David; Seeram, Navindra; Heber, David

    2009-03-25

    The pomegranate fruit (Punica granatum) has become an international high-value crop for the production of commercial pomegranate juice (PJ). The perceived consumer value of PJ is due in large part to its potential health benefits based on a significant body of medical research conducted with authentic PJ. To establish criteria for authenticating PJ, a new International Multidimensional Authenticity Specifications (IMAS) algorithm was developed through consideration of existing databases and comprehensive chemical characterization of 45 commercial juice samples from 23 different manufacturers in the United States. In addition to analysis of commercial juice samples obtained in the United States, data from other analyses of pomegranate juice and fruits including samples from Iran, Turkey, Azerbaijan, Syria, India, and China were considered in developing this protocol. There is universal agreement that the presence of a highly constant group of six anthocyanins together with punicalagins characterizes polyphenols in PJ. At a total sugar concentration of 16 degrees Brix, PJ contains characteristic sugars including mannitol at >0.3 g/100 mL. Ratios of glucose to mannitol of 4-15 and of glucose to fructose of 0.8-1.0 are also characteristic of PJ. In addition, no sucrose should be present because of isomerase activity during commercial processing. Stable isotope ratio mass spectrometry at > -25 per thousand assures that there is no corn or cane sugar added to PJ. Sorbitol was present at <0.025 g/100 mL; maltose and tartaric acid were not detected. The presence of the amino acid proline at >25 mg/L is indicative of added grape products. Malic acid at >0.1 g/100 mL indicates adulteration with apple, pear, grape, cherry, plum, or aronia juice. Other adulteration methods include the addition of highly concentrated aronia, blueberry, or blackberry juices or natural grape pigments to poor-quality juices to imitate the color of pomegranate juice, which results in
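
    Several of the criteria quoted above are simple threshold or ratio checks, so a minimal rule-based screen can be sketched directly from them. In the illustration below the thresholds are taken from the abstract, while the field names and the sample composition are invented; this is not the published IMAS implementation.

```python
def imas_checks(sample):
    """Apply a subset of the IMAS compositional criteria quoted in the abstract.
    `sample` keys are illustrative field names (g/100 mL unless noted otherwise)."""
    return {
        "mannitol > 0.3 g/100 mL":      sample["mannitol"] > 0.3,
        "glucose/mannitol in 4-15":     4.0 <= sample["glucose"] / sample["mannitol"] <= 15.0,
        "glucose/fructose in 0.8-1.0":  0.8 <= sample["glucose"] / sample["fructose"] <= 1.0,
        "no sucrose":                   sample["sucrose"] == 0.0,
        "sorbitol < 0.025 g/100 mL":    sample["sorbitol"] < 0.025,
        "proline <= 25 mg/L":           sample["proline_mg_L"] <= 25.0,
        "malic acid <= 0.1 g/100 mL":   sample["malic_acid"] <= 0.1,
    }

# Hypothetical juice analysis at 16 degrees Brix.
juice = dict(mannitol=0.45, glucose=6.5, fructose=7.0, sucrose=0.0,
             sorbitol=0.01, proline_mg_L=10.0, malic_acid=0.05)
for rule, ok in imas_checks(juice).items():
    print(f"{'PASS' if ok else 'FAIL'}  {rule}")
```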

  3. Algorithmic Tools for Mining High-Dimensional Cytometry Data.

    PubMed

    Chester, Cariad; Maecker, Holden T

    2015-08-01

    The advent of mass cytometry has led to an unprecedented increase in the number of analytes measured in individual cells, thereby increasing the complexity and information content of cytometric data. Although this technology is ideally suited to the detailed examination of the immune system, the applicability of the different methods for analyzing such complex data is less clear. Conventional data analysis by manual gating of cells in biaxial dot plots is often subjective, time consuming, and neglectful of much of the information contained in a highly dimensional cytometric dataset. Algorithmic data mining has the promise to eliminate these concerns, and several such tools have been applied recently to mass cytometry data. We review computational data mining tools that have been used to analyze mass cytometry data, outline their differences, and comment on their strengths and limitations. This review will help immunologists to identify suitable algorithmic tools for their particular projects.

  4. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space, with low communication costs that are optimal in some cases. Another evaluated parameter, the vulnerability index, is then considered as a principle for estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.

  5. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    PubMed

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

    The aim of this study was to present a new training algorithm using artificial neural networks called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO) applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.

  6. Specificity and Sensitivity of Claims-Based Algorithms for Identifying Members of Medicare+Choice Health Plans That Have Chronic Medical Conditions

    PubMed Central

    Rector, Thomas S; Wickstrom, Steven L; Shah, Mona; Thomas Greeenlee, N; Rheault, Paula; Rogowski, Jeannette; Freedman, Vicki; Adams, John; Escarce, José J

    2004-01-01

    Objective To examine the effects of varying diagnostic and pharmaceutical criteria on the performance of claims-based algorithms for identifying beneficiaries with hypertension, heart failure, chronic lung disease, arthritis, glaucoma, and diabetes. Study Setting Secondary 1999–2000 data from two Medicare+Choice health plans. Study Design Retrospective analysis of algorithm specificity and sensitivity. Data Collection Physician, facility, and pharmacy claims data were extracted from electronic records for a sample of 3,633 continuously enrolled beneficiaries who responded to an independent survey that included questions about chronic diseases. Principal Findings Compared to an algorithm that required a single medical claim in a one-year period that listed the diagnosis, either requiring that the diagnosis be listed on two separate claims or requiring that the diagnosis be listed on one claim for a face-to-face encounter with a health care provider significantly increased specificity for the conditions studied by 0.03 to 0.11. Specificity of algorithms was significantly improved by 0.03 to 0.17 when both a medical claim with a diagnosis and a pharmacy claim for a medication commonly used to treat the condition were required. Sensitivity improved significantly by 0.01 to 0.20 when the algorithm relied on a medical claim with a diagnosis or a pharmacy claim, and by 0.05 to 0.17 when two years rather than one year of claims data were analyzed. Algorithms with specificity greater than 0.95 were found for all six conditions. Sensitivity above 0.90 was not achieved for all conditions. Conclusions Varying claims criteria improved the performance of case-finding algorithms for six chronic conditions. Highly specific, and sometimes sensitive, algorithms for identifying members of health plans with several chronic conditions can be developed using claims data. PMID:15533190
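
    The underlying computation is a comparison of a claims-based case-finding rule against a reference standard (here, survey self-report), summarized as sensitivity and specificity. The sketch below applies one hypothetical rule of the kind studied, two diagnosis claims or one diagnosis claim plus a related pharmacy claim, to invented beneficiary records.

```python
def sens_spec(flagged, truth):
    """Sensitivity and specificity of a case-finding rule vs. a reference standard."""
    tp = sum(f and t for f, t in zip(flagged, truth))
    tn = sum((not f) and (not t) for f, t in zip(flagged, truth))
    fn = sum((not f) and t for f, t in zip(flagged, truth))
    fp = sum(f and (not t) for f, t in zip(flagged, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical beneficiaries: (# diagnosis claims, has related drug claim, self-reported disease)
members = [(2, True, True), (1, False, False), (0, False, False),
           (1, True, True), (1, False, True), (0, True, False)]

# Example rule of the family studied: >=2 diagnosis claims OR (1 claim AND drug claim).
flagged = [(dx >= 2) or (dx >= 1 and rx) for dx, rx, _ in members]
truth = [t for _, _, t in members]
sens, spec = sens_spec(flagged, truth)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```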

  7. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

    PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities, an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of between-subject and between-activity mean square error suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842
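
    The accuracy metric described above is the root mean square of the prediction-minus-criterion differences, expressed as a percentage of mean daily-living PAEE. The sketch below computes it for invented per-subject values; the numbers are placeholders, not the study's data.

```python
import numpy as np

def rms_percent_error(predicted, criterion):
    """RMS of (predicted - criterion), expressed as % of mean criterion PAEE."""
    predicted = np.asarray(predicted, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    rms = np.sqrt(np.mean((predicted - criterion) ** 2))
    return 100.0 * rms / criterion.mean()

# Hypothetical per-subject daily-living PAEE (kcal): criterion from indirect
# calorimetry vs. a branched-algorithm prediction.
criterion = [620, 540, 710, 480, 655, 590, 730, 505]
predicted = [585, 560, 760, 450, 700, 610, 690, 540]
print(f"{rms_percent_error(predicted, criterion):.1f}% of mean daily-living PAEE")
```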

  8. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  9. Finite element solution for energy conservation using a highly stable explicit integration algorithm

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1972-01-01

    Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied in a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.

  10. Simulations of high-Tc superconductors using the DCA+ algorithm

    NASA Astrophysics Data System (ADS)

    Staar, Peter

    2015-03-01

    For over three decades, the high-Tc cuprates have been a gigantic challenge for condensed matter theory. Even the simplest representation of these materials, i.e. the single-band Hubbard model, is hard to solve quantitatively and its phase diagram is therefore elusive. In this talk, we present the recent algorithmic and implementation advances to the Dynamical Cluster Approximation (DCA). The algorithmic advances allow us to determine self-consistently a continuous self-energy in momentum space, which in turn reduces the cluster-shape dependency of the superconducting transition temperature and thus accelerates the convergence of the latter versus cluster-size. Furthermore, the introduction of the smooth self-energy suppresses artificial correlations and thus reduces the fermionic sign-problem, allowing us to simulate larger clusters at much lower temperatures. By combining these algorithmic improvements with a very efficient GPU accelerated QMC-solver, we are now able to determine the superconducting transition temperature accurately and show that the Cooper-pairs have indeed a d-wave structure, as was predicted by Zhang and Rice.

  11. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculations methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be
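
    The recommended margins have the form of a percentage of the beam range plus a fixed term, so they can be written as a small lookup. The sketch below encodes the values quoted above; the site labels and the example beam range are assumptions for illustration, and this is not clinical software.

```python
# Site-specific range margins quoted in the abstract: (fractional term, fixed term in mm).
SITE_MARGINS = {
    "liver":       (0.028, 1.2),
    "prostate":    (0.028, 1.2),
    "whole brain": (0.031, 1.2),
    "generic":     (0.063, 1.2),   # breast / lung / head-and-neck without case-specific review
}

def range_margin_mm(site, beam_range_mm):
    """Distal range margin (mm) = percentage-of-range term + fixed term."""
    pct, fixed = SITE_MARGINS[site]
    return pct * beam_range_mm + fixed

# Hypothetical 150 mm (water-equivalent) prostate field.
print(f"{range_margin_mm('prostate', 150.0):.1f} mm")   # 0.028 * 150 + 1.2 = 5.4 mm
```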

  12. Fuzzy logic algorithm to extract specific interaction forces from atomic force microscopy data

    NASA Astrophysics Data System (ADS)

    Kasas, Sandor; Riederer, Beat M.; Catsicas, Stefan; Cappella, Brunero; Dietler, Giovanni

    2000-05-01

    The atomic force microscope is not only a very convenient tool for studying the topography of different samples, but it can also be used to measure specific binding forces between molecules. For this purpose, one type of molecule is attached to the tip and the other one to the substrate. Bringing the tip toward the substrate allows the molecules to bind together. Retracting the tip breaks the newly formed bond. The rupture of a specific bond appears in the force-distance curves as a spike from which the binding force can be deduced. In this article we present an algorithm to automatically process force-distance curves in order to obtain bond strength histograms. The algorithm is based on a fuzzy logic approach that permits an evaluation of "quality" for every event and makes the detection procedure much faster compared to a manual selection. In this article, the software has been applied to measure the binding strength between tubulin and microtubule-associated proteins.
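
    The essence of the approach is to flag rupture-like jumps in the retraction curve and grade each candidate with a fuzzy quality rather than a hard threshold. The sketch below is a toy version of that idea: the membership functions, thresholds, and synthetic force curve are all invented and do not reproduce the authors' software.

```python
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear membership: 0 below `lo`, rising to 1 at `hi`."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def find_ruptures(force_pN, min_quality=0.5):
    """Flag rupture-like jumps in a retraction force trace and give each a
    fuzzy quality score; membership functions and thresholds are illustrative."""
    jumps = np.diff(force_pN)                     # a positive jump = sudden force release
    events = []
    for i in range(1, len(jumps)):
        dj = jumps[i]
        if dj <= 0:
            continue
        height_score = ramp(dj, 10.0, 60.0)                   # larger jumps are more credible
        local_noise = np.std(jumps[max(0, i - 5):i]) + 1e-9    # noise level just before the jump
        sharp_score = ramp(dj / local_noise, 2.0, 8.0)         # jump height relative to noise
        quality = min(height_score, sharp_score)               # fuzzy AND via minimum
        if quality >= min_quality:
            events.append((i, dj, quality))
    return events

# Hypothetical retraction curve: adhesion ramp with one specific rupture near sample 60.
force = -np.linspace(0.0, 80.0, 100)
force[60:] += 55.0
force += np.random.default_rng(3).normal(0.0, 1.0, 100)
for idx, jump, q in find_ruptures(force):
    print(f"rupture at sample {idx}: jump {jump:.0f} pN, quality {q:.2f}")
```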

  13. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.

  14. Measuring Specific Heats at High Temperatures

    NASA Technical Reports Server (NTRS)

    Vandersande, Jan W.; Zoltan, Andrew; Wood, Charles

    1987-01-01

    A flash apparatus for measuring thermal diffusivities at temperatures from 300 to 1,000 degrees C was modified so that it also measures the specific heats of samples, to an accuracy of 4 to 5 percent. Both the specific heat and the thermal diffusivity of the sample are measured. A xenon flash emits a pulse of radiation, which is absorbed by a sputtered graphite coating on the sample. The sample temperature is measured with a thermocouple, and the temperature rise due to the pulse is measured by an InSb detector.
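
    The underlying relation is simply that the specific heat follows from the absorbed pulse energy, the sample mass, and the measured temperature rise, cp = Q/(m·ΔT). The sketch below evaluates it for invented values; the numbers are placeholders, not measurements from the apparatus.

```python
def specific_heat(pulse_energy_J, mass_g, delta_T_K):
    """Specific heat from a flash measurement: cp = Q / (m * dT), in J/(g*K)."""
    return pulse_energy_J / (mass_g * delta_T_K)

# Hypothetical numbers: 2.1 J absorbed by a 0.85 g sample producing a 3.6 K rise.
cp = specific_heat(2.1, 0.85, 3.6)
print(f"cp = {cp:.3f} J/(g*K)")   # about 0.69 J/(g*K)
```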

  15. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    PubMed Central

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-01-01

    The purpose of this study was to investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for 7 disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head & neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and Monte Carlo algorithms to obtain the average range differences (ARD) and root mean square deviation (RMSD) for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation (ADD) of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing Monte Carlo dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head & neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head & neck treatments. We conclude that currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific

  16. High-Speed General Purpose Genetic Algorithm Processor.

    PubMed

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GAs procedure. Hence, we designed a digital CMOS implementation of GA in [Formula: see text] process. The proposed processor is not bounded to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through the bit string length extension of individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GAs procedure can be run on several connected processors simultaneously.

  17. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers; thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science, molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  18. An effective algorithm for the generation of patient-specific Purkinje networks in computational electrocardiology

    NASA Astrophysics Data System (ADS)

    Palamara, Simone; Vergara, Christian; Faggiano, Elena; Nobile, Fabio

    2015-02-01

    The Purkinje network is responsible for the fast and coordinated distribution of the electrical impulse in the ventricle that triggers its contraction. Therefore, it is necessary to model its presence to obtain an accurate patient-specific model of the ventricular electrical activation. In this paper, we present an efficient algorithm for the generation of a patient-specific Purkinje network, driven by measures of the electrical activation acquired on the endocardium. The proposed method provides a correction of an initial network, generated by means of a fractal law, and it is based on the solution of Eikonal problems both in the muscle and in the Purkinje network. We present several numerical results both in an ideal geometry with synthetic data and in a real geometry with patient-specific clinical measures. These results highlight an improvement of the accuracy provided by the patient-specific Purkinje network with respect to the initial one. In particular, a cross-validation test shows an accuracy increase of 19% when only the 3% of the total points are used to generate the network, whereas an increment of 44% is observed when a random noise equal to 20% of the maximum value of the clinical data is added to the measures.

  19. The McCollough Facial Rejuvenation System: expanding the scope of a condition-specific algorithm.

    PubMed

    McCollough, E Gaylon; Ha, Chi D

    2012-02-01

    The ideal facial rejuvenation algorithm is comprised of an appropriate combination of procedures, thoughtfully chosen from an assortment of reliable alternatives, that when skillfully performed provide both short- and long-term enhancement to the undesirable conditions of aging that exist at the time of treatment. In 2010, the senior author published the first scientific article in which a condition-specific classification system and a treatment plan algorithm were applied to the discipline of facial rejuvenation. In the landmark article, the senior author reviewed his surgical experience of more than 5000 face-lifts and grouped patients into five major categories (or stages), based upon the extent of aging identified in various regions of the face and neck and the procedures performed to correct them. The criteria (that have now been suggested on a facial aging worksheet) were recorded in a data blank comprised of a first-generation worksheet. Once the data were collected--and using algorithmic charts for each region and/or facial feature--the most appropriate plan of action for a given patient was created. The sole objective in sharing the senior author's methodology was to launch a scholarly discussion among physicians and surgeons involved in the various disciplines that provide rejuvenation procedures on the face, head, and neck. From such a debate would, hopefully, emerge a definitive algorithmic system--one based squarely on the venerable ethics of medicine, coupled with the appropriate application of and skillful performance of the fundamental principles of surgery. A single, science-based system would restore order to a noble discipline, currently being challenged by narcissism, gimmickry, and commercialization. The implementation of a system rooted in universal truths would require its advocates to agree upon a common "language," the implementation of which allows aesthetically focused surgeons to share both new ideas and time-tested experiences. More

  20. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea.
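
    To make the row-slice idea concrete, the following is a minimal sketch (not the NASA implementation; the threshold, array names and synthetic image are illustrative): each horizontal row of a grayscale ROI is thresholded to find the dark pupil chord, and the chord midpoints are averaged into a centre estimate.

      import numpy as np

      def pupil_center_from_row_slices(roi, dark_thresh=60):
          """Estimate the pupil centre of a grayscale ROI (2-D uint8 array) by scanning
          horizontal rows for dark-pixel runs, in the spirit of the row-slice symmetry
          idea described above (illustrative only)."""
          xs, ys = [], []
          for y, row in enumerate(roi):
              dark = np.where(row < dark_thresh)[0]       # columns darker than the pupil threshold
              if dark.size < 2:
                  continue                                # this row does not cross the pupil
              left, right = dark[0], dark[-1]
              xs.append(0.5 * (left + right))             # midpoint of the dark chord
              ys.append(y)
          if not xs:
              return None
          return float(np.mean(xs)), float(np.mean(ys))   # (x, y) centroid estimate

      # Example on a synthetic eye image: a dark disc on a brighter background.
      yy, xx = np.mgrid[0:120, 0:160]
      roi = np.full((120, 160), 200, dtype=np.uint8)
      roi[(xx - 80) ** 2 + (yy - 60) ** 2 < 20 ** 2] = 30  # synthetic "pupil"
      print(pupil_center_from_row_slices(roi))             # close to (80.0, 60.0)

    An actual tracker would also locate the corneal-reflection centroid and run the scan only inside the tracked ROI, as described above.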

  1. GPQuest: A Spectral Library Matching Algorithm for Site-Specific Assignment of Tandem Mass Spectra to Intact N-glycopeptides.

    PubMed

    Toghi Eshghi, Shadi; Shah, Punit; Yang, Weiming; Li, Xingde; Zhang, Hui

    2015-01-01

    Glycoprotein changes occur in not only protein abundance but also the occupancy of each glycosylation site by different glycoforms during biological or pathological processes. Recent advances in mass spectrometry instrumentation and techniques have facilitated analysis of intact glycopeptides in complex biological samples by allowing the users to generate spectra of intact glycopeptides with glycans attached to each specific glycosylation site. However, assigning these spectra, leading to identification of the glycopeptides, is challenging. Here, we report an algorithm, named GPQuest, for site-specific identification of intact glycopeptides using higher-energy collisional dissociation (HCD) fragmentation of complex samples. In this algorithm, a spectral library of glycosite-containing peptides in the sample was built by analyzing the isolated glycosite-containing peptides using HCD LC-MS/MS. Spectra of intact glycopeptides were selected by using glycan oxonium ions as signature ions for glycopeptide spectra. These oxonium-ion-containing spectra were then compared with the spectral library generated from glycosite-containing peptides, resulting in assignment of each intact glycopeptide MS/MS spectrum to a specific glycosite-containing peptide. The glycan occupying each glycosite was determined by matching the mass difference between the precursor ion of intact glycopeptide and the glycosite-containing peptide to a glycan database. Using GPQuest, we analyzed LC-MS/MS spectra of protein extracts from prostate tumor LNCaP cells. Without enrichment of glycopeptides from global tryptic peptides and at a false discovery rate of 1%, 1008 glycan-containing MS/MS spectra were assigned to 769 unique intact N-linked glycopeptides, representing 344 N-linked glycosites with 57 different N-glycans. Spectral library matching using GPQuest assigns the HCD LC-MS/MS generated spectra of intact glycopeptides in an automated and high-throughput manner. Additionally, spectral library
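
    The two filtering and matching steps described above can be sketched as follows (a simplified illustration, not the GPQuest code: the oxonium m/z list, glycan mass table, and tolerances are example values).

      # Illustrative sketch of the two filtering/matching steps described above;
      # this is not the GPQuest implementation, and the masses/tolerances are examples.
      OXONIUM_MZ = [204.0867, 366.1395, 292.1027]   # HexNAc, HexHexNAc, NeuAc oxonium ions
      GLYCAN_MASSES = {                              # glycan residue-mass sums (Da), illustrative subset
          "HexNAc2Hex3": 892.3172,
          "HexNAc2Hex5": 1216.4228,
          "HexNAc4Hex5NeuAc2": 2204.7724,
      }

      def has_oxonium_ions(peak_mzs, tol=0.01, min_hits=2):
          """Keep a spectrum only if it shows at least `min_hits` oxonium signature ions."""
          hits = sum(any(abs(mz - ox) <= tol for mz in peak_mzs) for ox in OXONIUM_MZ)
          return hits >= min_hits

      def assign_glycan(precursor_mass, peptide_mass, tol=0.02):
          """Match the precursor-minus-peptide mass difference to a glycan database entry."""
          delta = precursor_mass - peptide_mass
          matches = [(name, abs(delta - m)) for name, m in GLYCAN_MASSES.items() if abs(delta - m) <= tol]
          return min(matches, key=lambda t: t[1])[0] if matches else None

      # Toy usage: a spectrum with two oxonium ions, plus precursor and library peptide masses.
      spectrum = [138.055, 204.087, 366.140, 1200.5]
      if has_oxonium_ions(spectrum):
          print(assign_glycan(precursor_mass=3093.30, peptide_mass=1876.88))  # -> "HexNAc2Hex5"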

  2. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
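
    The "consensus scoring" idea mentioned above can be illustrated in a few lines (the engine names and peptide assignments below are made up):

      # Minimal sketch of consensus scoring across search engines (illustrative data only):
      # a peptide-spectrum match is accepted when at least two engines report the same peptide.
      from collections import Counter

      def consensus_ids(results_per_engine, min_engines=2):
          """results_per_engine: dict engine -> {spectrum_id: peptide}. Returns accepted {spectrum_id: peptide}."""
          votes = Counter()
          for assignments in results_per_engine.values():
              for spectrum_id, peptide in assignments.items():
                  votes[(spectrum_id, peptide)] += 1
          return {sid: pep for (sid, pep), n in votes.items() if n >= min_engines}

      results = {
          "engine_A": {"scan_101": "LVNEVTEFAK", "scan_102": "YLYEIAR"},
          "engine_B": {"scan_101": "LVNEVTEFAK", "scan_102": "AEFVEVTK"},
      }
      print(consensus_ids(results))   # only scan_101 survives the two-engine consensus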

  3. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  4. High specific activity platinum-195m

    SciTech Connect

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-10-12

    A new composition of matter includes ¹⁹⁵ᵐPt characterized by a specific activity of at least 30 mCi/mg Pt, generally made by a method that includes the steps of: exposing ¹⁹³Ir to a flux of neutrons sufficient to convert a portion of the ¹⁹³Ir to ¹⁹⁵ᵐPt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce ¹⁹⁵ᵐPt.

  5. Parallel algorithms for high-speed SAR processing

    NASA Astrophysics Data System (ADS)

    Mallorqui, Jordi J.; Bara, Marc; Broquetas, Antoni; Wis, Mariano; Martinez, Antonio; Nogueira, Leonardo; Moreno, Victoriano

    1998-11-01

    The mass production of SAR products and their use in monitoring emergency situations (oil spill detection, floods, etc.) require high-speed SAR processors. Two different parallel strategies for near real time SAR processing based on a multiblock version of the Chirp Scaling Algorithm (CSA) have been studied. The first one is useful for small companies that would like to reduce computation times with no extra investment: it uses a cluster of heterogeneous UNIX workstations as a parallel computer. The second one is oriented to institutions that have to process large amounts of data in short times and can afford the cost of large parallel computers. In both cases, the parallel implementation has reduced the computational times compared with the sequential versions.

  6. Sensitivity of snow density and specific surface area measured by microtomography to different image processing algorithms

    NASA Astrophysics Data System (ADS)

    Hagenmuller, Pascal; Matzl, Margret; Chambon, Guillaume; Schneebeli, Martin

    2016-05-01

    Microtomography can measure the X-ray attenuation coefficient in a 3-D volume of snow with a spatial resolution of a few microns. In order to extract quantitative characteristics of the microstructure, such as the specific surface area (SSA), from these data, the greyscale image first needs to be segmented into a binary image of ice and air. Different numerical algorithms can then be used to compute the surface area of the binary image. In this paper, we report on the effect of commonly used segmentation and surface area computation techniques on the evaluation of density and specific surface area. The evaluation is based on a set of 38 X-ray tomographies of different snow samples without impregnation, scanned with an effective voxel size of 10 and 18 μm. We found that different surface area computation methods can induce relative variations up to 5 % in the density and SSA values. Regarding segmentation, similar results were obtained by sequential and energy-based approaches, provided the associated parameters were correctly chosen. The voxel size also appears to affect the values of density and SSA, but because images with the higher resolution also show the higher noise level, it was not possible to draw a definitive conclusion on this effect of resolution.
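
    As an illustration of how such quantities are derived from the segmented volume, the sketch below computes density and SSA from a binary ice/air image using the crudest surface estimator, voxel-face counting (array names, voxel size and the ice density constant are assumptions; face counting systematically overestimates the area of tilted surfaces, which is one reason different estimators disagree at the few-percent level reported above).

      import numpy as np

      def density_and_ssa(binary, voxel_size, rho_ice=917.0):
          """Snow density (kg m^-3) and SSA (m^2 kg^-1) from a binary ice/air volume.
          Surface area is estimated by counting exposed voxel faces, the simplest of the
          estimators discussed above, shown only to illustrate the computation chain."""
          binary = binary.astype(bool)
          ice_voxels = binary.sum()
          density = rho_ice * ice_voxels / binary.size
          # Count ice/air face transitions along each axis (plus ice faces on the volume border).
          faces = 0
          for axis in range(3):
              faces += np.count_nonzero(np.diff(binary.astype(np.int8), axis=axis))
              faces += np.take(binary, [0, -1], axis=axis).sum()
          surface_area = faces * voxel_size ** 2                   # m^2
          ice_mass = rho_ice * ice_voxels * voxel_size ** 3        # kg
          return density, surface_area / ice_mass

      # Toy example: a 200 micron cube of "ice" inside a 40^3 volume with 10 micron voxels.
      vol = np.zeros((40, 40, 40), dtype=bool)
      vol[10:30, 10:30, 10:30] = True
      print(density_and_ssa(vol, voxel_size=10e-6))   # about (114.6 kg/m^3, 32.7 m^2/kg)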

  7. Longitudinal Algorithms to Estimate Cardiorespiratory Fitness: Associations with Nonfatal Cardiovascular Disease and Disease-Specific Mortality

    PubMed Central

    Artero, Enrique G.; Jackson, Andrew S.; Sui, Xuemei; Lee, Duck-chul; O’Connor, Daniel P.; Lavie, Carl J.; Church, Timothy S.; Blair, Steven N.

    2014-01-01

    Objective To predict risk for non-fatal cardiovascular disease (CVD) and disease-specific mortality using CRF algorithms that do not involve exercise testing. Background Cardiorespiratory fitness (CRF) is not routinely measured, as it requires trained personnel and specialized equipment. Methods Participants were 43,356 adults (21% women) from the Aerobics Center Longitudinal Study followed between 1974 and 2003. Estimated CRF was based on sex, age, body mass index, waist circumference, resting heart rate, physical activity level and smoking status. Actual CRF was measured by a maximal treadmill test. Results During a median follow-up of 14.5 years, 1,934 deaths occurred, 627 due to CVD. In a sub-sample of 18,095 participants, 1,049 cases of non-fatal CVD events were ascertained. After adjusting for potential confounders, both measured CRF and estimated CRF were inversely associated with risk of all-cause mortality, CVD mortality and non-fatal CVD incidence in men, and with all-cause mortality and non-fatal CVD in women. The risk reduction per 1-metabolic equivalent (MET) increase ranged approximately from 10 to 20 %. Measured CRF had a slightly better discriminative ability (c-statistic) than estimated CRF, and the net reclassification improvement (NRI) of measured CRF vs. estimated CRF was 12.3% in men (p<0.05) and 19.8% in women (p<0.001). Conclusions These algorithms utilize information routinely collected to obtain an estimate of CRF that provides a valid indication of health status. In addition to identifying people at risk, this method can provide more appropriate exercise recommendations that reflect initial CRF levels. PMID:24703924
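
    A hedged sketch of how a non-exercise CRF estimator of this kind can be fit is given below; the predictor set mirrors the abstract, but the data and coefficients are synthetic and the published ACLS equations are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      X = np.column_stack([
          np.ones(n),                      # intercept
          rng.integers(0, 2, n),           # sex (0 = female, 1 = male)
          rng.uniform(20, 80, n),          # age, years
          rng.uniform(18, 40, n),          # body mass index, kg/m^2
          rng.uniform(60, 120, n),         # waist circumference, cm
          rng.uniform(50, 100, n),         # resting heart rate, beats/min
          rng.integers(0, 5, n),           # physical activity category
          rng.integers(0, 2, n),           # current smoker (0/1)
      ])
      # Synthetic "measured" CRF (METs) with coefficient signs loosely matching such studies.
      true_beta = np.array([18.0, 2.5, -0.10, -0.15, -0.02, -0.03, 0.8, -1.0])
      mets = X @ true_beta + rng.normal(0, 1.2, n)

      beta_hat, *_ = np.linalg.lstsq(X, mets, rcond=None)   # fitted non-exercise estimator
      estimated_crf = X @ beta_hat                          # "estimated CRF" for each participant
      print(np.round(beta_hat, 2))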

  8. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using the Bézier curve and a B-spline. Numerical simulation based on a Reynolds averaged Navier-Stokes (RANS) turbulence model is done in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while initial samples are selected according to an orthogonal array. Then global Pareto-optimal solutions are obtained and analysed. The results show that unexpected flow structures, such as the secondary flow in the meridional plane, have diminished or vanished in the optimized pump.

  9. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
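
    A generic form of the stochastic-approximation step at the heart of such an algorithm (standard Robbins-Monro notation, not necessarily the paper's exact formulation) is

        \theta_{k+1} = \theta_k + \gamma_k \, \tilde{s}_k(\theta_k), \qquad \gamma_k > 0, \quad \sum_k \gamma_k = \infty, \quad \sum_k \gamma_k^2 < \infty,

    where \tilde{s}_k(\theta_k) is a Monte Carlo approximation of the complete-data score computed from the Metropolis-Hastings draws of the latent traits, and the conditions on the gain sequence \gamma_k are what deliver convergence with probability one.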

  10. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The scheduling time of this algorithm is almost independent of the operators' mobility range, as can be seen from the benchmark example (fifth-order elliptic wave filter) presented, in which the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.
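
    A minimal resource-constrained list-scheduling sketch in the spirit of this approach (priority given by operator mobility, a fixed number of functional units per type) is shown below; the data-flow graph and unit counts are made up and this is not the paper's exact algorithm.

      # Operations with precedence edges are scheduled cycle by cycle, lowest mobility first,
      # subject to a fixed number of functional units per operation type (illustrative only).
      def list_schedule(ops, deps, units, mobility):
          """ops: {name: type}; deps: {name: [predecessors]}; units: {type: count};
          mobility: {name: slack} used as scheduling priority (smaller = more urgent)."""
          done, schedule, cycle = set(), {}, 0
          while len(done) < len(ops):
              busy = {t: 0 for t in units}
              ready = sorted(
                  (name for name in ops
                   if name not in done and all(p in done for p in deps.get(name, []))),
                  key=lambda name: mobility[name])
              scheduled_now = []
              for name in ready:
                  t = ops[name]
                  if busy[t] < units[t]:
                      busy[t] += 1
                      schedule[name] = cycle
                      scheduled_now.append(name)
              done.update(scheduled_now)
              cycle += 1
          return schedule

      ops = {"a": "mul", "b": "mul", "c": "add", "d": "add"}
      deps = {"c": ["a", "b"], "d": ["c"]}
      units = {"mul": 1, "add": 1}
      mobility = {"a": 0, "b": 1, "c": 0, "d": 0}
      print(list_schedule(ops, deps, units, mobility))   # e.g. {'a': 0, 'b': 1, 'c': 2, 'd': 3}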

  11. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    DOE PAGES

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
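
    The point-wise classification workflow can be sketched as follows (synthetic features and labels, assuming scikit-learn is available; the actual flow features and training database are those described above):

      # Sketch of point-wise uncertainty classification with a random forest (synthetic data).
      # Features might be local nondimensional flow quantities; labels mark where a RANS
      # assumption (e.g. the Boussinesq hypothesis) broke down in DNS/LES-validated cases.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      n_points = 2000
      features = rng.normal(size=(n_points, 4))          # e.g. strain/rotation ratios, wall distance, ...
      labels = (features[:, 0] + 0.5 * features[:, 1] > 1.0).astype(int)   # 1 = high uncertainty (synthetic)

      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)

      new_rans_points = rng.normal(size=(5, 4))          # feature vectors from a new RANS solution
      print(clf.predict(new_rans_points))                # point-by-point high/low-uncertainty flags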

  12. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    SciTech Connect

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.

  13. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over

  14. High School Educational Specifications: Facilities Planning Standards. Edition I.

    ERIC Educational Resources Information Center

    Jefferson County School District R-1, Denver, CO.

    The Jefferson County School District (Colorado) has developed a manual of high school specifications for Design Advisory Groups and consultants to use for planning and designing the district's high school facilities. The specifications are provided to help build facilities that best meet the educational needs of the students to be served.…

  15. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  16. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77-percent faster than Cray's assembly-coded library routine.

  17. General purpose versus special algorithms for high-speed flows with shocks

    NASA Astrophysics Data System (ADS)

    Sai, B. V. K. Satya; Zienkiewicz, O. C.; Manzari, M. T.; Lyra, P. R. M.; Morgan, K.

    1998-01-01

    In this paper we compare the performance of a new general algorithm developed recently, in application to problems of high Mach number flows, with the performance of specialised algorithms applicable only to such flows. The results for most examples compare well, with the largest difference occurring for the high Mach number compression corner case.

  18. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve the measurement precision, the internal parameters of the camera should be accurately calibrated. Therefore, a high-accuracy camera calibration algorithm is proposed, based on images of planar or three-dimensional targets. Using the algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, Tsai's general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.

  19. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
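
    A much-simplified sketch of the spatial pass conveys the idea: search circles are laid out on coarse latitude strides, with the longitudinal stride widened toward the poles so the circles keep a roughly constant physical size. The radius, threshold and the test field below are illustrative, not the published implementation.

      import numpy as np

      def stride_search(vorticity, lats, lons, radius_deg=4.0, threshold=1e-4):
          """Return (lat, lon) of local maxima exceeding `threshold` within each search circle."""
          candidates = []
          lat_centres = np.arange(lats.min() + radius_deg, lats.max() - radius_deg + 1e-9, radius_deg)
          for lat_c in lat_centres:
              # Widen the longitudinal stride at high latitude so circles stay ~equal size on the sphere.
              lon_stride = radius_deg / max(np.cos(np.deg2rad(lat_c)), 0.1)
              for lon_c in np.arange(lons.min(), lons.max(), lon_stride):
                  in_circle = (np.abs(lats[:, None] - lat_c) <= radius_deg) & \
                              (np.abs(lons[None, :] - lon_c) <= lon_stride)
                  if not in_circle.any():
                      continue
                  local = np.where(in_circle, vorticity, -np.inf)
                  i, j = np.unravel_index(np.argmax(local), local.shape)
                  if vorticity[i, j] > threshold:
                      candidates.append((float(lats[i]), float(lons[j])))
          return sorted(set(candidates))

      lats = np.linspace(-90, 90, 181)
      lons = np.linspace(0, 359, 360)
      field = np.zeros((lats.size, lons.size))
      field[150, 200] = 5e-4                      # a synthetic "storm" near 60N, 200E
      print(stride_search(field, lats, lons))     # [(60.0, 200.0)]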

  20. Stride search: A general algorithm for storm detection in high resolution climate data

    SciTech Connect

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  1. Randomized algorithms for stability and robustness analysis of high-speed communication networks.

    PubMed

    Alpcan, Tansu; Başar, Tamer; Tempo, Roberto

    2005-09-01

    This paper initiates a study toward developing and applying randomized algorithms for stability of high-speed communication networks. The focus is on congestion and delay-based flow controllers for sources, which are "utility maximizers" for individual users. First, we introduce a nonlinear algorithm for such source flow controllers, which uses as feedback aggregate congestion and delay information from bottleneck nodes of the network, and depends on a number of parameters, among which are link capacities, user preference for utility, and pricing. We then linearize this nonlinear model around its unique equilibrium point and perform a robustness analysis for a special symmetric case with a single bottleneck node. The "symmetry" here captures the scenario when certain utility and pricing parameters are the same across all active users, for which we derive closed-form necessary and sufficient conditions for stability and robustness under parameter variations. In addition, the ranges of values for the utility and pricing parameters for which stability is guaranteed are computed exactly. These results also admit counterparts for the case when the pricing parameters vary across users, but the utility parameter values are still the same. In the general nonsymmetric case, when closed-form derivation is not possible, we construct specific randomized algorithms which provide a probabilistic estimate of the local stability of the network. In particular, we use Monte Carlo as well as quasi-Monte Carlo techniques for the linearized model. The results obtained provide a complete analysis of congestion control algorithms for internet style networks with a single bottleneck node as well as for networks with general random topologies. PMID:16252829

  2. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  3. SU-E-T-305: Study of the Eclipse Electron Monte Carlo Algorithm for Patient Specific MU Calculations

    SciTech Connect

    Wang, X; Qi, S; Agazaryan, N; DeMarco, J

    2014-06-01

    Purpose: To evaluate the Eclipse electron Monte Carlo (eMC) algorithm based on patient-specific monitor unit (MU) calculations, and to propose a new factor which quantitatively predicts the discrepancy of MUs between the eMC algorithm and hand calculations. Methods: Electron treatments were planned for 61 patients on Eclipse (Version 10.0) using the eMC algorithm for Varian TrueBeam linear accelerators. For each patient, the same treatment beam angle was kept for a point dose calculation at dmax performed with the reference condition, which used an open beam with a 15×15 cm² cone at 100 cm SSD. A patient-specific correction factor (PCF) was obtained as the ratio between this point dose and the calibration dose, which is 1 cGy per MU delivered at dmax. The hand calculation results were corrected by the PCFs and compared with MUs from the treatment plans. Results: The MUs from the treatment plans were on average (7.1±6.1)% higher than the hand calculations. The average MU difference between the corrected hand calculations and the eMC treatment plans was (0.07±3.48)%. A correlation coefficient of 0.8 was found between (1-PCF) and the percentage difference between the treatment plan and hand calculations. Most outliers were treatment plans with small beam openings (< 4 cm) and low energy beams (6 and 9 MeV). Conclusion: For CT-based patient treatment plans, the eMC algorithm tends to generate a larger MU than hand calculations. Caution should be taken for eMC patient plans with small field sizes and low energy beams. We hypothesize that the PCF ratio reflects the influence of patient surface curvature and tissue inhomogeneity on the patient-specific percent depth dose (PDD) curve and the MU calculations in the eMC algorithm.
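
    The correction itself is simple arithmetic; a worked example with made-up numbers is shown below (one plausible way to apply the PCF, not a restatement of the study's exact convention).

      # Worked example of the patient-specific correction factor (PCF) idea with made-up numbers.
      # Reference condition: 1 cGy per MU at dmax for an open 15x15 cm2 cone at 100 cm SSD.
      calibration_dose_per_mu = 1.0           # cGy/MU at dmax, reference condition
      emc_point_dose_per_mu = 0.93            # cGy/MU reported by eMC at dmax for this beam angle (example)
      pcf = emc_point_dose_per_mu / calibration_dose_per_mu

      prescribed_dose = 200.0                                        # cGy for this fraction (example)
      hand_calc_mu = prescribed_dose / calibration_dose_per_mu       # uncorrected hand calculation
      corrected_mu = hand_calc_mu / pcf                              # PCF-corrected hand calculation
      print(round(hand_calc_mu), round(corrected_mu))                # 200 vs ~215 MU, a ~7% increase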

  4. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages such as large block delay, quantization error due to computation of large size transforms, and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different orders of filter and partition size is presented. Systematic computer simulations are carried out for both the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.

  5. A new adaptive GMRES algorithm for achieving high accuracy

    SciTech Connect

    Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.

    1996-12-31

    GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
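
    The adapt-the-restart idea can be sketched around an off-the-shelf GMRES, here SciPy's, by enlarging k whenever an attempt fails to reach the target residual (an increase-only illustration, not the paper's Fortran 90 implementation; the test matrix is a made-up diagonally dominant example).

      import numpy as np
      from scipy.sparse import random as sparse_random, identity
      from scipy.sparse.linalg import gmres

      def adaptive_gmres(A, b, k0=5, k_max=80, target=1e-5):
          """Increase-only adaptation of the GMRES restart value (illustrative sketch)."""
          k, x = k0, None
          while k <= k_max:
              x, info = gmres(A, b, x0=x, restart=k, maxiter=50)   # 50 restart cycles per attempt
              residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
              if residual <= target:
                  return x, k, residual
              k *= 2                                               # convergence stalled: enlarge the restart value
          return x, k // 2, residual

      n = 500
      A = (identity(n) + 0.2 * sparse_random(n, n, density=0.01, random_state=0)).tocsr()
      b = np.ones(n)
      x, k_used, res = adaptive_gmres(A, b)
      print(k_used, res)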

  6. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  7. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Flight times and deliverable masses for electric and fusion propulsion systems are difficult to approximate. Numerical integration is required for these continuous thrust systems. Many scientists are not equipped with the tools and expertise to conduct interplanetary and interstellar trajectory analysis for their concepts. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. An analytical method derived in the companion paper was also evaluated. The accuracy of this method is discussed in the paper.

  8. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is suggesting possible ways to attack the problem.

  9. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

    Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid multi-atlas registration, active contours and knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta and aortic root are located by multi-atlas registration followed by active contours refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications from these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001, volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30, volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
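
    For reference, the per-lesion Agatston computation that sits at the end of such a pipeline can be sketched as below; the 130 HU threshold and the 1-4 density weights are the standard Agatston definitions, while the array names, pixel area and minimum lesion area are illustrative and the paper's multi-atlas segmentation is not reproduced.

      import numpy as np
      from scipy import ndimage

      def agatston_weight(max_hu):
          # Standard Agatston density weighting by the lesion's peak attenuation.
          if max_hu >= 400: return 4
          if max_hu >= 300: return 3
          if max_hu >= 200: return 2
          return 1

      def agatston_slice_score(hu_slice, pixel_area_mm2, min_area_mm2=1.0):
          mask = hu_slice >= 130                              # candidate calcium pixels
          labels, n = ndimage.label(mask)                     # connected component labeling
          score = 0.0
          for lesion in range(1, n + 1):
              lesion_mask = labels == lesion
              area = lesion_mask.sum() * pixel_area_mm2
              if area < min_area_mm2:                         # ignore tiny specks (noise)
                  continue
              score += area * agatston_weight(hu_slice[lesion_mask].max())
          return score

      slice_hu = np.zeros((64, 64))
      slice_hu[20:24, 20:24] = 450                            # a dense synthetic calcification
      print(agatston_slice_score(slice_hu, pixel_area_mm2=0.25))   # 16 px * 0.25 mm2 * weight 4 = 16.0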

  10. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based non-uniformity correction (CBNUC) algorithms and scene-based non-uniformity correction (SBNUC) algorithms. As the non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction; meanwhile, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based solely on an FPGA, has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both stripe non-uniformity and ripple non-uniformity.
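
    The temporal high-pass component alone can be sketched as follows (an illustration only: the grayscale-mapping step and the FPGA implementation described above are not reproduced, and all constants are assumptions). Each pixel's slowly varying offset is tracked with a recursive low-pass filter and subtracted, so fixed-pattern non-uniformity is removed while scene content passes through.

      import numpy as np

      class TemporalHighPassNUC:
          def __init__(self, shape, time_constant=100.0):
              self.alpha = 1.0 / time_constant     # low-pass update weight
              self.offset = np.zeros(shape)        # running per-pixel offset estimate

          def correct(self, frame):
              self.offset += self.alpha * (frame - self.offset)      # recursive per-pixel low-pass
              return frame - self.offset + self.offset.mean()        # high-pass, keep the global level

      # Toy usage: frames with a fixed pattern plus a spatially uniform, time-varying scene level.
      rng = np.random.default_rng(0)
      fixed_pattern = rng.normal(0, 5, (128, 160))
      nuc = TemporalHighPassNUC((128, 160))
      for t in range(300):
          scene = 100 + 10 * np.sin(0.05 * t)
          corrected = nuc.correct(scene + fixed_pattern)
      print(round(float(corrected.std()), 2))       # residual non-uniformity shrinks over time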

  11. Performing target specific band reduction using artificial neural networks and assessment of its efficacy using various target detection algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.

    2016-04-01

    Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, detection of landmines, and target detection. Major issues in target detection using HSI are spectral variability, noise, small size of the target, huge data dimensions, high computation cost, and complex backgrounds. Many of the popular detection algorithms do not work for difficult targets (e.g., small or camouflaged ones) and may result in high false alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is therefore crucial for the accurate interpretation of hyperspectral imagery. Using standard libraries to study a target's spectral behaviour has the limitation that the targets are measured in different environmental conditions than in the application. This study uses the spectral data of the same target that was used during collection of the HSI image. This paper analyzes target spectra in such a way that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) has been used to identify the spectral range for reducing data, and its efficacy for improving target detection is further verified. The ANN results propose discriminating band ranges for the targets; these ranges were further used to perform target detection using four popular spectral matching target detection algorithms. The results of the algorithms were then analyzed using ROC curves to evaluate the effectiveness of the ranges suggested by the ANN over the full spectrum for detection of the desired targets. In addition, a comparative assessment of the algorithms is also performed using ROC curves.

  12. Specific volume coupling and convergence properties in hybrid particle/finite volume algorithms for turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Popov, Pavel P.; Wang, Haifeng; Pope, Stephen B.

    2015-08-01

    We investigate the coupling between the two components of a Large Eddy Simulation/Probability Density Function (LES/PDF) algorithm for the simulation of turbulent reacting flows. In such an algorithm, the Large Eddy Simulation (LES) component provides a solution to the hydrodynamic equations, whereas the Lagrangian Monte Carlo Probability Density Function (PDF) component solves for the PDF of chemical compositions. Special attention is paid to the transfer of specific volume information from the PDF to the LES code: the specific volume field contains probabilistic noise due to the nature of the Monte Carlo PDF solution, and thus the use of the specific volume field in the LES pressure solver needs careful treatment. Using a test flow based on the Sandia/Sydney Bluff Body Flame, we determine the optimal strategy for specific volume feedback. Then, the overall second-order convergence of the entire LES/PDF procedure is verified using a simple vortex ring test case, with special attention being given to bias errors due to the number of particles per LES Finite Volume (FV) cell.

  13. Enhanced detection criteria in implantable cardioverter defibrillators: sensitivity and specificity of the stability algorithm at different heart rates.

    PubMed

    Kettering, K; Dörnberger, V; Lang, R; Vonthein, R; Suchalla, R; Bosch, R F; Mewis, C; Eigenberger, B; Kühlkamp, V

    2001-09-01

    The lack of specificity in the detection of ventricular tachyarrhythmias remains a major clinical problem in the therapy with ICDs. The stability criterion has been shown to be useful in discriminating ventricular tachyarrhythmias characterized by a small variation in cycle lengths from AF with rapid ventricular response presenting a higher degree of variability of RR intervals. But RR variability decreases with increasing heart rate during AF. Therefore, the aim of the study was to determine if the sensitivity and specificity of the STABILITY algorithm for spontaneous tachyarrhythmias is related to ventricular rate. Forty-two patients who had received an ICD (CPI Ventak Mini I, II, III or Ventak AV) were enrolled in the study. Two hundred ninety-eight episodes of AF with rapid ventricular response and 817 episodes of ventricular tachyarrhythmias were analyzed. Sensitivity and specificity in the detection of ventricular tachyarrhythmias were calculated at different heart rates. When a stability value of 30 ms was programmed the result was a sensitivity of 82.7% and a specificity of 91.4% in the detection of slow ventricular tachyarrhythmias (heart rate < 150 beats/min). When faster ventricular tachyarrhythmias with rates between 150 and 169 beats/min (170-189 beats/min) were analyzed, a stability value of 30 ms provided a sensitivity of 94.5% (94.7%) and a specificity of 76.5% (54.0%). For arrhythmia episodes > or = 190 beats/min, the same stability value resulted in a sensitivity of 78.2% and a specificity of 41.0%. Even when other stability values were taken into consideration, no acceptable sensitivity/specificity values could be obtained in this subgroup. RR variability decreases with increasing heart rate during AF while RR variability remains almost constant at different cycle lengths during ventricular tachyarrhythmias. Thus, acceptable performance of the STABILITY algorithm appears to be limited to ventricular rate zones < 170 beats/min.
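
    How such a stability criterion is evaluated can be illustrated in a few lines (window length, classification rule and the RR sequences below are illustrative, not the device firmware): if the spread of recent cycle lengths stays below the programmed stability value, the rhythm is treated as a stable ventricular tachyarrhythmia rather than irregularly conducted AF.

      # Illustrative evaluation of a stability criterion on RR intervals (milliseconds).
      def is_stable(rr_intervals_ms, stability_ms=30, window=8):
          recent = rr_intervals_ms[-window:]
          return (max(recent) - min(recent)) <= stability_ms

      vt_like = [350, 352, 348, 351, 349, 350, 353, 347]    # monomorphic VT: nearly constant cycle length
      af_like = [360, 410, 330, 390, 300, 420, 350, 380]    # conducted AF: irregular cycle lengths
      print(is_stable(vt_like), is_stable(af_like))          # True False

    The study's finding is that this separation degrades at high rates, because the RR spread during conducted AF itself shrinks as the ventricular rate increases.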

  14. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
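
    The parallelization pattern, distributing the expensive assignment step across cores and updating the centroids after gathering the partial results, can be sketched as follows (the original is a Java, transactional-memory-style implementation; this Python/multiprocessing version only illustrates the same idea on synthetic data).

      import numpy as np
      from multiprocessing import Pool

      def assign_chunk(args):
          chunk, centroids = args
          d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
          return d.argmin(axis=1)                     # nearest-centroid label for each point in the chunk

      def parallel_kmeans(data, k=3, iters=10, workers=4):
          rng = np.random.default_rng(0)
          centroids = data[rng.choice(len(data), k, replace=False)]
          chunks = np.array_split(data, workers)      # the O(n*k) assignment step is split across cores
          with Pool(workers) as pool:
              for _ in range(iters):
                  labels = np.concatenate(pool.map(assign_chunk, [(c, centroids) for c in chunks]))
                  centroids = np.array([data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                                        for j in range(k)])
          return centroids, labels

      if __name__ == "__main__":
          data = np.vstack([np.random.randn(500, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
          centroids, labels = parallel_kmeans(data)
          print(np.round(centroids, 1))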

  15. Double images encryption method with resistance against the specific attack based on an asymmetric algorithm.

    PubMed

    Wang, Xiaogang; Zhao, Daomu

    2012-05-21

    A double-image encryption technique based on an asymmetric algorithm is proposed. In this method, the encryption process is different from the decryption process, and the encrypting keys are also different from the decrypting keys. In the nonlinear encryption process, the images are encoded into an amplitude cyphertext, and two phase-only masks (POMs) generated based on phase truncation are kept as keys for decryption. By using the classical double random phase encoding (DRPE) system, the primary images can be collected by an intensity detector located at the output plane. Three random POMs applied in the asymmetric encryption can be safely used as public keys. Simulation results are presented to demonstrate the validity and security of the proposed protocol.

  16. High specific energy and specific power aluminum/air battery for micro air vehicles

    NASA Astrophysics Data System (ADS)

    Kindler, A.; Matthies, L.

    2014-06-01

    Micro air vehicles developed under the Army's Micro Autonomous Systems and Technology program generally need a specific energy of 300-550 watt-hrs/kg and a specific power of 300-550 watts/kg to operate for about 1 hour. At present, no commercial cell can fulfill this need. The best available commercial technology is the lithium-ion battery or its derivative, the Li-polymer cell. This chemistry generally provides around 15 minutes of flying time. One alternative to the state of the art is the Al/air cell, a primary battery that is actually half fuel cell. It has a high-energy, battery-like aluminum anode and a fuel-cell-like air electrode that can extract oxygen from the ambient air rather than carrying it. Both of these features contribute to a high specific energy (watt-hrs/kg). High specific power (watts/kg) is supported by a high-concentration KOH electrolyte, a high-quality commercial air electrode, and forced air convection from the vehicle's rotors. The performance of a cell with these attributes is projected to be 500 watt-hrs/kg and 500 watts/kg based on a simple model. It is expected to support a flying time of approximately 1 hour in any vehicle in which the usual limit is 15 minutes.

  17. Statistical classification techniques in high energy physics (SDDT algorithm)

    NASA Astrophysics Data System (ADS)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2016-08-01

    We present our proposal of the supervised binary divergence decision tree with nested separation method based on generalized linear models. A key insight we provide is the clustering driven only by a few selected physical variables. The proper selection consists of the variables achieving the maximal divergence measure between two different classes. Further, we apply our method to Monte Carlo simulations of physics processes corresponding to a data sample of top quark-antiquark pair candidate events in the lepton+jets decay channel. The data sample is produced in pp̅ collisions at √s = 1.96 TeV. It corresponds to an integrated luminosity of 9.7 fb-1 recorded with the D0 detector during Run II of the Fermilab Tevatron Collider. Our algorithm achieves an AUC of 90% in separating signal from background. We also briefly deal with the modification of statistical tests applicable to weighted data sets in order to test homogeneity of the Monte Carlo simulations and measured data. The justification of these modified tests is proposed through the divergence tests.

  18. A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.

    2014-01-01

    We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of the two on different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
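
    For reference, the two-term Weyl law used for such a check on a two-dimensional domain of area A and perimeter L is

        N(E) \approx \frac{A}{4\pi}\, E \mp \frac{L}{4\pi}\sqrt{E},

    where N(E) counts the eigenvalues of -\Delta not exceeding E, with the minus sign for Dirichlet and the plus sign for Neumann boundary conditions; the discretization-induced deviation mentioned above appears as a systematic departure of the numerical counting function from this curve.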

  19. The evolutionary development of high specific impulse electric thruster technology

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Hamley, John A.; Patterson, Michael J.; Rawlin, Vincent K.; Myers, Roger M.

    1992-01-01

    Electric propulsion flight and technology demonstrations conducted primarily by Europe, Japan, China, the U.S., and the USSR are reviewed. Evolutionary mission applications for high specific impulse electric thruster systems are discussed, and the status of arcjet, ion, and magnetoplasmadynamic thrusters and associated power processor technologies are summarized.

  20. Algorithms and architectures for high performance analysis of semantic graphs.

    SciTech Connect

    Hendrickson, Bruce Alan

    2005-09-01

    analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.

  1. Experiences with the hydraulic design of the high specific speed Francis turbine

    NASA Astrophysics Data System (ADS)

    Obrovsky, J.; Zouhar, J.

    2014-03-01

    The high specific speed Francis turbine is still a suitable alternative for the refurbishment of older hydro power plants with lower heads and worse cavitation conditions. In this paper, the design process for such a turbine is introduced, together with a comparison of the results of homologous model tests performed in the hydraulic laboratory of ČKD Blansko Engineering. The turbine runner was designed using an optimization algorithm and considering the high specific speed hydraulic profile. This means that the hydraulic profiles of the spiral case, the distributor and the draft tube were taken from a Kaplan turbine. The optimization was run as an automatic cycle and was based on a simplex optimization method as well as on a genetic algorithm. The number of blades is shown to be the parameter which changes the resulting specific speed of the turbine between ns = 425 and 455, together with the cavitation characteristics. Minimizing cavitation on the blade surface as well as on the inlet edge of the runner blade was taken into account during the design process. The results of the CFD analyses as well as the model tests are presented in the paper.

  2. Highly specific protein-protein interactions, evolution and negative design.

    PubMed

    Sear, Richard P

    2004-12-01

    We consider highly specific protein-protein interactions in proteomes of simple model proteins. We are inspired by the work of Zarrinpar et al (2003 Nature 426 676). They took a binding domain in a signalling pathway in yeast and replaced it with domains of the same class but from different organisms. They found that the probability of a protein binding to a protein from the proteome of a different organism is rather high, around one half. We calculate the probability of a model protein from one proteome binding to the protein of a different proteome. These proteomes are obtained by sampling the space of functional proteomes uniformly. In agreement with Zarrinpar et al we find that the probability of a protein binding a protein from another proteome is rather high, of order one tenth. Our results, together with those of Zarrinpar et al, suggest that designing, say, a peptide to block or reconstitute a single signalling pathway, without affecting any other pathways, requires knowledge of all the partners of the class of binding domains the peptide is designed to mimic. This knowledge is required to use negative design to explicitly design out interactions of the peptide with proteins other than its target. We also found that patches that are required to bind with high specificity evolve more slowly than those that are required only to not bind to any other patch. This is consistent with some analysis of sequence data for proteins engaged in highly specific interactions.

  3. Dose prediction accuracy of anisotropic analytical algorithm and pencil beam convolution algorithm beyond high density heterogeneity interface

    PubMed Central

    Rana, Suresh B.

    2013-01-01

    Purpose: It is well known that photon beam radiation therapy requires dose calculation algorithms. The objective of this study was to measure and assess the ability of the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) to predict doses beyond a high density heterogeneity. Materials and Methods: An inhomogeneous phantom of five layers was created in the Eclipse planning system (version 8.6.15). The layers of the phantom were assigned as water (first or top), air (second), water (third), bone (fourth), and water (fifth or bottom) medium. Depth doses in water (bottom medium) were calculated for 100 monitor units (MUs) with a 6 Megavoltage (MV) photon beam for different field sizes using AAA and PBC with heterogeneity correction. Combinations of solid water, Poly Vinyl Chloride (PVC), and Styrofoam were then manufactured to mimic the phantom, and doses for 100 MUs were acquired with a cylindrical ionization chamber at selected depths beyond the high density heterogeneity interface. The measured and calculated depth doses were then compared. Results: AAA's values had better agreement with measurements at all measured depths. Dose overestimation by AAA (up to 5.3%) and by PBC (up to 6.7%) was found to be higher in proximity to the high-density heterogeneity interface, and the dose discrepancies were more pronounced for larger field sizes. The errors in dose estimation by AAA and PBC may be due to improper beam modeling of primary beam attenuation or lateral scatter contributions, or a combination of both, in heterogeneous media that include low and high density materials. Conclusions: AAA is more accurate than PBC for dose calculations when treating deep-seated tumors beyond a high-density heterogeneity interface. PMID:24455541

  4. Application of a Modified Garbage Code Algorithm to Estimate Cause-Specific Mortality and Years of Life Lost in Korea

    PubMed Central

    2016-01-01

    Years of life lost (YLLs) are estimated based on mortality and cause of death (CoD); therefore, it is necessary to calculate CoD accurately in order to estimate the burden of disease. The garbage code algorithm was developed by the Global Burden of Disease (GBD) Study to redistribute inaccurate CoD and enhance the validity of CoD estimation. This study aimed to estimate cause-specific mortality rates and YLLs in Korea by applying a modified garbage code algorithm. CoD data for 2010–2012 were used to calculate the number of deaths. The garbage code algorithm was then applied to calculate target causes (i.e., valid CoD) and adjusted CoD using the garbage code redistribution. The results showed that garbage code deaths accounted for approximately 25% of all CoD during 2010–2012. In 2012, lung cancer contributed the most to cause-specific death according to Statistics Korea. However, when CoD was adjusted using the garbage code redistribution, ischemic heart disease was the most common CoD. Furthermore, before garbage code redistribution, self-harm contributed the most YLLs, followed by lung cancer and liver cancer; after application of the garbage code redistribution, self-harm remained the leading cause of YLLs, now followed by ischemic heart disease and lung cancer. Our results showed that garbage code deaths accounted for a substantial share of mortality and YLLs. These findings may enhance our knowledge of the burden of disease and help prioritize intervention settings by changing the relative importance of the burden of disease. PMID:27775249
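
    The redistribution step can be illustrated with a toy proportional rule: deaths assigned to a garbage code are spread over its valid target causes in proportion to the deaths already assigned to those causes. The data structures and the purely proportional weights are simplifying assumptions and do not reproduce the study's actual redistribution fractions.

```python
def redistribute_garbage(deaths, targets_by_garbage):
    """Proportionally redistribute garbage-coded deaths onto valid target causes.

    deaths             : dict, cause code -> death count (includes garbage codes)
    targets_by_garbage : dict, garbage code -> list of valid target causes
    """
    adjusted = {c: n for c, n in deaths.items() if c not in targets_by_garbage}
    for gc, targets in targets_by_garbage.items():
        pool = deaths.get(gc, 0)
        total = sum(adjusted.get(t, 0) for t in targets)
        for t in targets:
            # proportional share; fall back to an even split if no deaths are recorded yet
            share = adjusted.get(t, 0) / total if total else 1.0 / len(targets)
            adjusted[t] = adjusted.get(t, 0) + pool * share
    return adjusted

# Toy example: 100 ill-defined cardiovascular deaths split over IHD and stroke.
print(redistribute_garbage({"IHD": 300, "stroke": 100, "garbage_cvd": 100},
                           {"garbage_cvd": ["IHD", "stroke"]}))
```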

  5. Phase-unwrapping algorithm for images with high noise content based on a local histogram

    NASA Astrophysics Data System (ADS)

    Meneses, Jaime; Gharbi, Tijani; Humbert, Philippe

    2005-03-01

    We present a robust algorithm of phase unwrapping that was designed for use on phase images with high noise content. We proceed with the algorithm by first identifying regions with continuous phase values placed between fringe boundaries in an image and then phase shifting the regions with respect to one another by multiples of 2pi to unwrap the phase. Image pixels are segmented between interfringe and fringe boundary areas by use of a local histogram of a wrapped phase. The algorithm has been used successfully to unwrap phase images generated in a three-dimensional shape measurement for noninvasive quantification of human skin structure in dermatology, cosmetology, and plastic surgery.
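
    The core region-shifting idea can be sketched in a few lines, under the strong assumptions that the interfringe regions have already been segmented into a label image and that consecutive labels are spatially adjacent; the actual algorithm derives the labels from a local histogram of the wrapped phase.

```python
import numpy as np

def unwrap_regions(wrapped, labels):
    """Shift each labelled region by an integer multiple of 2*pi so that it joins
    the previously processed region.

    wrapped : 2-D array of wrapped phase values in (-pi, pi]
    labels  : same-shape integer array of region labels (processed in ascending order)
    """
    out = wrapped.astype(float).copy()
    ref = None                                   # mean phase of the previous region
    for lab in np.unique(labels):
        mask = labels == lab
        if ref is not None:
            jumps = np.round((out[mask].mean() - ref) / (2 * np.pi))
            out[mask] -= 2 * np.pi * jumps       # remove the inter-region 2*pi offset
        ref = out[mask].mean()
    return out
```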

  6. Adaptive algorithm for active control of high-amplitude acoustic field in resonator

    NASA Astrophysics Data System (ADS)

    Červenka, M.; Bednařík, M.; Koníček, P.

    2008-06-01

    This work is concerned with the suppression of nonlinear effects in piston-driven acoustic resonators by means of a two-frequency driving technique. An iterative adaptive algorithm is proposed to calculate the parameters of the driving signal so that the amplitude of the second harmonic of the acoustic pressure is minimized. The functionality of the algorithm is verified first by means of a numerical model and then in a real computer-controlled experiment. The numerical and experimental results show that the proposed algorithm can be successfully used for the generation of a high-amplitude shock-free acoustic field in resonators.

  7. Phase-unwrapping algorithm for images with high noise content based on a local histogram.

    PubMed

    Meneses, Jaime; Gharbi, Tijani; Humbert, Philippe

    2005-03-01

    We present a robust algorithm of phase unwrapping that was designed for use on phase images with high noise content. We proceed with the algorithm by first identifying regions with continuous phase values placed between fringe boundaries in an image and then phase shifting the regions with respect to one another by multiples of 2pi to unwrap the phase. Image pixels are segmented between interfringe and fringe boundary areas by use of a local histogram of a wrapped phase. The algorithm has been used successfully to unwrap phase images generated in a three-dimensional shape measurement for noninvasive quantification of human skin structure in dermatology, cosmetology, and plastic surgery.

  8. Patient non-specific algorithm for seizures detection in scalp EEG.

    PubMed

    Orosco, Lorena; Correa, Agustina Garcés; Diez, Pablo; Laciar, Eric

    2016-04-01

    Epilepsy is a brain disorder that affects about 1% of the world's population. Seizure detection is an important component in both the diagnosis of epilepsy and seizure control. In this work a patient non-specific strategy for seizure detection based on the Stationary Wavelet Transform of EEG signals is developed. A new set of features is proposed based on an averaging process. The seizure detection consists of finding the EEG segments that contain seizures and their onset and offset points. The proposed offline method was tested on scalp EEG records of 24-48 h duration from 18 epileptic patients. The method reached mean values of 99.9% specificity, 87.5% sensitivity and a false positive rate of 0.9 per hour.
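
    A toy feature extractor in the same spirit, using PyWavelets for the Stationary Wavelet Transform and averaging detail-coefficient energy over short windows; the wavelet, decomposition level, sampling rate and window length are illustrative assumptions rather than the paper's actual settings.

```python
import numpy as np
import pywt

def swt_energy_features(eeg, wavelet="db4", level=5, fs=256, win_sec=2.0):
    """Windowed average energy of SWT detail coefficients of one EEG channel.
    Returns an array of shape (levels, n_windows)."""
    eeg = np.asarray(eeg, dtype=float)
    n = len(eeg) - len(eeg) % (2 ** level)       # SWT needs a length divisible by 2**level
    coeffs = pywt.swt(eeg[:n], wavelet, level=level)
    win = int(win_sec * fs)
    feats = []
    for _, cD in coeffs:                          # one detail band per decomposition level
        energy = cD ** 2
        feats.append([energy[i:i + win].mean() for i in range(0, n - win + 1, win)])
    return np.array(feats)

# A detector could then flag windows whose averaged energy exceeds a per-band threshold.
```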

  9. Application of multiple imputation using the two-fold fully conditional specification algorithm in longitudinal clinical data

    PubMed Central

    Welch, Catherine; Bartlett, Jonathan; Petersen, Irene

    2014-01-01

    Electronic health records of longitudinal clinical data are a valuable resource for health care research. One obstacle to using databases of health records in epidemiological analyses is that general practitioners mainly record data if they are clinically relevant. We can use existing methods to handle missing data, such as multiple imputation (MI), if we treat the unavailability of measurements as a missing-data problem. Most software implementations of MI do not take account of the longitudinal and dynamic structure of the data and are difficult to implement in large databases with millions of individuals and long follow-up. Nevalainen, Kenward, and Virtanen (2009, Statistics in Medicine 28: 3657–3669) proposed the two-fold fully conditional specification algorithm to impute missing data in longitudinal data. It imputes missing values at a given time point, conditional on information at the same time point and immediately adjacent time points. In this article, we describe a new command, twofold, that implements the two-fold fully conditional specification algorithm. It is extended to accommodate MI of longitudinal clinical records in large databases. PMID:25420071

  10. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    NASA Astrophysics Data System (ADS)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    uniquely powerful computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and to help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This unique integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux Clusters.

  11. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells. PMID:23651286

  12. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.

  13. High specificity in plant leaf metabolic responses to arbuscular mycorrhiza.

    PubMed

    Schweiger, Rabea; Baier, Markus C; Persicke, Marcus; Müller, Caroline

    2014-05-22

    The chemical composition of plants (the phytometabolome) is dynamic and modified by environmental factors. Understanding its modulation makes it possible to improve crop quality and to decode mechanisms underlying plant-pest interactions. Many studies that investigate metabolic responses to the environment focus on single model species and/or few target metabolites. However, comparative studies using environmental metabolomics are needed to evaluate commonalities of chemical responses to certain challenges. We assessed the specificity of foliar metabolic responses of five plant species to the widespread, ancient symbiosis with a generalist arbuscular mycorrhizal fungus. Here we show that plant species share a large 'core metabolome' but that the phytometabolomes are nevertheless modulated in a highly species/taxon-specific way. Such low conservation of responses across species highlights the importance of considering plant metabolic prerequisites and the long history of specific plant-fungus coevolution. Thus, the transferability of findings regarding phytometabolome modulation by an identical AM symbiont is severely limited even between closely related species.

  14. Parallelization and Algorithmic Enhancements of High Resolution IRAS Image Construction

    NASA Technical Reports Server (NTRS)

    Cao, Yu; Prince, Thomas A.; Tereby, Susan; Beichman, Charles A.

    1996-01-01

    The Infrared Astronomical Satellite carried out a nearly complete survey of the infrared sky, and the survey data are important for the study of many astrophysical phenomena. However, many data sets at other wavelengths have higher resolutions than that of the co-added IRAS maps, and high resolution IRAS images are strongly desired both for their own information content and for their usefulness in correlation studies. The HIRES program was developed by the Infrared Processing and Analysis Center (IPAC) to produce high resolution (approx. 1') images from IRAS data using the Maximum Correlation Method (MCM). We describe the port of HIRES to the Intel Paragon, a massively parallel supercomputer, other software developments for mass production of HIRES images, and the IRAS Galaxy Atlas, a project to map the Galactic plane at 60 and 100 μm.

  15. The evolutionary development of high specific impulse electric thruster technology

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Hamley, John A.; Patterson, Michael J.; Rawlin, Vincent K.; Myers, Roger M.

    1992-01-01

    Electric propulsion flight and technology demonstrations conducted in the USA, Europe, Japan, China, and the USSR are reviewed with reference to the major flight-qualified electric propulsion systems. These include resistojets, ion thrusters, ablative pulsed plasma thrusters, stationary plasma thrusters, pulsed magnetoplasmadynamic thrusters, and arcjets. Evolutionary mission applications are presented for high specific impulse electric thruster systems. The current status of arcjet, ion, and magnetoplasmadynamic thrusters and their associated power processor technologies is summarized.

  16. Method of preparing high specific activity platinum-195m

    SciTech Connect

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-06-15

    A method of preparing high-specific-activity 195mPt includes the steps of: exposing 193Ir to a flux of neutrons sufficient to convert a portion of the 193Ir to 195mPt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce 195mPt.

  17. Method for preparing high specific activity 177Lu

    SciTech Connect

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-04-06

    A method of separating lutetium from a solution containing Lu and Yb, particularly reactor-produced 177Lu and 177Yb, includes the steps of: providing a chromatographic separation apparatus containing LN resin; loading the apparatus with a solution containing Lu and Yb; and eluting the apparatus to chromatographically separate the Lu and the Yb in order to produce high-specific-activity 177Lu.

  18. Solar-powered rocket engine optimization for high specific impulse

    NASA Astrophysics Data System (ADS)

    Pande, J. Bradley

    1993-11-01

    Hercules Aerospace is currently developing a solar-powered rocket engine (SPRE) design optimized for high specific impulse (Isp). The SPRE features a low loss geometry in its light-gathering cavity, which includes an integral secondary concentrator. The simple one-piece heat exchanger is made from refractory metal and/or ceramic open-celled foam. The foam's high surface-area-to-volume ratio will efficiently transfer the thermal energy to the hydrogen propellant. The single-pass flow of propellant through the heat exchanger further boosts thermal efficiency by regeneratively cooling surfaces near the entrance of the optical cavity. These surfaces would otherwise reradiate a significant portion of the captured solar energy back out of the solar entrance. Such design elements promote a high overall thermal efficiency and, hence, a high operating Isp.

  19. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks †

    PubMed Central

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

    Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove its correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR into the corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z Eves tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
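
    PCR identifies critical actors from localized information; the global property it approximates is that an actor is critical exactly when it is a cut vertex (articulation point) of the inter-actor graph. A reference-style DFS computation of that global property (not the localized rule used by PCR) is sketched below.

```python
def articulation_points(adj):
    """Cut vertices of an undirected graph, found with DFS low-link values.
    adj: dict mapping every node to an iterable of its neighbours."""
    disc, low, parent, cut = {}, {}, {}, set()
    timer = [0]

    def dfs(u):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                parent[v] = u
                children += 1
                dfs(v)
                low[u] = min(low[u], low[v])
                if parent[u] is None and children > 1:
                    cut.add(u)                       # root of the DFS tree with >1 subtree
                if parent[u] is not None and low[v] >= disc[u]:
                    cut.add(u)                       # removing u disconnects v's subtree
            elif v != parent[u]:
                low[u] = min(low[u], disc[v])

    for u in adj:
        if u not in disc:
            parent[u] = None
            dfs(u)
    return cut

# Example: node "b" is the only critical actor in the chain a--b--c.
print(articulation_points({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
```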

  20. Fast two-dimensional super-resolution image reconstruction algorithm for ultra-high emitter density.

    PubMed

    Huang, Jiaqing; Gumpper, Kristyn; Chi, Yuejie; Sun, Mingzhai; Ma, Jianjie

    2015-07-01

    Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that can be handled by the image reconstruction algorithms. Multiple algorithms have been developed to accurately locate the emitters even when they have significant overlaps. Currently, the compressive-sensing-based algorithm (CSSTORM) achieves the highest emitter density. However, CSSTORM is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with ℓ1-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and make online super-resolution reconstruction of high-density emitters possible.

  1. High Quality Typhoon Cloud Image Restoration by Combining Genetic Algorithm with Contourlet Transform

    SciTech Connect

    Zhang Changjiang; Wang Xiaodong

    2008-11-06

    An efficient typhoon cloud image restoration algorithm is proposed. After applying the contourlet transform to a typhoon cloud image, noise is reduced in the high-frequency sub-bands using a weighted median filter in the contourlet domain, and the inverse contourlet transform is applied to obtain the de-noised image. In order to enhance the global contrast of the typhoon cloud image, an incomplete Beta transform (IBT) is used to determine the non-linear gray transform curve that enhances the global contrast of the de-noised typhoon cloud image. A genetic algorithm is used to obtain the optimal gray transform curve, with information entropy as its fitness function. Experimental results show that the new algorithm enhances the global contrast of the typhoon cloud image while effectively reducing the noise in it.
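
    The fitness evaluation at the heart of the genetic search can be sketched directly: apply a candidate incomplete Beta transform to the normalised gray levels and score the enhanced image by its Shannon entropy. SciPy's regularised incomplete beta function is used as the transform, and the 8-bit image range is an assumption; the GA loop over (a, b) pairs is omitted.

```python
import numpy as np
from scipy.special import betainc

def entropy_fitness(img_u8):
    """Shannon entropy (bits) of an 8-bit image -- the GA's fitness value."""
    hist, _ = np.histogram(img_u8, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

def incomplete_beta_transform(img_u8, a, b):
    """Map normalised gray levels through the regularised incomplete Beta function
    I_x(a, b) and rescale back to 8 bits (a, b > 0 are the GA's decision variables)."""
    x = img_u8.astype(float) / 255.0
    return np.uint8(255.0 * betainc(a, b, x))

# A GA would evolve (a, b) to maximise entropy_fitness(incomplete_beta_transform(img, a, b)).
```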

  2. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out based on conversion of the orbital manoeuvre into a parameter optimization problem by assigning inverse tangential functions to the changes in direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence time. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  3. Cooperative scheduling of imaging observation tasks for high-altitude airships based on propagation algorithm.

    PubMed

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem. The solution to the subproblem then detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. First, the task set is divided using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. The paper also provides the realization approach of the above algorithm and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  4. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem. The solution to the subproblem then detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. First, the task set is divided using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. The paper also provides the realization approach of the above algorithm and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  5. Regional Urban Aerosol Retrieval With MODIS: High-Resolution Algorithm Application and Extension of Look-up Tables

    NASA Astrophysics Data System (ADS)

    Jerg, M. P.; Oo, M. M.; Gross, B. M.; Moshary, F.; Ahmed, S. A.

    2008-12-01

    Aerosols play an important role in the global climate by modulating the Earth's energy budget. Air quality and related health issues for humans are also tightly linked with the concentration, composition, and size of aerosol particles. Satellite remote sensing with the MODIS sensor on NASA's Aqua and Terra platforms is one means of investigating aerosols globally. However, due to the global scope of the operational mission, only globally based aerosol models can be employed in the look-up table approach of the retrieval algorithm. The relatively coarse resolution of 10x10 km also largely prevents the detection of small scale structures in the aerosol optical depth (AOD) on a regional level. Consequently, the operational MODIS aerosol algorithm over land has been specifically adapted to the New York City area. First, the operational look-up table was extended based on a local aerosol climatology obtained from five years of AERONET measurements at the City College of New York site. These models were then used to create appropriate LUTs using the 6S radiative transfer model. Second, regional surface reflectance ratio parameterizations which better characterize the urban surface properties were implemented in the algorithm. These two modifications ultimately allow the retrieval algorithm to be applied at the actual sensor resolution of 500x500 m. This presentation focuses on estimating the errors that are inherent in the operational processing compared to a regionally refined processing scheme. In particular, we remove artificial hot spots in the aerosol retrieval and are able to extract realistic high resolution aerosol structure.

  6. High-performance modeling acoustic and elastic waves using the parallel Dichotomy Algorithm

    SciTech Connect

    Fatyanov, Alexey G.; Terekhov, Andrew V.

    2011-03-01

    A high-performance parallel algorithm is proposed for modeling the propagation of acoustic and elastic waves in inhomogeneous media. An initial boundary-value problem is replaced by a series of boundary-value problems for a constant elliptic operator and different right-hand sides via the integral Laguerre transform. It is proposed to solve the difference equations by the conjugate gradient method for the acoustic equations and by the GMRES(k) method for modeling elastic waves. The preconditioning operator is the Laplace operator, which is inverted using the variable separation method. The novelty of the proposed algorithm is the use of the Dichotomy Algorithm, which was designed for solving a series of tridiagonal systems of linear equations, in the context of the preconditioning operator inversion. By considering analytical solutions, it is shown that modeling wave processes over long time intervals requires high-resolution meshes. The proposed parallel fine-mesh algorithm made it possible to solve realistic seismic application problems in acceptable time and with high accuracy. By solving model problems, it is demonstrated that the considered parallel algorithm possesses high performance and efficiency over a wide range of processor counts (from 2 to 8192).
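
    The Dichotomy Algorithm is a parallel solver for series of tridiagonal systems such as those arising from the separated Laplace preconditioner; its serial counterpart is the classical Thomas algorithm, sketched below as a point of reference (the array layout conventions are assumptions).

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve one tridiagonal system by forward elimination and back substitution.
    sub[0] and sup[-1] are ignored; all arrays have the same length n."""
    n = len(rhs)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# The parallel Dichotomy Algorithm distributes many such systems across processors
# instead of sweeping each one serially as above.
```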

  7. High efficiency cell-specific targeting of cytokine activity

    NASA Astrophysics Data System (ADS)

    Garcin, Geneviève; Paul, Franciane; Staufenbiel, Markus; Bordat, Yann; van der Heyden, José; Wilmes, Stephan; Cartron, Guillaume; Apparailly, Florence; de Koker, Stefaan; Piehler, Jacob; Tavernier, Jan; Uzé, Gilles

    2014-01-01

    Systemic toxicity currently prevents exploiting the huge potential of many cytokines for medical applications. Here we present a novel strategy to engineer immunocytokines with very high targeting efficacies. The method lies in the use of mutants of toxic cytokines that markedly reduce their receptor-binding affinities, and that are thus rendered essentially inactive. Upon fusion to nanobodies specifically binding to marker proteins, activity of these cytokines is selectively restored for cell populations expressing this marker. This ‘activity-by-targeting’ concept was validated for type I interferons and leptin. In the case of interferon, activity can be directed to target cells in vitro and to selected cell populations in mice, with up to 1,000-fold increased specific activity. This targeting strategy holds promise to revitalize the clinical potential of many cytokines.

  8. Cellulose antibody films for highly specific evanescent wave immunosensors

    NASA Astrophysics Data System (ADS)

    Hartmann, Andreas; Bock, Daniel; Jaworek, Thomas; Kaul, Sepp; Schulze, Matthais; Tebbe, H.; Wegner, Gerhard; Seeger, Stefan

    1996-01-01

    For the production of recognition elements for evanescent wave immunosensors, optical waveguides have to be coated with ultrathin, stable antibody films. In the present work, non-amphiphilic alkylated cellulose and copolyglutamate films are tested as monolayer matrices for antibody immobilization using the Langmuir-Blodgett technique. These films are transferred onto optical waveguides and serve as excellent matrices for the immobilization of antibodies with high density and specificity. In addition to the multi-step immobilization of immunoglobulin G (IgG) on photochemically crosslinked and oxidized polymer films, the direct one-step transfer of mixed antibody-polymer films is performed. Both planar waveguides and optical fibers are suitable substrates for the immobilization. The activity and specificity of the immobilized antibodies are controlled by the enzyme-linked immunosorbent assay (ELISA) technique. As a result, reduced non-specific interactions between antigens and the substrate surface are observed when cinnamoylbutyether-cellulose is used as the film matrix for antibody immobilization. Using evanescent wave sensor (EWS) technology, immunosensor assays are performed in order to determine both the non-specific adsorption on differently coated polymethylmethacrylate (PMMA) fibers and the long-term stability of the antibody films. The specificities of one-step transferred IgG-cellulose films are drastically enhanced compared with IgG-copolyglutamate films. Cellulose-IgG films are used in enzymatic sandwich assays with mucin, a clinically relevant antigen that is recognized by the antibodies BM2 and BM7. A mucin calibration measurement is recorded. So far the observed detection limit for mucin is about 8 ng/ml.

  9. An automatic geo-spatial object recognition algorithm for high resolution satellite images

    NASA Astrophysics Data System (ADS)

    Ergul, Mustafa; Alatan, A. Aydın.

    2013-10-01

    This paper proposes a novel automatic geo-spatial object recognition algorithm for high resolution satellite imagery. The proposed algorithm consists of two main steps: a hypothesis generation step with a local feature-based algorithm and a verification step with a shape-based approach. In the hypothesis generation step, a set of hypotheses for possible object locations is generated using a Bag of Visual Words type approach, aiming at few missed detections at the cost of more false positives. In the verification step, the foreground objects are first extracted by a semi-supervised image segmentation algorithm, utilizing the detection results from the previous step, and the shape descriptors of the segmented objects are then used to prune out the false positives. Based on simulation results, it can be argued that the proposed algorithm achieves both high precision and high recall rates by taking advantage of both the local feature-based and the shape-based object detection approaches. The strength of the proposed method lies in its ability to minimize the false alarm rate, since most object shapes contain characteristic and discriminative information about the object's identity and functionality.

  10. Automatic, Real-Time Algorithms for Anomaly Detection in High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Srivastava, A. N.; Nemani, R. R.; Votava, P.

    2008-12-01

    Earth observing satellites are generating data at an unprecedented rate, surpassing almost all other data intensive applications. However, most of the data that arrive from the satellites are not analyzed directly. Rather, multiple scientific teams analyze only a small fraction of the total data available in the data stream. Although there are many reasons for this situation, one paramount concern is developing algorithms and methods that can analyze the vast, high dimensional, streaming satellite images. This paper describes a new set of methods that are among the fastest available algorithms for real-time anomaly detection. These algorithms were built to maximize accuracy and speed for a variety of applications in fields outside of the earth sciences. However, our studies indicate that with appropriate modifications, these algorithms can be extremely valuable for identifying anomalies rapidly using only modest computational power. We review two algorithms that are used as benchmarks in the field, Orca and One-Class Support Vector Machines, and discuss the anomalies that are discovered in MODIS data taken over the Central California region. We are especially interested in automatic identification of disturbances within ecosystems (e.g., wildfires, droughts, floods, insect/pest damage, wind damage, logging). We show the scalability of the algorithms and demonstrate that with appropriately adapted technology, the dream of real-time analysis can be made a reality.
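
    A minimal One-Class SVM sketch with scikit-learn, using synthetic feature vectors as a stand-in for per-pixel or per-scene features derived from MODIS data; the feature dimension, kernel and nu value are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 7))                     # "normal" observations (e.g., 7 bands)
X_new = np.vstack([rng.normal(size=(50, 7)),
                   rng.normal(loc=4.0, size=(5, 7))])    # a few injected disturbances

model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(X_train)
flags = model.predict(X_new)                             # +1 = inlier, -1 = anomaly
print("anomalies flagged:", int((flags == -1).sum()))
```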

  11. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  12. A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas

    SciTech Connect

    Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q

    2007-04-18

    A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.

  13. Efficiency Analysis of a High-Specific Impulse Hall Thruster

    NASA Technical Reports Server (NTRS)

    Jacobson, David (Technical Monitor); Hofer, Richard R.; Gallimore, Alec D.

    2004-01-01

    Performance and plasma measurements of the high-specific impulse NASA-173Mv2 Hall thruster were analyzed using a phenomenological performance model that accounts for a partially-ionized plasma containing multiply-charged ions. Between discharge voltages of 300 to 900 V, the results showed that although the net decrease of efficiency due to multiply-charged ions was only 1.5 to 3.0 percent, the effects of multiply-charged ions on the ion and electron currents could not be neglected. Between 300 to 900 V, the increase of the discharge current was attributed to the increasing fraction of multiply-charged ions, while the maximum deviation of the electron current from its average value was only +5/-14 percent. These findings revealed how efficient operation at high-specific impulse was enabled through the regulation of the electron current with the applied magnetic field. Between 300 to 900 V, the voltage utilization ranged from 89 to 97 percent, the mass utilization from 86 to 90 percent, and the current utilization from 77 to 81 percent. Therefore, the anode efficiency was largely determined by the current utilization. The electron Hall parameter was nearly constant with voltage, decreasing from an average of 210 at 300 V to an average of 160 between 400 to 900 V. These results confirmed our claim that efficient operation can be achieved only over a limited range of Hall parameters.

  14. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable for improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to one another in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors, and the average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
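
    A minimal sketch of the scoring step with scikit-learn: a binary logistic regression produces a posterior probability of disease association for each candidate gene, evaluated by AUC. The synthetic features stand in for the network-derived feature vectors (F2/F3) of the paper, and the simple train/test split replaces its leave-one-out protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                       # stand-in network-proximity features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X[:400], y[:400])
posterior = clf.predict_proba(X[400:])[:, 1]        # probability of disease association
print("AUC:", round(roc_auc_score(y[400:], posterior), 3))
```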

  15. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

    We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, forgoing the need to compute the angular flux spatial moments and thereby eliminating the need to sweep the spatial mesh in each discrete angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g., acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  16. A reconstruction algorithm based on sparse representation for Raman signal processing under high background noise

    NASA Astrophysics Data System (ADS)

    Fan, X.; Wang, X.; Wang, X.; Xu, Y.; Que, J.; He, H.; Wang, X.; Tang, M.

    2016-02-01

    Background noise is one of the main interference sources in Raman spectroscopy measurement and imaging. In this paper, a sparse representation based algorithm is presented for processing Raman signals under high background noise. In contrast with existing de-noising methods, the proposed method reconstructs the pure Raman signals by estimating the Raman peak information. The advantages of the proposed algorithm are its high noise tolerance and the low reduction of the pure Raman signal, both of which stem from its reconstruction principle. Meanwhile, the Batch-OMP algorithm is applied to accelerate the training of the sparse representation. The method is therefore well suited for adoption in Raman measurement or imaging instruments used to observe fast dynamic processes, where the scanning time has to be shortened and the signal-to-noise ratio (SNR) of the raw signal is reduced. In the simulation and experiment, the de-noising result obtained by the proposed algorithm was better than that of the traditional Savitzky-Golay (S-G) filter and the fixed-threshold wavelet de-noising algorithm.
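
    The reconstruction principle can be illustrated with a small orthogonal matching pursuit example: the spectrum is approximated as a sparse combination of Gaussian peak atoms, so broadband noise is left out of the reconstruction. The Gaussian-atom dictionary, the peak width and scikit-learn's OrthogonalMatchingPursuit (standing in for the Batch-OMP used in the paper) are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n, width = 512, 6.0
grid = np.arange(n)
D = np.stack([np.exp(-0.5 * ((grid - c) / width) ** 2) for c in grid], axis=1)
D /= np.linalg.norm(D, axis=0)                    # dictionary of unit-norm Gaussian peaks

clean = 1.0 * D[:, 150] + 0.6 * D[:, 320]         # two synthetic Raman peaks
noisy = clean + np.random.default_rng(0).normal(scale=0.05, size=n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False).fit(D, noisy)
denoised = D @ omp.coef_                          # reconstruction from the sparse peak estimate
```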

  17. MS Amanda, a Universal Identification Algorithm Optimized for High Accuracy Tandem Mass Spectra

    PubMed Central

    2014-01-01

    Today's highly accurate spectra provided by modern tandem mass spectrometers offer considerable advantages for the analysis of proteomic samples of increased complexity. Among other factors, the quantity of reliably identified peptides is considerably influenced by the peptide identification algorithm. While most widely used search engines were developed when high-resolution mass spectrometry data were not readily available for fragment ion masses, we have designed a scoring algorithm particularly suitable for high mass accuracy. Our algorithm, MS Amanda, is generally applicable to HCD, ETD, and CID fragmentation type data. The algorithm confidently explains more spectra at the same false discovery rate than Mascot or SEQUEST on the examined high mass accuracy data sets, with excellent overlap and identical peptide sequence identification for most spectra also explained by Mascot or SEQUEST. MS Amanda, available at http://ms.imp.ac.at/?goto=msamanda, is provided free of charge both as a standalone version for integration into custom workflows and as a plugin for the Proteome Discoverer platform. PMID:24909410

  18. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  19. A quasi-Newton acceleration for high-dimensional optimization algorithms.

    PubMed

    Zhou, Hua; Alexander, David; Lange, Kenneth

    2011-01-01

    In many statistical problems, maximum likelihood estimation by an EM or MM algorithm suffers from excruciatingly slow convergence. This tendency limits the application of these algorithms to modern high-dimensional problems in data mining, genomics, and imaging. Unfortunately, most existing acceleration techniques are ill-suited to complicated models involving large numbers of parameters. The squared iterative methods (SQUAREM) recently proposed by Varadhan and Roland constitute one notable exception. This paper presents a new quasi-Newton acceleration scheme that requires only modest increments in computation per iteration and overall storage and rivals or surpasses the performance of SQUAREM on several representative test problems.

  20. A Very-High-Specific-Impulse Relativistic Laser Thruster

    SciTech Connect

    Horisawa, Hideyuki; Kimura, Itsuro

    2008-04-28

    Characteristics of compact laser plasma accelerators utilizing high-power laser and thin-target interaction were reviewed as a potential candidate of future spacecraft thrusters capable of generating relativistic plasma beams for interstellar missions. Based on the special theory of relativity, motion of the relativistic plasma beam exhausted from the thruster was formulated. Relationships of thrust, specific impulse, input power and momentum coupling coefficient for the relativistic plasma thruster were derived. It was shown that under relativistic conditions, the thrust could be extremely large even with a small amount of propellant flow rate. Moreover, it was shown that for a given value of input power thrust tended to approach the value of the photon rocket under the relativistic conditions regardless of the propellant flow rate.

  1. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  2. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly-resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average position error of 0.03-0.08 pixel, much smaller than that (0.29 pixel) of a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
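
    A toy one-dimensional contrast between the traditional maximum-photon estimate and a sub-pixel estimate that assumes a Gaussian light-response function; the three-point log-parabola fit below is a simple linear-algebraic stand-in in the spirit of, but not identical to, the FluoroBancroft-based methods described above.

```python
import numpy as np

def max_photon_position(counts):
    """Traditional MPA estimate: the index of the pixel with the most photons."""
    return int(np.argmax(counts))

def gaussian_lrf_position(counts):
    """Sub-pixel estimate for a Gaussian LRF: fit a parabola to log(counts)
    at the peak pixel and its two neighbours (closed-form least squares)."""
    counts = np.asarray(counts, dtype=float)
    i = int(np.argmax(counts))
    i = min(max(i, 1), len(counts) - 2)            # keep the 3-point stencil in range
    y = np.log(np.clip(counts[i - 1:i + 2], 1e-12, None))
    denom = y[0] - 2.0 * y[1] + y[2]
    return i + 0.5 * (y[0] - y[2]) / denom if denom != 0 else float(i)

# Example: photons from an emitter centred at x = 10.3 pixels.
x = np.arange(20)
profile = 1000 * np.exp(-0.5 * ((x - 10.3) / 1.5) ** 2)
print(max_photon_position(profile), round(gaussian_lrf_position(profile), 2))
```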

  3. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve a reduction in fuel consumption, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and models the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  4. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Jerome, Joseph; Osher, Stanley

    1989-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  5. A high fuel consumption efficiency management scheme for PHEVs using an adaptive genetic algorithm.

    PubMed

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve a reduction in fuel consumption, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and models the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day.

  6. An infrared small target detection algorithm based on high-speed local contrast method

    NASA Astrophysics Data System (ADS)

    Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao

    2016-05-01

    Small-target detection in infrared imagery with a complex background is an important task in remote sensing. It is important to improve detection capabilities such as the detection rate, false alarm rate, and speed. However, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the Human Visual System (HVS), is proposed to balance these detection capabilities. The first layer uses a high-speed simplified local contrast method to select significant information, and the second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate and speed simultaneously.
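
    A simplified sketch of the first layer: a cell-based local contrast map in which each centre cell is scored by its peak intensity squared divided by the largest mean of the eight surrounding cells. The cell size and this exact contrast definition are assumptions, a simplified variant rather than the paper's high-speed method.

```python
import numpy as np

def local_contrast_map(img, cell=3):
    """Cell-wise local contrast: bright small targets yield large values because the
    centre peak dominates every neighbouring background cell."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros_like(img)
    for r in range(cell, h - 2 * cell, cell):
        for c in range(cell, w - 2 * cell, cell):
            centre = img[r:r + cell, c:c + cell]
            neighbour_means = [img[r + dr * cell:r + (dr + 1) * cell,
                                   c + dc * cell:c + (dc + 1) * cell].mean()
                               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                               if (dr, dc) != (0, 0)]
            out[r:r + cell, c:c + cell] = centre.max() ** 2 / (max(neighbour_means) + 1e-9)
    return out

# Thresholding this map (or feeding cell statistics to a classifier, as in the
# second layer above) then separates candidate targets from background clutter.
```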

  7. Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra

    NASA Technical Reports Server (NTRS)

    Spanos, P. D.; Mushung, L. J.

    1990-01-01

    High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive-moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum. This concept yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices. Each slice is associated with a distinguishable phase of the lift-off event where stationarity can be expected. The presented results are preliminary in nature; the aim is to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.
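
    A minimal AR spectral estimate via the Yule-Walker equations illustrates the kind of spectral model involved; the model order and normalisation are illustrative, and the study itself uses ARMA models and builds a random response spectrum on top of such fits.

```python
import numpy as np

def yule_walker_psd(x, order=4, n_freq=256):
    """Fit an AR(order) model by the Yule-Walker equations and return its PSD
    on a normalised frequency grid [0, 0.5]."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(order + 1)])   # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                      # AR coefficients
    sigma2 = r[0] - a @ r[1:]                          # innovation variance
    freqs = np.linspace(0.0, 0.5, n_freq)
    H = 1.0 - np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1))) @ a
    return freqs, sigma2 / np.abs(H) ** 2

# Each quasi-stationary lift-off slice would be fitted separately with such a model.
```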

  8. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers, along with some preliminary results, the mixture modeling approach has so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the fragment mixture models are then aggregated to form the mixture model of the whole spectrum. We compare the algorithm to existing peak detection algorithms and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
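
    A minimal sketch of the fragment-wise decomposition step, assuming scikit-learn is available; the pseudo-sampling trick and the fixed component count below are our illustrative assumptions, not the published procedure, and the fragment partitioning itself is not reproduced here.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_fragment(mz, intensity, n_components, n_draws=20000, seed=0):
            """Fit a Gaussian mixture to one spectral fragment.

            The (m/z, intensity) profile is turned into a pseudo-sample by drawing m/z
            values with probability proportional to intensity, then fitted by EM.
            Component-number selection (e.g. by BIC) is omitted in this sketch.
            """
            rng = np.random.default_rng(seed)
            p = intensity / intensity.sum()
            sample = rng.choice(mz, size=n_draws, p=p).reshape(-1, 1)
            gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(sample)
            return gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel()), gmm.weights_

        # Fragments are processed independently; their means, widths, and weights are
        # concatenated to form the mixture model of the whole spectrum.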

  9. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need to have an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
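
    For orientation, the sketch below computes exact max-log LLRs for a Gray-mapped 16-QAM symbol by exhaustive search over the constellation; the low-complexity demappers evaluated in the paper replace this search with cheaper approximations, and the particular bit-to-symbol labelling used here is an illustrative assumption.

        import numpy as np
        from itertools import product

        # Gray-mapped 16-QAM constellation with unit average energy (illustrative labelling).
        levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)
        gray2 = {0b00: 0, 0b01: 1, 0b11: 2, 0b10: 3}          # 2-bit Gray code -> level index
        constellation = {}
        for bits in product((0, 1), repeat=4):
            i = gray2[(bits[0] << 1) | bits[1]]               # bits 0-1 -> in-phase level
            q = gray2[(bits[2] << 1) | bits[3]]               # bits 2-3 -> quadrature level
            constellation[bits] = levels[i] + 1j * levels[q]

        def maxlog_llr(y, noise_var):
            """Max-log LLRs for one received symbol y (positive values favour bit = 0)."""
            llrs = []
            for b in range(4):
                d0 = min(abs(y - s) ** 2 for bits, s in constellation.items() if bits[b] == 0)
                d1 = min(abs(y - s) ** 2 for bits, s in constellation.items() if bits[b] == 1)
                llrs.append((d1 - d0) / noise_var)
            return llrs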

  10. Nanoporous ultra-high specific surface inorganic fibres

    NASA Astrophysics Data System (ADS)

    Kanehata, Masaki; Ding, Bin; Shiratori, Seimei

    2007-08-01

    Nanoporous inorganic (silica) nanofibres with ultra-high specific surface have been fabricated by electrospinning the blend solutions of poly(vinyl alcohol) (PVA) and colloidal silica nanoparticles, followed by selective removal of the PVA component. The configurations of the composite and inorganic nanofibres were investigated by changing the average silica particle diameters and the concentrations of colloidal silica particles in the polymer solutions. After the removal of PVA by calcination, the fibre shape of the pure silica particle assembly was maintained. The nanoporous silica fibres were assembled as a porous membrane with a high surface roughness. From the results of Brunauer-Emmett-Teller (BET) measurements, the BET surface area of the inorganic silica nanofibrous membranes increased as the particle diameter decreased. The membrane composed of silica particles with diameters of 15 nm showed the largest BET surface area of 270.3 m^2 g^-1 and a total pore volume of 0.66 cm^3 g^-1. The physical absorption of methylene blue dye molecules by the nanoporous silica membranes was examined using UV-vis spectrometry. Additionally, the porous silica membranes modified with fluoroalkylsilane showed super-hydrophobicity due to their porous structures.

  11. Plasmoid Thruster for High Specific-Impulse Propulsion

    NASA Technical Reports Server (NTRS)

    Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael

    2007-01-01

    A report discusses a new multi-turn, multi-lead design for the first generation PT-1 (Plasmoid Thruster) that produces thrust by expelling plasmas with embedded magnetic fields (plasmoids) at high velocities. This thruster is completely electrodeless, capable of using in-situ resources, and offers efficiencies as high as 70 percent at a specific impulse, I(sub sp), of up to 8,000 s. This unit consists of drive and bias coils wound around a ceramic form, and the capacitor bank and switches are an integral part of the assembly. Multiple thrusters may be ganged to inductively recapture unused energy to boost efficiency and to increase the repetition rate, which, in turn, increases the average thrust of the system. The thruster assembly can use storable propellants such as H2O, ammonia, and NO, among others. Any available propellant gases can be used to produce an I(sub sp) in the range of 2,000 to 8,000 s with a single-stage thruster. These capabilities will allow the transport of greater payloads to outer planets, especially in the case of an I(sub sp) greater than 6,000 s.

  12. Behavior construction and refinement from high-level specifications

    NASA Astrophysics Data System (ADS)

    Martignoni, Andrew J., III; Smart, William D.

    2004-12-01

    Mobile robots are excellent examples of systems that need to show a high level of autonomy. Often robots are loosely supervised by humans who are not intimately familiar with the inner workings of the robot. We generally cannot predict in advance the exact environmental conditions in which the robot will operate. This means that the behavior must be adapted in the field. Untrained individuals cannot (and probably should not) program the robot to effect these changes. We need a system that will (a) allow re-tasking, and (b) allow adaptation of the behavior to the specific conditions in the field. In this paper we concentrate on (b). We describe how to assemble controllers based on high-level descriptions of the behavior. We show how the behavior can be tuned by the human, despite not knowing how the code is put together. We also show how this can be done automatically, using reinforcement learning, and point out the problems that must be overcome for this approach to work.

  13. Highly efficient site-specific transgenesis in cancer cell lines

    PubMed Central

    2012-01-01

    Background Transgenes introduced into cancer cell lines serve as powerful tools for identification of genes involved in cancer. However, the random nature of genomic integration site of a transgene highly influences the fidelity, reliability and level of its expression. In order to alleviate this bottleneck, we characterized the potential utility of a novel PhiC31 integrase-mediated site-specific insertion system (PhiC31-IMSI) for introduction of transgenes into a pre-inserted docking site in the genome of cancer cells. Methods According to this system, a “docking-site” was first randomly inserted into human cancer cell lines and clones with a single copy were selected. Subsequently, an “incoming” vector containing the gene of interest was specifically inserted in the docking-site using PhiC31. Results Using the Pc-3 and SKOV-3 cancer cell lines, we showed that transgene insertion is reproducible and reliable. Furthermore, the selection system ensured that all surviving stable transgenic lines harbored the correct integration site. We demonstrated that the expression levels of reporter genes, such as green fluorescent protein and luciferase, from the same locus were comparable among sister, isogenic clones. Using in vivo xenograft studies, we showed that the genetically altered cancer cell lines retain the properties of the parental line. To achieve temporal control of transgene expression, we coupled our insertion strategy with the doxycycline inducible system and demonstrated tight regulation of the expression of the antiangiogenic molecule sFlt-1-Fc in Pc-3 cells. Furthermore, we introduced the luciferase gene into the insertion cassette allowing for possible live imaging of cancer cells in transplantation assays. We also generated a series of Gateway cloning-compatible intermediate cassettes ready for high-throughput cloning of transgenes and demonstrated that PhiC31-IMSI can be achieved in a high throughput 96-well plate format. Conclusions The novel

  14. Identification by ultrasound evaluation of the carotid and femoral arteries of high-risk subjects missed by three validated cardiovascular disease risk algorithms.

    PubMed

    Postley, John E; Luo, Yanting; Wong, Nathan D; Gardin, Julius M

    2015-11-15

    Atherosclerotic cardiovascular disease (ASCVD) events are the leading cause of death in the United States and globally. Traditional global risk algorithms may miss 50% of patients who experience ASCVD events. Noninvasive ultrasound evaluation of the carotid and femoral arteries can identify subjects at high risk for ASCVD events. We examined the ability of different global risk algorithms to identify subjects with femoral and/or carotid plaques found by ultrasound. The study population consisted of 1,464 asymptomatic adults (39.8% women) aged 23 to 87 years without previous evidence of ASCVD who had ultrasound evaluation of the carotid and femoral arteries. Three ASCVD risk algorithms (10-year Framingham Risk Score [FRS], 30-year FRS, and lifetime risk) were compared for the 939 subjects who met the algorithm age criteria. The frequency of femoral plaque as the only plaque was 18.3% in the total group and 14.8% in the risk algorithm groups (n = 939), without a significant difference between genders in the frequency of femoral plaque as the only plaque. The lifetime risk algorithm identified as high risk the largest proportions of men and women with either femoral or carotid plaques (59% and 55%), but it had lower specificity because the proportion of subjects in its high-risk group who actually had plaques was lower (50% and 35%) than in the high-risk groups defined by the FRS algorithms. In conclusion, ultrasound evaluation of the carotid and femoral arteries can identify subjects at risk of ASCVD events missed by traditional risk-predicting algorithms. The large proportion of subjects with femoral plaque only supports the inclusion of both femoral and carotid arteries in ultrasound evaluation.

  15. High-resolution climate data over conterminous US using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Hashimoto, H.; Nemani, R. R.; Wang, W.

    2014-12-01

    We developed a new methodology to create high-resolution precipitation data using the random forest algorithm. Two approaches have commonly been used: physical downscaling from GCM data using a regional climate model, and interpolation from ground observation data. The physical downscaling method can be applied only to a small region because it is computationally expensive and complex to deploy. On the other hand, interpolation schemes from ground observations do not consider physical processes. In this study, we utilized the random forest algorithm to integrate atmospheric reanalysis data, satellite data, topography data, and ground observation data. First we considered situations where precipitation is the same across the domain, largely dominated by storm-like systems. We then picked several points to train the random forest algorithm. The random forest algorithm estimates out-of-bag errors spatially and produces the relative importance of each input variable. This methodology has the following advantages. (1) It can ingest any spatial dataset to improve downscaling; even non-precipitation datasets such as satellite cloud-cover data, radar reflectivity images, or modeled convective available potential energy can be ingested. (2) The methodology is purely statistical, so physical assumptions are not required, whereas most interpolation schemes assume an empirical relationship between precipitation and elevation for orographic precipitation. (3) Low-quality values in the ingested data do not cause critical bias in the results because of the ensemble nature of the random forest, so users do not need to pay special attention to quality control of the input data compared to other interpolation methodologies. (4) The same methodology can be applied to produce other high-resolution climate datasets, such as wind and cloud cover, variables that are usually hard to interpolate with conventional algorithms. In conclusion, the proposed methodology can produce reasonable
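
    A minimal sketch of the core regression step, assuming scikit-learn is available; the predictor set and the synthetic training data below are stand-ins of our own, not values from the study.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Hypothetical predictor stack (one row per training grid cell): reanalysis
        # precipitation, satellite cloud fraction, elevation, and coordinates; the
        # target is the gauge observation.  Synthetic values stand in for real data.
        n_gauges, n_features = 500, 5
        X_train = rng.random((n_gauges, n_features))
        y_train = 10.0 * X_train[:, 0] + 2.0 * X_train[:, 2] + rng.normal(0.0, 0.5, n_gauges)

        rf = RandomForestRegressor(n_estimators=300, oob_score=True, n_jobs=-1)
        rf.fit(X_train, y_train)

        print("out-of-bag R^2:", rf.oob_score_)            # the out-of-bag error mentioned above
        print("relative importance:", rf.feature_importances_)

        X_grid = rng.random((10000, n_features))           # predictors on the full fine grid
        high_res_precip = rf.predict(X_grid)               # downscaled precipitation field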

  16. High voltage and high specific capacity dual intercalating electrode Li-ion batteries

    NASA Technical Reports Server (NTRS)

    West, William C. (Inventor); Blanco, Mario (Inventor)

    2010-01-01

    The present invention provides high capacity and high voltage Li-ion batteries that have a carbonaceous cathode and a nonaqueous electrolyte solution comprising LiF salt and an anion receptor that binds the fluoride ion. The batteries can comprise dual intercalating electrode Li ion batteries. Methods of the present invention use a cathode and electrode pair, wherein each of the electrodes reversibly intercalate ions provided by a LiF salt to make a high voltage and high specific capacity dual intercalating electrode Li-ion battery. The present methods and systems provide high-capacity batteries particularly useful in powering devices where minimizing battery mass is important.

  17. High Performance Organ-Specific Nuclear Medicine Imagers.

    NASA Astrophysics Data System (ADS)

    Majewski, Stan

    2006-04-01

    One of the exciting applications of nuclear science is nuclear medicine. Well-known diagnostic imaging tools such as PET and SPECT (as well as MRI) were developed as spin-offs of basic scientific research in atomic and nuclear physics. Development of modern instrumentation for applications in particle physics experiments offers an opportunity to contribute to development of improved nuclear medicine (gamma and positron) imagers, complementing the present set of standard imaging tools (PET, SPECT, MRI, ultrasound, fMRI, MEG, etc). Several examples of new high performance imagers developed in national laboratories in collaboration with academia will be given to demonstrate this spin-off activity. These imagers are designed to specifically image organs such as breast, heart, head (brain), or prostate. The remaining and potentially most important challenging application field for dedicated nuclear medicine imagers is to assist with cancer radiation treatments. Better control of radiation dose delivery requires development of new compact in-situ imagers becoming integral parts of the radiation delivery systems using either external beams or based on radiation delivery by inserting or injecting radioactive sources (gamma, beta or alpha emitters) into tumors.

  18. Streptococcal C5a peptidase is a highly specific endopeptidase.

    PubMed Central

    Cleary, P P; Prahbu, U; Dale, J B; Wexler, D E; Handley, J

    1992-01-01

    Compositional analysis of streptococcal C5a peptidase (SCPA) cleavage products from a synthetic peptide corresponding to the 20 C-terminal residues of C5a demonstrated that the target cleavage site is His-Lys rather than Lys-Asp, as previously suggested. A C5a peptide analog with Lys replaced by Gln was also subject to cleavage by SCPA. This confirmed that His-Lys rather than Lys-Asp is the scissile bond. Cleavage at histidine is unusual but is the same as that suggested for a peptidase produced by group B streptococci. Native C5 protein was also resistant to SCPA, suggesting that the His-Lys bond is inaccessible prior to proteolytic cleavage by C5 convertase. These experiments showed that the streptococcal C5a peptidase is highly specific for C5a and suggest that its function is not merely to process protein for metabolic consumption but to act primarily to eliminate this chemotactic signal from inflammatory foci. PMID:1452354

  19. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo often remains the preferred method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for the situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which doesn't require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain could still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
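
    The inverse-regression step can be illustrated with a minimal sliced inverse regression (SIR) sketch; this is a generic textbook estimator of the SDR subspace, not the authors' IRUQ algorithms, and the slice count and reduced dimension below are arbitrary assumptions.

        import numpy as np

        def sliced_inverse_regression(X, y, n_slices=10, n_dirs=2):
            """Estimate an SDR subspace with SIR: whiten X, slice on y, eigen-decompose
            the between-slice covariance of the sliced means (assumes a non-degenerate
            input covariance)."""
            n, p = X.shape
            mu, C = X.mean(0), np.cov(X, rowvar=False)
            evals, evecs = np.linalg.eigh(C)
            W = evecs @ np.diag(evals ** -0.5) @ evecs.T     # whitening transform
            Z = (X - mu) @ W
            order = np.argsort(y)                            # slice by quantiles of the QoI
            slices = np.array_split(order, n_slices)
            M = sum(len(s) / n * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices)
            vals, vecs = np.linalg.eigh(M)
            dirs = W @ vecs[:, ::-1][:, :n_dirs]             # back to the original coordinates
            return dirs                                      # columns span the estimated SDR subspace

        # A cheap surrogate (e.g. polynomial chaos or polynomial regression) can then be
        # built on the reduced coordinates X @ dirs instead of the full input space.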

  20. A novel robust and efficient algorithm for charge particle tracking in high background flux

    NASA Astrophysics Data System (ADS)

    Fanelli, C.; Cisbani, E.; Del Dotto, A.

    2015-05-01

    The high luminosity that will be reached in the new generation of High Energy Particle and Nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information - time and charge - is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results.
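
    Step (iii) can be illustrated with a generic linear-Gaussian filter/smoother sketch; the state model, the matrices, and the function name are illustrative assumptions of ours, not the GEM-specific tracker.

        import numpy as np

        def kalman_rts(zs, F, H, Q, R, x0, P0):
            """Forward Kalman filter followed by a Rauch-Tung-Striebel smoother.

            zs: (T, m) measurements; F, H, Q, R: model matrices; returns smoothed states.
            """
            T, n = len(zs), len(x0)
            xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
            x, P = x0, P0
            for z in zs:                                    # filtering pass
                xp, Pp = F @ x, F @ P @ F.T + Q             # predict
                S = H @ Pp @ H.T + R
                K = Pp @ H.T @ np.linalg.inv(S)             # Kalman gain
                x = xp + K @ (z - H @ xp)                   # update with the measurement
                P = (np.eye(n) - K @ H) @ Pp
                xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)
            xs_s, Ps_s = xs_f[:], Ps_f[:]                   # backward smoothing pass
            for k in range(T - 2, -1, -1):
                G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
                xs_s[k] = xs_f[k] + G @ (xs_s[k + 1] - xs_p[k + 1])
                Ps_s[k] = Ps_f[k] + G @ (Ps_s[k + 1] - Ps_p[k + 1]) @ G.T
            return np.array(xs_s)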

  1. Low-complexity, high-speed, and high-dynamic range time-to-impact algorithm

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2012-10-01

    We present a method suitable for a time-to-impact sensor. Inspired by the seemingly "low" complexity of small insects, we propose a new approach to optical flow estimation that is the key component in time-to-impact estimation. The approach is based on measuring time instead of the apparent motion of points in the image plane. The specific properties of the motion field in the time-to-impact application are used, such as measuring only along a one-dimensional (1-D) line and using simple feature points, which are tracked from frame to frame. The method lends itself readily to be implemented in a parallel processor with an analog front-end. Such a processing concept [near-sensor image processing (NSIP)] was described for the first time in 1983. In this device, an optical sensor array and a low-level processing unit are tightly integrated into a hybrid analog-digital device. The high dynamic range, which is a key feature of NSIP, is used to extract the feature points. The output from the device consists of a few parameters, which will give the time-to-impact as well as possible transversal speed for off-centered viewing. Performance and complexity aspects of the implementation are discussed, indicating that time-to-impact data can be achieved at a rate of 10 kHz with today's technology.

  2. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  3. Supercomputer implementation of finite element algorithms for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.

    1986-01-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extensions of the vectorization procedure to predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes.

  4. Algorithm for Automatic Behavior Quantification of Laboratory Mice Using High-Frame-Rate Videos

    NASA Astrophysics Data System (ADS)

    Nie, Yuman; Takaki, Takeshi; Ishii, Idaku; Matsuda, Hiroshi

    In this paper, we propose an algorithm for automatic behavior quantification in laboratory mice to quantify several model behaviors. The algorithm can detect repetitive motions of the fore- or hind-limbs at several or dozens of hertz, which are too rapid for the naked eye, from high-frame-rate video images. Multiple repetitive motions can always be identified from periodic frame-differential image features in four segmented regions — the head, left side, right side, and tail. Even when a mouse changes its posture and orientation relative to the camera, these features can still be extracted from the shift- and orientation-invariant shape of the mouse silhouette by using the polar coordinate system and adjusting the angle coordinate according to the head and tail positions. The effectiveness of the algorithm is evaluated by analyzing long-term 240-fps videos of four laboratory mice for six typical model behaviors: moving, rearing, immobility, head grooming, left-side scratching, and right-side scratching. The time durations for the model behaviors determined by the algorithm have detection/correction ratios greater than 80% for all the model behaviors. This shows good quantification results for actual animal testing.

  5. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 × 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the needs of many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
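
    For reference, the classical two-pass labelling scheme that the FPGA design builds on can be sketched in software as follows (union-find equivalence handling included); the hardware pipeline, memory layout, and stop-and-go control are not reproduced here.

        import numpy as np

        def two_pass_ccl(binary):
            """Two-pass connected component labelling (4-connectivity) with union-find."""
            h, w = binary.shape
            labels = np.zeros((h, w), dtype=np.int32)
            parent = [0]                                    # union-find forest; index 0 unused

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]           # path compression
                    a = parent[a]
                return a

            next_label = 1
            for y in range(h):                              # first pass: provisional labels
                for x in range(w):
                    if not binary[y, x]:
                        continue
                    up = labels[y - 1, x] if y > 0 else 0
                    left = labels[y, x - 1] if x > 0 else 0
                    if up == 0 and left == 0:
                        parent.append(next_label)           # new component
                        labels[y, x] = next_label
                        next_label += 1
                    else:
                        cand = [l for l in (up, left) if l > 0]
                        labels[y, x] = min(cand)
                        if len(cand) == 2:                  # record equivalence of the two labels
                            ra, rb = find(up), find(left)
                            parent[max(ra, rb)] = min(ra, rb)
            for y in range(h):                              # second pass: resolve equivalences
                for x in range(w):
                    if labels[y, x]:
                        labels[y, x] = find(labels[y, x])
            return labels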

  6. An elliptic phase-shift algorithm for high speed three-dimensional profilometry

    NASA Astrophysics Data System (ADS)

    Deng, Fuqin; Li, Zhao; Chen, Jia; Deng, Jiangwen; Fung, Kenneth S. M.; Lam, Edmund Y.

    2013-03-01

    A high throughput is often required in many machine vision systems, especially on the assembly line in the semiconductor industry. To develop a non-contact three-dimensional dense surface reconstruction system for real-time surface inspection and metrology applications, in this work, we project sinusoidal patterns onto the inspected objects and propose a high speed phase-shift algorithm. First, we use an illumination-reflectivity-focus (IRF) model to investigate the factors in image formation for phase-measuring profilometry. Second, by visualizing and analyzing the characteristic intensity locus projected onto the intensity space, we build a two-dimensional phase map to store the phase information for each point in the intensity space. Third, we develop an efficient elliptic phase-shift algorithm (E-PSA) for high speed surface profilometry. In this method, instead of calculating the time-consuming inverse trigonometric function, we only need to normalize the measured image intensities and then index the precomputed two-dimensional phase map during real-time phase reconstruction. Finally, experimental results show that it is about two times faster than the conventional phase-shift algorithm.

  7. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping

    2014-10-01

    Migration through resolution cells (MTRC) is generated in high-resolution inverse synthetic aperture radar (ISAR) imaging. An MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. Firstly, in the situation that a rigid-body target flies stably, the initial values of the rotation angle and center of the target are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution based on the minimum-entropy criterion. The procedure starts with the estimated initial rotation angle and center and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches, one over the rotation angle and the other over the rotation center. Each 1-D search is realized using the golden-section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied to compensate the MTRC using the optimized rotation angle and center. After MTRC compensation, the ISAR image can be best focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
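
    The 1-D golden-section search used for each coordinate can be sketched as below; the entropy and PFA function names in the trailing comment are hypothetical placeholders, not the paper's actual processing chain.

        import math

        def golden_section_min(f, a, b, tol=1e-6):
            """Minimise a unimodal 1-D function on [a, b] by golden-section search."""
            invphi = (math.sqrt(5.0) - 1.0) / 2.0            # 1/phi, about 0.618
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            while abs(b - a) > tol:
                if f(c) < f(d):                              # minimum lies in [a, d]
                    b, d = d, c
                    c = b - invphi * (b - a)
                else:                                        # minimum lies in [c, b]
                    a, c = c, d
                    d = a + invphi * (b - a)
            return 0.5 * (a + b)

        # The paper's alternating 1-D search could then look like (hypothetical names):
        #   angle  = golden_section_min(lambda t: entropy(pfa(echo, t, centre)), t_lo, t_hi)
        #   centre = golden_section_min(lambda c: entropy(pfa(echo, angle, c)),  c_lo, c_hi)
        # repeated until the image entropy stops decreasing.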

  8. Trajectory Specification for High-Capacity Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    2004-01-01

    In the current air traffic management system, the fundamental limitation on airspace capacity is the cognitive ability of human air traffic controllers to maintain safe separation with high reliability. The doubling or tripling of airspace capacity that will be needed over the next couple of decades will require that tactical separation be at least partially automated. Standardized conflict-free four-dimensional trajectory assignment will be needed to accomplish that objective. A trajectory specification format based on the Extensible Markup Language is proposed for that purpose. This format can be used to downlink a trajectory request, which can then be checked on the ground for conflicts and approved or modified, if necessary, then uplinked as the assigned trajectory. The horizontal path is specified as a series of geodetic waypoints connected by great circles, and the great-circle segments are connected by turns of specified radius. Vertical profiles for climb and descent are specified as low-order polynomial functions of along-track position, which is itself specified as a function of time. Flight technical error tolerances in the along-track, cross-track, and vertical axes define a bounding space around the reference trajectory, and conformance will guarantee the required separation for a period of time known as the conflict time horizon. An important safety benefit of this regimen is that the traffic will be able to fly free of conflicts for at least several minutes even if all ground systems and the entire communication infrastructure fail. Periodic updates in the along-track axis will adjust for errors in the predicted along-track winds.

  9. A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train

    PubMed Central

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run. PMID:22778582
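
    To illustrate the role a tracking differentiator plays in such a signal chain, here is a minimal linear second-order TD; the paper's TD is a nonlinear discrete-time design based on optimal control theory, which is not reproduced here, and the gain r and step size below are arbitrary assumptions (dt should be small relative to 1/r).

        def linear_td(signal, dt, r=50.0):
            """Minimal linear second-order tracking differentiator.

            x1 tracks the input and x2 approximates its derivative; r sets the
            tracking speed (larger r tracks faster but filters less).
            """
            x1, x2 = float(signal[0]), 0.0
            tracked, derivative = [], []
            for v in signal:
                # Critically damped second-order dynamics driven toward the input v
                a = -r * r * (x1 - v) - 2.0 * r * x2
                x1 += dt * x2
                x2 += dt * a
                tracked.append(x1)
                derivative.append(x2)
            return tracked, derivative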

  10. A high precision position sensor design and its signal processing algorithm for a maglev train.

    PubMed

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run.

  11. Genetic algorithm-support vector regression for high reliability SHM system based on FBG sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, XiaoLi; Liang, DaKai; Zeng, Jie; Asundi, Anand

    2012-02-01

    Structural Health Monitoring (SHM) based on fiber Bragg grating (FBG) sensor networks has attracted considerable attention in recent years. However, the FBG sensor network is typically simply embedded in or glued to the structure in series or parallel; in this case, if optical fiber sensors or fiber nodes fail, the sensors behind the failure point can no longer be read. Therefore, to improve the survivability of the FBG-based sensor system in SHM, it is necessary to build a highly reliable FBG sensor network for SHM engineering applications. In this study, a model reconstruction soft-computing recognition algorithm based on genetic algorithm-support vector regression (GA-SVR) is proposed to achieve this reliability. Furthermore, an 8-point FBG sensor system is tested experimentally in an aircraft wing box. External loading damage position prediction is an important task for an SHM system; as an example, different failure modes are selected to demonstrate the survivability of the FBG-based sensor network, and the results are compared with a non-reconstructed GA-SVR model in each failure mode. Results show that the proposed model reconstruction algorithm based on GA-SVR can still maintain prediction precision when some sensors fail; a highly reliable sensor network for the SHM system is thus achieved without introducing extra components or noise.

  12. Fast intra-prediction algorithms for high efficiency video coding standard

    NASA Astrophysics Data System (ADS)

    Kibeya, Hassan; Belghith, Fatma; Ben Ayed, Mohammed Ali; Masmoudi, Nouri

    2016-01-01

    High efficiency video coding (HEVC) is the latest video compression standard and provides a significant improvement in compression ratio compared to all existing video coding standards. The intra-prediction procedure plays an important role in the HEVC encoder; it is achieved by evaluating up to 35 intra-modes over larger coding units, which entails a high computational complexity that needs to be alleviated. Toward this end, the paper proposes two fast intra-mode decision algorithms that exploit the features of video sequences. First, an early detection method for all-zero transformed and quantized coefficients is applied to generate threshold values used for early termination of the intra-decision process, which accelerates the encoding procedure. The second fast intra-mode decision algorithm relies on a refinement technique: based on statistical analyses of frequently chosen modes, only a small subset of the candidate modes is evaluated in the intra-prediction process, which reduces the complexity of the intra-encoding procedure. The performance of the proposed algorithms is verified through comparative analysis of encoding time, visual image quality, and compression ratio. Compared to HM 10.0, the encoding time reduction can reach 69% with only a slight degradation of image quality and compression ratio.

  13. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using the sudden concentration changes related to ebullition obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartiles ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
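
    The quartile-plus-IQR filtering idea can be sketched as follows; this is an illustrative reimplementation of the concept described above, not the published adaptive R script, and the thresholding and flux-aggregation details are our assumptions.

        import numpy as np

        def separate_fluxes(conc, dt_s):
            """Split a high-resolution chamber CH4 record into diffusion and ebullition parts.

            Step-to-step concentration changes inside the quartile +/- IQR window are
            treated as diffusion; larger jumps are treated as ebullition events.
            """
            conc = np.asarray(conc, dtype=float)
            dC = np.diff(conc)
            q1, q3 = np.percentile(dC, [25, 75])
            iqr = q3 - q1
            diffusive = (dC >= q1 - iqr) & (dC <= q3 + iqr)
            # Diffusive flux: slope of a straight line fitted to the filtered record
            t = np.arange(len(conc)) * dt_s
            diff_slope = np.polyfit(t[1:][diffusive], conc[1:][diffusive], 1)[0]
            # Ebullition: sum of the concentration jumps outside the window
            ebullition_total = dC[~diffusive].sum()
            return diff_slope, ebullition_total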

  14. MTF compensation algorithm based on blind deconvolution for high-resolution remote sensing satellite

    NASA Astrophysics Data System (ADS)

    Lee, Jihye; Chun, Joohwan; Lee, Donghwan

    2012-05-01

    In high-resolution remote sensing satellite imaging systems, image restoration is an important step to visualize fine details and mitigate noise. The raw image data often presents poor imaging quality for various reasons, and the Point Spread Function (PSF) measures this blurring characteristic of the imaging system via its response to a point source. Imagery from the Korea Multi-purpose Satellite 2 (KOMPSAT-2) also requires a Modulation Transfer Function (MTF) compensation process to achieve a more realistic image, which entails removing ringing artifacts at the edges and restraining excessive denoising. This paper focuses on the deconvolution of KOMPSAT-2 imagery using the PSF obtained from the Korea Aerospace Research Institute, compared to deconvolution with an estimated PSF blur kernel. The deconvolution algorithms considered are Richardson-Lucy, damped Richardson-Lucy, bilateral Richardson-Lucy, and sparse-prior deconvolution.
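
    A basic (undamped) Richardson-Lucy iteration, assuming SciPy is available, is sketched below for orientation; the damped, bilateral, and sparse-prior variants mentioned above add regularisation terms that are not reproduced here, and the iteration count is an arbitrary assumption.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30):
            """Basic Richardson-Lucy deconvolution of a 2-D image with a known PSF."""
            psf = psf / psf.sum()                            # normalise the blur kernel
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.maximum(blurred, 1e-12)   # observed / re-blurred estimate
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate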

  15. Optimizing spherical light-emitting diode array for highly uniform illumination distribution by employing genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yanxia; Ji, Zhicheng; Su, Zhouping

    2013-01-01

    A numerical optimization method (genetic algorithm) is employed to design the spherical light-emitting diode (LED) array for highly uniform illumination distribution. An evaluation function related to the nonuniformity is constructed for the numerical optimization. At the minimum of the evaluation function, the LED array produces the best uniformity. The genetic algorithm is used to seek the minimum of the evaluation function. By this method, we design two LED arrays. In one case, LEDs are positioned symmetrically on the sphere and the illuminated target surface is a plane. In the other case, LEDs are positioned nonsymmetrically with a spherical target surface. Both the symmetrical and nonsymmetrical spherical LED arrays generate good uniform illumination distributions, with calculated nonuniformities of 6% and 8%, respectively.
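
    A generic real-coded genetic algorithm of the kind used here can be sketched as follows; the encoding of LED positions and the illumination non-uniformity evaluation function are application-specific and only assumed (a dummy quadratic stands in for them), so none of this reproduces the paper's actual optimization.

        import numpy as np

        def genetic_minimise(evaluate, bounds, pop_size=60, generations=200,
                             crossover=0.9, mutation=0.1, seed=0):
            """Minimise evaluate(x) over box bounds with a simple elitist GA."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            pop = lo + rng.random((pop_size, len(lo))) * (hi - lo)
            for _ in range(generations):
                fitness = np.array([evaluate(x) for x in pop])
                pop = pop[np.argsort(fitness)]               # best individuals first
                children = []
                while len(children) < pop_size - 2:          # keep the two best unchanged
                    i, j = rng.integers(0, pop_size // 2, 2) # parents from the better half
                    alpha = rng.random(len(lo))
                    child = (alpha * pop[i] + (1 - alpha) * pop[j]
                             if rng.random() < crossover else pop[i].copy())
                    mask = rng.random(len(lo)) < mutation    # random-reset mutation
                    child[mask] = lo[mask] + rng.random(mask.sum()) * (hi - lo)[mask]
                    children.append(np.clip(child, lo, hi))
                pop = np.vstack([pop[:2], children])
            fitness = np.array([evaluate(x) for x in pop])
            return pop[np.argmin(fitness)]

        # Dummy usage: minimise a quadratic stand-in for the non-uniformity measure.
        best = genetic_minimise(lambda x: np.sum((x - 0.3) ** 2), bounds=[(0.0, 1.0)] * 4)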

  16. A High-Order Statistical Tensor Based Algorithm for Anomaly Detection in Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-11-01

    Recently, high-order statistics have received more and more interest in the field of hyperspectral anomaly detection. However, most of the existing high-order statistics based anomaly detection methods require stepwise iterations since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, which is called COSD (coskewness detector). COSD does not need iteration and can produce a single detection map. The experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm.

  17. A high-order statistical tensor based algorithm for anomaly detection in hyperspectral imagery.

    PubMed

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-01-01

    Recently, high-order statistics have received more and more interest in the field of hyperspectral anomaly detection. However, most of the existing high-order statistics based anomaly detection methods require stepwise iterations since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, which is called COSD (coskewness detector). COSD does not need iteration and can produce a single detection map. The experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm. PMID:25366706

  18. High-resolution algorithms for the Navier-Stokes equations for generalized discretizations

    NASA Astrophysics Data System (ADS)

    Mitchell, Curtis Randall

    Accurate finite volume solution algorithms for the two-dimensional Navier-Stokes equations and the three-dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two-dimensional quadrilateral and triangular elements and three-dimensional tetrahedral elements will be provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell averaged data is used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringleb's flow, an inviscid shock reflection, a flat plate boundary layer, and a shock induced separation over a flat plate. Three-dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three-dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error

  19. Development and Characterization of High-Efficiency, High-Specific Impulse Xenon Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Hofer, Richard R.; Jacobson, David (Technical Monitor)

    2004-01-01

    This dissertation presents research aimed at extending the efficient operation of 1600 s specific impulse Hall thruster technology to the 2000 to 3000 s range. Motivated by previous industry efforts and mission studies, the aim of this research was to develop and characterize xenon Hall thrusters capable of both high-specific impulse and high-efficiency operation. During the development phase, the laboratory-model NASA 173M Hall thrusters were designed and their performance and plasma characteristics were evaluated. Experiments with the NASA-173M version 1 (v1) validated the plasma lens magnetic field design. Experiments with the NASA 173M version 2 (v2) showed there was a minimum current density and optimum magnetic field topography at which efficiency monotonically increased with voltage. Comparison of the thrusters showed that efficiency can be optimized for specific impulse by varying the plasma lens. During the characterization phase, additional plasma properties of the NASA 173Mv2 were measured and a performance model was derived. Results from the model and experimental data showed how efficient operation at high-specific impulse was enabled through regulation of the electron current with the magnetic field. The electron Hall parameter was approximately constant with voltage, which confirmed efficient operation can be realized only over a limited range of Hall parameters.

  20. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  1. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    SciTech Connect

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  2. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single, multi-core CPU and graphical processing units (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid size typically in the range of 1m to 30m. The large high-resolution data grids in our studies employ a pre-filtered mipmap pyramid type representation for the grid data known as the Geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 to do fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on-the-fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies including crustal-scale models derived from complex geologic interpretations. For example, we used a 1KM Sphere model consisting of 105000 cells at 10m resolution with 100000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just its single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply appropriate formulation to those data points. We will present and compare the computational performance of the traditional prism method versus the line element

  3. Specification of High Activity Gamma-Ray Sources.

    ERIC Educational Resources Information Center

    International Commission on Radiation Units and Measurements, Washington, DC.

    The report is concerned with making recommendations for the specifications of gamma ray sources, which relate to the quantity of radioactive material and the radiation emitted. Primary consideration is given to sources in teletherapy and to a lesser extent those used in industrial radiography and in irradiation units used in industry and research.…

  4. System Specification for Immobilized High-Level Waste Interim Storage

    SciTech Connect

    CALMUS, R.B.

    2000-12-27

    This specification establishes the system-level functional, performance, design, interface, and test requirements for Phase 1 of the IHLW Interim Storage System, located at the Hanford Site in Washington State. The IHLW canisters will be produced at the Hanford Site by a Selected DOE contractor. Subsequent to storage the canisters will be shipped to a federal geologic repository.

  5. High concordance of gene expression profiling-correlated immunohistochemistry algorithms in diffuse large B-cell lymphoma, not otherwise specified.

    PubMed

    Hwang, Hee Sang; Park, Chan-Sik; Yoon, Dok Hyun; Suh, Cheolwon; Huh, Jooryung

    2014-08-01

    Diffuse large B-cell lymphoma (DLBCL) is classified into prognostically distinct germinal center B-cell (GCB) and activated B-cell subtypes by gene expression profiling (GEP). Recent reports suggest the role of GEP subtypes in targeted therapy. Immunohistochemistry (IHC) algorithms have been proposed as surrogates of GEP, but their utility remains controversial. Using microarray, we examined the concordance of 4 GEP-correlated and 2 non-GEP-correlated IHC algorithms in 381 DLBCLs, not otherwise specified. Subtypes and variants of DLBCL were excluded to minimize the possible confounding effect on prognosis and phenotype. Survival was analyzed in 138 cyclophosphamide, adriamycin, vincristine, and prednisone (CHOP)-treated and 147 rituximab plus CHOP (R-CHOP)-treated patients. Of the GEP-correlated algorithms, high concordance was observed among Hans, Choi, and Visco-Young algorithms (total concordance, 87.1%; κ score: 0.726 to 0.889), whereas Tally algorithm exhibited slightly lower concordance (total concordance 77.4%; κ score: 0.502 to 0.643). Two non-GEP-correlated algorithms (Muris and Nyman) exhibited poor concordance. Compared with the Western data, incidence of the non-GCB subtype was higher in all algorithms. Univariate analysis showed prognostic significance for Hans, Choi, and Visco-Young algorithms and BCL6, GCET1, LMO2, and BCL2 in CHOP-treated patients. On multivariate analysis, Hans algorithm retained its prognostic significance. By contrast, neither the algorithms nor individual antigens predicted survival in R-CHOP treatment. The high concordance among GEP-correlated algorithms suggests their usefulness as reliable discriminators of molecular subtype in DLBCL, not otherwise specified. Our study also indicates that prognostic significance of IHC algorithms may be limited in R-CHOP-treated Asian patients because of the predominance of the non-GCB type. PMID:24705314

  6. High concordance of gene expression profiling-correlated immunohistochemistry algorithms in diffuse large B-cell lymphoma, not otherwise specified.

    PubMed

    Hwang, Hee Sang; Park, Chan-Sik; Yoon, Dok Hyun; Suh, Cheolwon; Huh, Jooryung

    2014-08-01

    Diffuse large B-cell lymphoma (DLBCL) is classified into prognostically distinct germinal center B-cell (GCB) and activated B-cell subtypes by gene expression profiling (GEP). Recent reports suggest the role of GEP subtypes in targeted therapy. Immunohistochemistry (IHC) algorithms have been proposed as surrogates of GEP, but their utility remains controversial. Using microarray, we examined the concordance of 4 GEP-correlated and 2 non-GEP-correlated IHC algorithms in 381 DLBCLs, not otherwise specified. Subtypes and variants of DLBCL were excluded to minimize the possible confounding effect on prognosis and phenotype. Survival was analyzed in 138 cyclophosphamide, adriamycin, vincristine, and prednisone (CHOP)-treated and 147 rituximab plus CHOP (R-CHOP)-treated patients. Of the GEP-correlated algorithms, high concordance was observed among Hans, Choi, and Visco-Young algorithms (total concordance, 87.1%; κ score: 0.726 to 0.889), whereas Tally algorithm exhibited slightly lower concordance (total concordance 77.4%; κ score: 0.502 to 0.643). Two non-GEP-correlated algorithms (Muris and Nyman) exhibited poor concordance. Compared with the Western data, incidence of the non-GCB subtype was higher in all algorithms. Univariate analysis showed prognostic significance for Hans, Choi, and Visco-Young algorithms and BCL6, GCET1, LMO2, and BCL2 in CHOP-treated patients. On multivariate analysis, Hans algorithm retained its prognostic significance. By contrast, neither the algorithms nor individual antigens predicted survival in R-CHOP treatment. The high concordance among GEP-correlated algorithms suggests their usefulness as reliable discriminators of molecular subtype in DLBCL, not otherwise specified. Our study also indicates that prognostic significance of IHC algorithms may be limited in R-CHOP-treated Asian patients because of the predominance of the non-GCB type.

  7. Speeding-up Bioinformatics Algorithms with Heterogeneous Architectures: Highly Heterogeneous Smith-Waterman (HHeterSW).

    PubMed

    Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel

    2016-10-01

    The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementation of the Smith-Waterman algorithm on a different hardware architecture, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and yields up to a 2.58-fold performance gain compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
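
    The core idea, splitting one database search across heterogeneous devices in proportion to their throughput and merging the hits afterwards, can be sketched as follows. The device tuple layout, throughput figures, and per-device search functions are placeholders, not the published implementation.

      from concurrent.futures import ThreadPoolExecutor
      from itertools import accumulate

      def split_database(sequences, throughputs):
          """Partition database sequences across devices proportionally to their
          relative throughput (e.g. GCUPS measured in a calibration run)."""
          total = sum(throughputs)
          sizes = [round(len(sequences) * t / total) for t in throughputs]
          sizes[-1] = len(sequences) - sum(sizes[:-1])   # absorb rounding error
          bounds = [0] + list(accumulate(sizes))
          return [sequences[bounds[i]:bounds[i + 1]] for i in range(len(sizes))]

      def search_all(query, sequences, devices):
          """devices: list of (throughput, search_fn); each search_fn wraps the
          fastest Smith-Waterman implementation available on that hardware."""
          chunks = split_database(sequences, [t for t, _ in devices])
          with ThreadPoolExecutor(max_workers=len(devices)) as pool:
              futures = [pool.submit(fn, query, chunk)
                         for (_, fn), chunk in zip(devices, chunks)]
              hits = [hit for f in futures for hit in f.result()]
          return sorted(hits, key=lambda h: h["score"], reverse=True)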

  8. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    PubMed

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.
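
    The central numerical step, least-squares resolution of an observed spectrum against the theoretical isotope distributions of candidate molecular ions, fits in a few lines. The array names and shapes below are illustrative assumptions rather than the tool's actual interface.

      import numpy as np

      def resolve_abundances(observed_spectrum, theoretical_patterns):
          """Least-squares resolution of co-eluting lipid species.

          observed_spectrum    : (n_mz,) intensities on a common m/z grid
          theoretical_patterns : (n_mz, n_species) theoretical isotope
                                 distributions, one column per candidate ion
          Returns non-negatively clipped abundance estimates and the residual."""
          coeffs, residual, _, _ = np.linalg.lstsq(theoretical_patterns,
                                                   observed_spectrum, rcond=None)
          return np.clip(coeffs, 0.0, None), residual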

  9. Differential evolution algorithm for nonlinear inversion of high-frequency Rayleigh wave dispersion curves

    NASA Astrophysics Data System (ADS)

    Song, Xianhai; Li, Lei; Zhang, Xueqiang; Huang, Jianquan; Shi, Xinchun; Jin, Si; Bai, Yiming

    2014-10-01

    In recent years, Rayleigh waves have been gaining popularity as a means of obtaining near-surface shear (S)-wave velocity profiles. However, inversion of Rayleigh wave dispersion curves is challenging for most local-search methods due to its high nonlinearity and multimodality. In this study, we proposed and tested a new Rayleigh wave dispersion curve inversion scheme based on the differential evolution (DE) algorithm. DE is a novel stochastic search approach that possesses several attractive advantages: (1) it is capable of handling non-differentiable, nonlinear and multimodal objective functions because of its stochastic search strategy; (2) it is parallelizable, so that computation-intensive objective functions can be handled without excessive run time by using a vector population in which the stochastic perturbation of the population vectors can be done independently; (3) it is easy to use, with few control variables to steer the minimization/maximization through DE's self-organizing scheme; and (4) it has good convergence properties. The proposed inverse procedure was applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DE, we first inverted four noise-free and four noisy synthetic data sets. Second, we investigated the effect of the number of layers on the DE algorithm and made an uncertainty appraisal analysis with it. Third, we made a comparative analysis with genetic algorithms (GA) on a synthetic data set to further investigate the performance of the proposed inverse procedure. Finally, we inverted a real-world example from a waste disposal site in NE Italy to examine the applicability of DE to Rayleigh wave dispersion curves, and we compared the performance of the proposed approach to that of GA to further evaluate the inverse procedure described here. Results from both synthetic and actual field data demonstrate that the differential evolution algorithm applied
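
    A minimal sketch of such an inversion loop is given below using SciPy's differential_evolution. The forward dispersion model is only a crude toy stand-in (a real implementation would solve the Rayleigh-wave dispersion equation, e.g. with a Thomson-Haskell type propagator), and the one-Vs-per-layer parameterization, layer thickness, and bounds are assumptions for illustration, not the authors' setup.

      import numpy as np
      from scipy.optimize import differential_evolution

      def toy_dispersion(vs, freqs, layer_thickness=5.0):
          """Toy forward model: phase velocity at each frequency is a
          depth-weighted average of layer S-wave velocities, with deeper layers
          weighted more at low frequency. Replace with a proper dispersion
          solver for real use."""
          depths = layer_thickness * (np.arange(len(vs)) + 0.5)
          c = []
          for f in freqs:
              wavelength = np.mean(vs) / f
              w = np.exp(-depths / wavelength)
              c.append(0.92 * np.sum(w * vs) / np.sum(w))
          return np.array(c)

      def misfit(vs, freqs, observed):
          return np.sqrt(np.mean((toy_dispersion(vs, freqs) - observed) ** 2))

      freqs = np.linspace(5.0, 50.0, 20)                     # Hz
      true_vs = np.array([180.0, 250.0, 400.0, 600.0])       # m/s, 4 layers
      observed = toy_dispersion(true_vs, freqs)

      bounds = [(100.0, 800.0)] * len(true_vs)               # one Vs per layer
      result = differential_evolution(misfit, bounds, args=(freqs, observed),
                                      popsize=20, maxiter=300, tol=1e-8, seed=0)
      print("recovered Vs profile:", result.x, "misfit:", result.fun)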

  10. Phase-distortion correction based on stochastic parallel proportional-integral-derivative algorithm for high-resolution adaptive optics

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Wu, Ke-nan; Gao, Hong; Jin, Yu-qi

    2015-02-01

    A novel optimization method, the stochastic parallel proportional-integral-derivative (SPPID) algorithm, is proposed for high-resolution phase-distortion correction in wave-front sensorless adaptive optics (WSAO). To enhance the global search and self-adaptation of the stochastic parallel gradient descent (SPGD) algorithm, the residual error of the performance metric and its temporal integration are added into the calculation of the incremental control signal. On the basis of the maximum fitting rate between the real wave-front and the corrector, a goal value of the metric is set as the reference. The residual error of the metric relative to this reference is transformed into proportional and integral terms to produce an adaptive step-size updating law for the SPGD algorithm. The adaptation of the step size leads the blind optimization toward the desired goal and helps it escape from local extrema. Unlike the conventional proportional-integral-derivative (PID) algorithm, the SPPID algorithm designs the incremental control signal as PI-by-D for adaptive adjustment of the control law in the SPGD algorithm. Experiments on high-resolution phase-distortion correction in "frozen" turbulence, based on optimization of influence-function coefficients, were carried out using a 128-by-128 spatial light modulator, a photodetector, and a control computer. Results revealed that the presented algorithm offered better performance in both cases. The step-size update based on the residual error and its temporal integration was shown to resolve the severe local lock-in problem of the SPGD algorithm used in high-resolution adaptive optics.
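
    To make the step-size adaptation concrete, the sketch below shows an SPGD loop whose gain is adjusted by proportional and integral terms of the metric's residual relative to a goal value, in the spirit of SPPID. The gain constants, perturbation amplitude, and the apply_and_measure interface are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def sppid_optimize(apply_and_measure, n_act, goal, iters=2000,
                         gain0=0.5, kp=0.3, ki=0.02, sigma=0.05, seed=0):
          """SPGD with a PI-adapted step size (SPPID-like sketch).

          apply_and_measure(u) applies control vector u to the wavefront
          corrector and returns the performance metric J (higher is better);
          `goal` is the reference value of J set from the best achievable fit
          between the wavefront and the corrector."""
          rng = np.random.default_rng(seed)
          u = np.zeros(n_act)
          integral = 0.0
          for _ in range(iters):
              du = sigma * rng.choice([-1.0, 1.0], size=n_act)  # parallel perturbation
              j_plus = apply_and_measure(u + du)
              j_minus = apply_and_measure(u - du)
              error = goal - 0.5 * (j_plus + j_minus)           # metric residual
              integral += error
              gain = gain0 + kp * error + ki * integral         # adaptive step size
              u = u + gain * (j_plus - j_minus) * du            # SPGD update
          return u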

  11. Coaxial plasma thrusters for high specific impulse propulsion

    NASA Technical Reports Server (NTRS)

    Schoenberg, Kurt F.; Gerwin, Richard A.; Barnes, Cris W.; Henins, Ivars; Mayo, Robert; Moses, Ronald, Jr.; Scarberry, Richard; Wurden, Glen

    1991-01-01

    A fundamental basis for coaxial plasma thruster performance is presented and the steady-state, ideal MHD properties of a coaxial thruster using an annular magnetic nozzle are discussed. Formulas for power usage, thrust, mass flow rate, and specific impulse are acquired and employed to assess thruster performance. The performance estimates are compared with the observed properties of an unoptimized coaxial plasma gun. These comparisons support the hypothesis that ideal MHD has an important role in coaxial plasma thruster dynamics.
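
    The quantities mentioned are related by the usual ideal-thruster expressions (thrust equals mass flow rate times exhaust velocity, specific impulse equals exhaust velocity divided by g0, jet power equals half the mass flow rate times the exhaust velocity squared). The short worked example below uses these textbook relations and illustrative numbers, not values or formulas from the paper.

      g0 = 9.80665          # standard gravity, m/s^2

      def thruster_performance(mdot, v_exhaust, efficiency=1.0):
          """Ideal steady-state figures of merit for a plasma thruster."""
          thrust = mdot * v_exhaust                 # N
          isp = v_exhaust / g0                      # s
          jet_power = 0.5 * mdot * v_exhaust ** 2   # W
          input_power = jet_power / efficiency      # W, for a given thrust efficiency
          return thrust, isp, jet_power, input_power

      # Example: 10 mg/s of propellant expelled at 50 km/s
      # -> thrust 0.5 N, Isp ~5100 s, jet power 12.5 kW (at 100% efficiency)
      print(thruster_performance(mdot=1e-5, v_exhaust=5e4))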

  12. Silicone oil with high specific gravity for intraocular use.

    PubMed Central

    Gabel, V P; Kampik, A; Gabel, C; Spiegel, D

    1987-01-01

    Silicone oil with a higher specific gravity than that of intraocular fluid or polydimethylsiloxane may have special indications in vitreoretinal surgery. Trifluorsiloxane is such a substance, and therefore its biological compatibility was investigated in rabbit eyes. It was found that this substance was clinically well tolerated within the observation time of up to 6 months, even though there was some neovascularisation from the inferior limbus. Histologically, both the inflammatory response and tissue impregnation were more pronounced than with normal polydimethylsiloxane. PMID:2437955

  13. High-resolution mass-selective UV spectroscopy of pseudoephedrine: evidence for conformer-specific fragmentation.

    PubMed

    Karaminkov, R; Chervenkov, S; Delchev, V; Neusser, H J

    2011-09-01

    Using resonance-enhanced two-photon ionization spectroscopy with mass resolution of jet-cooled molecules, a low-resolution S(1) ← S(0) vibronic spectrum of pseudoephedrine was recorded at the mass channels of three distinct fragments with m/z = 58, 71, and 85. Two of the fragments, with m/z = 71 and 85, are observed for the first time for this molecule. The vibronic spectra recorded at different mass channels feature different patterns, implying that they originate from different conformers in the cold molecular beam, following conformer-specific fragmentation pathways. Highly resolved spectra of all prominent vibronic features were measured, and from their analysis based on genetic algorithms, the molecular parameters of the conformers giving rise to the respective bands have been determined. Comparing the experimental results with those obtained from high-level ab initio quantum chemistry calculations, the observed prominent vibronic bands have been assigned to originate from four distinct conformers. The conformers are separated into two groups that have different fragmentation pathways determined by the different intramolecular interactions.

  14. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    SciTech Connect

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates or template-free implants (such as robotic templates). Methods: Eight clinical cases were chosen randomly from a bank of patients previously treated in our clinic to test our method. The 2D Centroidal Voronoi Tessellation (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen from different dosimetry criteria and will automatically provide the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, it was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested in breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable, since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
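
    A bare-bones version of the 2D CVT step used to spread catheters uniformly inside the target contour can be written as Lloyd's algorithm over points sampled within the PTV's maximum external contour. The sketch below omits the IPSA dose optimization and the plan-selection criteria, and all names are illustrative.

      import numpy as np

      def cvt_catheter_positions(mask_points, n_catheters, iters=50, seed=0):
          """Lloyd's algorithm sketch: mask_points is an (N, 2) array of points
          sampled inside the target contour; returns n_catheters generator
          positions distributed approximately uniformly over the region."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(mask_points), n_catheters, replace=False)
          generators = mask_points[idx].copy()
          for _ in range(iters):
              # assign every mask point to its nearest generator
              d = np.linalg.norm(mask_points[:, None, :] - generators[None, :, :], axis=2)
              nearest = np.argmin(d, axis=1)
              # move each generator to the centroid of its Voronoi cell
              for k in range(n_catheters):
                  members = mask_points[nearest == k]
                  if len(members):
                      generators[k] = members.mean(axis=0)
          return generators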

  15. An Efficient Algorithm for Some Highly Nonlinear Fractional PDEs in Mathematical Physics

    PubMed Central

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into the corresponding partial differential equations (PDEs), and subsequently the Reduced Differential Transform Method (RDTM) is applied to the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by means of the inverse transformation, which yields them in terms of the original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature. PMID:25525804

  16. Real-time high-resolution downsampling algorithm on many-core processor for spatially scalable video coding

    NASA Astrophysics Data System (ADS)

    Buhari, Adamu Muhammad; Ling, Huo-Chong; Baskaran, Vishnu Monn; Wong, KokSheik

    2015-01-01

    The progression toward spatially scalable video coding (SVC) solutions for ubiquitous endpoint systems introduces challenges to sustain real-time frame rates in downsampling high-resolution videos into multiple layers. In addressing these challenges, we put forward a hardware accelerated downsampling algorithm on a parallel computing platform. First, we investigate the principal architecture of a serial downsampling algorithm in the Joint-Scalable-Video-Model reference software to identify the performance limitations for spatially SVC. Then, a parallel multicore-based downsampling algorithm is studied as a benchmark. Experimental results for this algorithm using an 8-core processor exhibit performance speedup of 5.25× against the serial algorithm in downsampling a quad extended graphics array at 1536p video resolution into three lower resolution layers (i.e., Full-HD at 1080p, HD at 720p, and Quarter-HD at 540p). However, the achieved speedup here does not translate into the minimum required frame rate of 15 frames per second (fps) for real-time video processing. To improve the speedup, a many-core based downsampling algorithm using the compute unified device architecture parallel computing platform is proposed. The proposed algorithm increases the performance speedup to 26.14× against the serial algorithm. Crucially, the proposed algorithm exceeds the target frame rate of 15 fps, which in turn is advantageous to the overall performance of the video encoding process.

  17. Optimization of the K-means algorithm for the solution of high dimensional instances

    NASA Astrophysics Data System (ADS)

    Pérez, Joaquín; Pazos, Rodolfo; Olivares, Víctor; Hidalgo, Miguel; Ruiz, Jorge; Martínez, Alicia; Almanza, Nelva; González, Moisés

    2016-06-01

    This paper addresses the problem of clustering instances with a high number of dimensions. In particular, a new heuristic for reducing the complexity of the K-means algorithm is proposed. Traditionally, there are two approaches that deal with the clustering of instances with high dimensionality. The first executes a preprocessing step to remove those attributes of limited importance. The second, called divide and conquer, creates subsets that are clustered separately and later their results are integrated through post-processing. In contrast, this paper proposes a new solution which consists of reducing the number of distance calculations from the objects to the centroids in the classification step. This heuristic is derived from visual observation of the clustering process of K-means, in which it was found that objects can only migrate to adjacent clusters without crossing distant clusters. Therefore, this heuristic can significantly reduce the number of distance calculations from an object to the centroids of the potential clusters to which it may be assigned. To validate the proposed heuristic, a set of experiments with synthetic, high-dimensional instances was designed. One of the most notable results was obtained for an instance of 25,000 objects and 200 dimensions, where the execution time was reduced by up to 96.5% and the quality of the solution decreased by only 0.24% when compared to the K-means algorithm.
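
    The heuristic can be pictured as follows: after an initial full assignment, each object is compared only against its current centroid and that centroid's few nearest centroids, since objects were observed to migrate only to adjacent clusters. The sketch below is our reading of that idea, not the authors' code; the number of neighbouring centroids considered is an illustrative parameter.

      import numpy as np

      def kmeans_neighbor_restricted(X, k, n_neighbors=3, iters=20, seed=0):
          """K-means sketch with a restricted assignment step: each object is
          compared only to its current centroid and that centroid's
          n_neighbors nearest centroids."""
          rng = np.random.default_rng(seed)
          centroids = X[rng.choice(len(X), k, replace=False)].copy()
          labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
          for _ in range(iters):
              # neighbors of each centroid (column 0 is the centroid itself)
              cd = np.linalg.norm(centroids[:, None] - centroids[None], axis=2)
              neigh = np.argsort(cd, axis=1)[:, : n_neighbors + 1]
              # restricted assignment step
              for i, x in enumerate(X):
                  cand = neigh[labels[i]]
                  dist = np.linalg.norm(centroids[cand] - x, axis=1)
                  labels[i] = cand[np.argmin(dist)]
              # centroid update step
              for j in range(k):
                  members = X[labels == j]
                  if len(members):
                      centroids[j] = members.mean(axis=0)
          return labels, centroids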

  18. A truncated Levenberg-Marquardt algorithm for the calibration of highly parameterized nonlinear models

    SciTech Connect

    Finsterle, S.; Kowalsky, M.B.

    2010-10-15

    We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.
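
    The flavour of the update can be conveyed with a small sketch: the Jacobian is truncated by singular value, and the damped step is formed only within the retained solution space. The names, the truncation rule, and the handling of the damping parameter below are illustrative simplifications of the scheme described in the abstract.

      import numpy as np

      def tsvd_lm_step(J, residual, lam, svd_cutoff=1e-3):
          """One Levenberg-Marquardt parameter update using a truncated SVD of
          the Jacobian J (rows: observations, columns: parameters). Singular
          values below svd_cutoff times the largest one are treated as the
          calibration null space and discarded; lam is the damping parameter."""
          U, s, Vt = np.linalg.svd(J, full_matrices=False)
          keep = s >= svd_cutoff * s[0]
          U, s, Vt = U[:, keep], s[keep], Vt[keep]
          # damped least-squares step within the retained solution space
          return Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ residual))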

  19. Algorithms for Low-Cost High Accuracy Geomagnetic Measurements in LEO

    NASA Astrophysics Data System (ADS)

    Beach, T. L.; Zesta, E.; Allen, L.; Chepko, A.; Bonalsky, T.; Wendel, D. E.; Clavier, O.

    2013-12-01

    Geomagnetic field measurements are a fundamental, key parameter measurement for any space weather application, particularly for tracking the electromagnetic energy input in the Ionosphere-Thermosphere system and for high latitude dynamics governed by the large-scale field-aligned currents. The full characterization of the Magnetosphere-Ionosphere-Thermosphere coupled system necessitates measurements with higher spatial/temporal resolution and from multiple locations simultaneously. This becomes extremely challenging in the current state of shrinking budgets. Traditionally, including a science-grade magnetometer in a mission necessitates very costly integration and design (sensor on long boom) and imposes magnetic cleanliness restrictions on all components of the bus and payload. This work presents an innovative algorithm approach that enables high quality magnetic field measurements by one or more high-quality magnetometers mounted on the spacecraft without booms. The algorithm estimates the background field using multiple magnetometers and current telemetry on board a spacecraft. Results of a hardware-in-the-loop simulation showed an order of magnitude reduction in the magnetic effects of spacecraft onboard time-varying currents--from 300 nT to an average residual of 15 nT.

  20. Context-specific selection of algorithms for recursive feature tracking in endoscopic image using a new methodology.

    PubMed

    Selka, F; Nicolau, S; Agnus, V; Bessaid, A; Marescaux, J; Soler, L

    2015-03-01

    In minimally invasive surgery, the tracking of deformable tissue is a critical component of image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color, ...). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images is still difficult and is not standardized, mainly because of the lack of ground truth for real data. Moreover, no quantitative work has been undertaken to evaluate the benefit of a pre-processing step based on image filtering, which can improve feature tracking robustness and avoid the need for supplementary outlier-removal techniques. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a trick based on forward-backward tracking that provides artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework to propose a strategy to identify the best combinations from a set of detector, tracker and pre-processing algorithms, according to the live intra-operative data. Experiments performed on in vivo datasets show that pre-processing can have a strong influence on tracking performance and that our strategy for finding the best combinations is relevant for a reasonable computation cost.
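
    The forward-backward trick can be sketched with OpenCV's pyramidal Lucas-Kanade tracker: features are tracked from frame A to frame B and back, and the distance between the original and back-tracked positions serves as an artificial ground-truth error. The helper below is a generic sketch, not the authors' evaluation framework.

      import cv2
      import numpy as np

      def forward_backward_error(img0, img1, points):
          """Track features img0 -> img1 -> img0 and return the per-feature
          forward-backward error; large values flag unreliable tracks."""
          p0 = points.astype(np.float32).reshape(-1, 1, 2)
          p1, st1, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
          p0_back, st2, _ = cv2.calcOpticalFlowPyrLK(img1, img0, p1, None)
          fb_err = np.linalg.norm(p0 - p0_back, axis=2).ravel()
          valid = (st1.ravel() == 1) & (st2.ravel() == 1)
          return fb_err, valid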

  1. A protein multiplex microarray substrate with high sensitivity and specificity

    PubMed Central

    Fici, Dolores A.; McCormick, William; Brown, David W.; Herrmann, John E.; Kumar, Vikram; Awdeh, Zuheir L.

    2010-01-01

    The problems that have been associated with protein multiplex microarray immunoassay substrates and existing technology platforms include binding, sensitivity, a low signal-to-noise ratio, target immobilization, and the optimal simultaneous detection of diverse protein targets. Current commercial substrates for planar multiplex microarrays rely on protein attachment chemistries that range from covalent attachment to affinity ligand capture, to simple adsorption. In this pilot study, experimental performance parameters for direct monoclonal mouse IgG detection were compared for available two- and three-dimensional slide surface coatings with a new colloidal nitrocellulose substrate. New technology multiplex microarrays were also developed and evaluated for the detection of pathogen-specific antibodies in human serum and the direct detection of enteric viral antigens. The data support the nitrocellulose colloid as an effective reagent with the capacity to immobilize sufficient quantities of diverse protein targets for increased specific signal without compromising authentic protein structure. The nitrocellulose colloid reagent is compatible with the array spotters and scanners routinely used for microarray preparation and processing. More importantly, as an alternative to fluorescence, colorimetric chemistries may be used for specific and sensitive protein target detection. The advantages of the nitrocellulose colloid platform indicate that this technology may be a valuable tool for the further development and expansion of multiplex microarray immunoassays in both the clinical and research laboratory environment. PMID:20974147

  2. Simulation of Trajectories for High Specific Impulse Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Difficulties in approximating flight times and deliverable masses for continuous thrust propulsion systems have complicated comparison and evaluation of proposed propulsion concepts. These continuous thrust propulsion systems are of interest to many groups, not the least of which are the electric propulsion and fusion communities. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. The analytical method derived in the companion paper was also used to simulate the trajectory. The accuracy of this method is discussed in the paper.

  3. High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    2004-01-01

    The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R(sub rs)(lambda), where R(sub rs)(lambda) is defined as the water-leaving radiance, L(sub w)(lambda), divided by the downwelling irradiance just above the sea surface, E(sub d)(lambda,0(+)). The R(sub rs)(lambda) model (Section 3) has two free variables, the absorption coefficient due to phytoplankton at 675 nm, a(sub phi)(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a(sub g)(400). The R(rs) model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene. These control the spectral shapes of the optical constituents of the model. R(sub rs)(lambda(sub i)) values from the MODIS data processing system are placed into the model, the model is inverted, and a(sub phi)(675), a(sub g)(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi

  4. Micro-channel-based high specific power lithium target

    NASA Astrophysics Data System (ADS)

    Mastinu, P.; Martín-Hernández, G.; Praena, J.; Gramegna, F.; Prete, G.; Agostini, P.; Aiello, A.; Phoenix, B.

    2016-11-01

    A micro-channel-based heat sink has been produced and tested. The device has been developed to be used as a lithium target for the LENOS (Legnaro Neutron Source) facility and for the production of radioisotopes. Nevertheless, applications of such a device span many areas: cooling of electronic devices, diode laser arrays, automotive applications, etc. The target has been tested using a 2.8 MeV proton beam delivering total power shots from 100 W to 1500 W, with beam spots varying from 5 mm2 to 19 mm2. Since the target has been designed to be used with a thin deposit of lithium, and since lithium is a low-melting-point material, we have measured that, for such an application, a specific power of about 3 kW/cm2 can be delivered to the target while keeping the maximum surface temperature below 150 °C.

  5. BayesCall: A model-based base-calling algorithm for high-throughput short-read sequencing.

    PubMed

    Kao, Wei-Chun; Stevens, Kristian; Song, Yun S

    2009-10-01

    Extracting sequence information from raw images of fluorescence is the foundation underlying several high-throughput sequencing platforms. Some of the main challenges associated with this technology include reducing the error rate, assigning accurate base-specific quality scores, and reducing the cost of sequencing by increasing the throughput per run. To demonstrate how computational advancement can help to meet these challenges, a novel model-based base-calling algorithm, BayesCall, is introduced for the Illumina sequencing platform. Being founded on the tools of statistical learning, BayesCall is flexible enough to incorporate various features of the sequencing process. In particular, it can easily incorporate time-dependent parameters and model residual effects. This new approach significantly improves the accuracy over Illumina's base-caller Bustard, particularly in the later cycles of a sequencing run. For 76-cycle data on a standard viral sample, phiX174, BayesCall improves Bustard's average per-base error rate by approximately 51%. The probability of observing each base can be readily computed in BayesCall, and this probability can be transformed into a useful base-specific quality score with a high discrimination ability. A detailed study of BayesCall's performance is presented here. PMID:19661376

  6. A high precision phase reconstruction algorithm for multi-laser guide stars adaptive optics

    NASA Astrophysics Data System (ADS)

    He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai; Xuan, Li

    2016-09-01

    Adaptive optics (AO) systems are widespread and are at present considered an essential part of any large-aperture telescope for obtaining high-resolution imaging. To enlarge the imaging field of view (FOV), multi-laser guide stars (LGSs) are currently being investigated and used for large-aperture optical telescopes. LGS measurement is necessary and pivotal to obtain the cumulative phase distortion along the line of sight to a target in the multi-LGS AO system. We propose a high-precision phase reconstruction algorithm, based on interpolation, to estimate the phase for a target with an uncertain turbulence profile. Compared with the conventional average method, the proposed method reduces the root mean square (RMS) error from 130 nm to 85 nm, a 30% reduction, for narrow FOV. We confirm that this phase reconstruction algorithm is valid for both narrow-field AO and wide-field AO. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.

  7. The Optimized Block-Regression Fusion Algorithm for Pansharpening of Very High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, J. X.; Yang, J. H.; Reinartz, P.

    2016-06-01

    Pan-sharpening of very high resolution remotely sensed imagery needs to enhance spatial details while preserving spectral characteristics, and to allow the sharpened result to be adjusted to realize different emphases between the two abilities. In order to meet these requirements, this paper aims to provide an innovative solution. The block-regression-based algorithm (BR), which was previously presented for the fusion of SAR and optical imagery, is first applied to sharpen very high resolution satellite imagery, and the important parameter for adjusting the fusion result, i.e., the block size, is optimized according to two experiments on WorldView-2 and QuickBird datasets, in which the optimal block size is selected through quantitative comparison of the fusion results for different block sizes. Compared to five fusion algorithms (PC, CN, AWT, Ehlers, BDF) by means of quantitative analysis, BR is reliable for different data sources and can maximize the enhancement of spatial details at the expense of minimal spectral distortion.

  8. Monte Carlo cluster algorithm for fluid phase transitions in highly size-asymmetrical binary mixtures.

    PubMed

    Ashton, Douglas J; Liu, Jiwen; Luijten, Erik; Wilding, Nigel B

    2010-11-21

    Highly size-asymmetrical fluid mixtures arise in a variety of physical contexts, notably in suspensions of colloidal particles to which much smaller particles have been added in the form of polymers or nanoparticles. Conventional schemes for simulating models of such systems are hamstrung by the difficulty of relaxing the large species in the presence of the small one. Here we describe how the rejection-free geometrical cluster algorithm of Liu and Luijten [J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004)] can be embedded within a restricted Gibbs ensemble to facilitate efficient and accurate studies of fluid phase behavior of highly size-asymmetrical mixtures. After providing a detailed description of the algorithm, we summarize the bespoke analysis techniques of [Ashton et al., J. Chem. Phys. 132, 074111 (2010)] that permit accurate estimates of coexisting densities and critical-point parameters. We apply our methods to study the liquid-vapor phase diagram of a particular mixture of Lennard-Jones particles having a 10:1 size ratio. As the reservoir volume fraction of small particles is increased in the range of 0%-5%, the critical temperature decreases by approximately 50%, while the critical density drops by some 30%. These trends imply that in our system, adding small particles decreases the net attraction between large particles, a situation that contrasts with hard-sphere mixtures where an attractive depletion force occurs.

  9. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    NASA Astrophysics Data System (ADS)

    Taylor, Richard

    The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully-integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables of Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated in the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and examination of the algorithms scaling with cell size and symmetry is also reported.

  10. A high precision phase reconstruction algorithm for multi-laser guide stars adaptive optics

    NASA Astrophysics Data System (ADS)

    He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai; Xuan, Li

    2016-09-01

    Adaptive optics (AO) systems are widespread and are at present considered an essential part of any large-aperture telescope for obtaining high-resolution imaging. To enlarge the imaging field of view (FOV), multi-laser guide stars (LGSs) are currently being investigated and used for large-aperture optical telescopes. LGS measurement is necessary and pivotal to obtain the cumulative phase distortion along the line of sight to a target in the multi-LGS AO system. We propose a high-precision phase reconstruction algorithm, based on interpolation, to estimate the phase for a target with an uncertain turbulence profile. Compared with the conventional average method, the proposed method reduces the root mean square (RMS) error from 130 nm to 85 nm, a 30% reduction, for narrow FOV. We confirm that this phase reconstruction algorithm is valid for both narrow-field AO and wide-field AO. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.

  11. Defining and Evaluating Classification Algorithm for High-Dimensional Data Based on Latent Topics

    PubMed Central

    Luo, Le; Li, Li

    2014-01-01

    Automatic text categorization is one of the key techniques in information retrieval and the data mining field. The classification is usually time-consuming when the training dataset is large and high-dimensional. Many methods have been proposed to solve this problem, but few can achieve satisfactory efficiency. In this paper, we present a method which combines the Latent Dirichlet Allocation (LDA) algorithm and the Support Vector Machine (SVM). LDA is first used to generate a reduced-dimensional representation of topics as features in the vector space model. It reduces the number of features dramatically while keeping the necessary semantic information. The SVM is then employed to classify the data based on the generated features. We evaluate the algorithm on the 20 Newsgroups and Reuters-21578 datasets, respectively. The experimental results show that classification based on our proposed LDA+SVM model achieves high performance in terms of precision, recall and F1 measure, and can do so within a much shorter time frame. Our process improves greatly upon previous work in this field and displays strong potential to achieve a streamlined classification process for a wide range of applications. PMID:24416136
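
    A compact scikit-learn pipeline reproduces the overall scheme (bag-of-words counts reduced to latent topics by LDA, then a linear SVM on the topic features). The vocabulary size and number of topics below are illustrative, and results will differ from the paper's.

      from sklearn.datasets import fetch_20newsgroups
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      train = fetch_20newsgroups(subset="train")
      test = fetch_20newsgroups(subset="test")

      # Counts -> latent topics (dimensionality reduction) -> linear SVM
      clf = make_pipeline(
          CountVectorizer(max_features=20000, stop_words="english"),
          LatentDirichletAllocation(n_components=100, random_state=0),
          LinearSVC(),
      )
      clf.fit(train.data, train.target)
      print("accuracy:", clf.score(test.data, test.target))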

  12. Reprint of "pFind-Alioth: A novel unrestricted database search algorithm to improve the interpretation of high-resolution MS/MS data".

    PubMed

    Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min

    2015-11-01

    Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested. This article is part of a Special Issue entitled: Computational Proteomics.

  13. pFind-Alioth: A novel unrestricted database search algorithm to improve the interpretation of high-resolution MS/MS data.

    PubMed

    Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min

    2015-07-01

    Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested.

  14. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
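
    The heart of the method, choosing the linear combination of reference images that minimizes the residual inside each subsection, is an ordinary least-squares problem. The sketch below solves it for a single subsection; the flattened-array shapes and names are illustrative, and the full algorithm repeats this over many subsections with additional bookkeeping.

      import numpy as np

      def optimal_reference_psf(science_section, reference_sections):
          """Least-squares combination of reference images for one subsection.

          science_section    : (n_pix,) flattened subsection of the science image
          reference_sections : (n_ref, n_pix) same subsection from each reference
          Returns the combination coefficients and the residual after subtraction."""
          A = reference_sections.T                      # n_pix x n_ref
          coeffs, *_ = np.linalg.lstsq(A, science_section, rcond=None)
          reference = A @ coeffs
          return coeffs, science_section - reference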

  15. Some algorithmic issues in full-waveform inversion of teleseismic data for high-resolution lithospheric imaging

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Beller, Stephen; Nolet, Guust; Operto, Stephane; Brossier, Romain; Métivier, Ludovic; Paul, Anne; Virieux, Jean

    2014-05-01

    The current development of dense seismic arrays and high performance computing makes the application of full-waveform inversion (FWI) to teleseismic data for high-resolution lithospheric imaging feasible today. In the teleseismic configuration, the source is to first order a plane wave that impinges on the base of the lithospheric target located below the receiver array. In this setting, FWI aims to exploit not only the forward-scattered waves propagating up to the receivers but also second-order arrivals that are back-scattered from the free surface and from reflectors before being recorded at the surface. FWI requires full-wave modeling methods such as finite-difference or finite-element methods. In this framework, careful design of FWI algorithms is key to mitigating as much as possible the computational burden of multi-source full-waveform modeling. In this presentation, we review some key specifications that might be considered for a versatile FWI implementation. First, an abstraction level between the forward and inverse problems allows different modeling engines to be interfaced with the inversion; this requires the subsurface meshes used to perform seismic modeling and to update the subsurface models during inversion to be fully independent, through back-and-forth projection processes. Second, the subsurface parameterization should be carefully chosen during multi-parameter FWI, as it controls the trade-off between parameters of different nature; a versatile FWI algorithm should be designed such that different subsurface parameterizations for the model update can be easily implemented. Third, the gradient of the misfit function should be computed as easily as possible with the adjoint-state method in a parallel environment. This first requires the gradient to be independent of the discretization method used to perform seismic modeling. Second, the incident and adjoint wavefields should be computed with the same numerical scheme, even if the forward problem

  16. The gas dynamics of fluids of high specific heat

    NASA Astrophysics Data System (ADS)

    Meier, Gerd E. A.; Mueller, E.-A.

    1987-12-01

    Effects in the gas dynamics of real fluids other than those in ideal substances are reviewed. Complete adiabatic liquefaction and evaporation are possible for substances whose specific heat exceeds a limit of 20 gas constants. These fluids, consisting of large molecules, have so much internal energy storage capacity in their vibrational degrees of freedom that the heat of evaporation can be supplied or also stored in the case of condensation. Thus liquefaction shock waves, which transform a gas completely or partly into a liquid, are possible. In compression waves and the expansion waves, wave bifurcations which are caused by the splitting of isentropes at the phase boundaries can be observed. In steady flow in nozzles and free jets, Mach number discontinuities arise because of phase transition. The discontinuities are not accompanied by a pressure jump and are characterized by a jump in the sound velocity of the fluid. This is part of the wave bifurcation, which means for steady flow a separation of the waves of different speeds at different locations of the flow nozzle or the jet. In the case of free jets of supersaturated fluids flowing out of a nozzle, the explosion-like disintegration of the jet by propagation of the evaporation wave in three dimensions and the extremely large jet angles are noted.

  17. Range-Specific High-resolution Mesoscale Model Setup

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.

    2013-01-01

    This report summarizes the findings from an AMU task to determine the best model configuration for operational use at the ER and WFF to best predict winds, precipitation, and temperature. The AMU ran test cases in the warm and cool seasons at the ER and for the spring and fall seasons at WFF. For both the ER and WFF, the ARW core outperformed the NMM core. Results for the ER indicate that the combination of the Lin microphysical scheme and the YSU PBL scheme is the optimal model configuration for the ER. It consistently produced the best surface and upper air forecasts, while performing fairly well for the precipitation forecasts. Both the Ferrier and Lin microphysical schemes in combination with the YSU PBL scheme performed well for WFF in the spring and fall seasons. The AMU has been tasked with a follow-on modeling effort to recommend a local DA and numerical forecast model design optimized for both the ER and WFF to support space launch activities. The AMU will determine the best software and type of assimilation to use, as well as determine the best grid resolution for the initialization based on the spatial and temporal availability of data and the wall clock run-time of the initialization. The AMU will transition from the WRF EMS to NU-WRF, a NASA-specific version of the WRF that takes advantage of unique NASA software and datasets.

  18. Application of artificial bee colony (ABC) algorithm in search of optimal release of Aswan High Dam

    NASA Astrophysics Data System (ADS)

    Hossain, Md S.; El-shafie, A.

    2013-04-01

    The paper presents a study on developing an optimum reservoir release policy using the ABC algorithm. The decision maker of a reservoir system always needs a guideline to operate the reservoir in an optimal way. Release curves have been developed for high, medium and low inflow categories that indicate how much water needs to be released in a month given the observed reservoir level (storage condition). The Aswan High Dam of Egypt is considered as the case study. Eighteen years of historical inflow data have been used for simulation purposes, and the general system performance indices have been measured. The application procedure and problem formulation of ABC are very simple, and the algorithm can be used to optimize reservoir systems. Using the actual historical inflow, the release policy succeeded in meeting demand for about 98% of the total time period.

  19. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  20. High effective algorithm of the detection and identification of substance using the noisy reflected THz pulse

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.

    2015-08-01

    Principal limitations of the standard THz-TDS method for detection and identification are demonstrated under real conditions (at a long distance of about 3.5 m and at a high relative humidity of more than 50%) using neutral substances: a thick paper bag, paper napkins and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even when the THz signals are measured in laboratory conditions (at a distance of 30-40 cm from the receiver and at a low relative humidity of less than 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and is reliable to implement in practice, especially for security applications and non-destructive testing.

  1. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-08-19

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
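
    The key computational saving, solving each damped least-squares subproblem iteratively in a Krylov subspace instead of factorizing the normal equations, can be illustrated with SciPy's LSQR. The sketch below handles a single damping parameter and does not show the subspace recycling across damping parameters described in the abstract; names and tolerances are illustrative.

      from scipy.sparse.linalg import lsqr

      def krylov_lm_step(J, residual, damping):
          """One Levenberg-Marquardt update computed iteratively with LSQR,
          i.e. argmin_d ||J d - residual||^2 + damping^2 ||d||^2, avoiding an
          explicit factorization of the (large, dense) normal equations."""
          result = lsqr(J, residual, damp=damping, atol=1e-8, btol=1e-8)
          return result[0]   # the parameter update d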

  2. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained by processing this multilook data for the high resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.

  3. High Specific Power Motors in LN2 and LH2

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Jansen, Ralph H.; Trudell, Jeffrey J.

    2007-01-01

    A switched reluctance motor has been operated in liquid nitrogen (LN2) with a power density as high as that reported for any motor or generator. The high performance stems from the low resistivity of Cu at LN2 temperature and from the geometry of the windings, the combination of which permits steady-state rms current density up to 7000 A/cm², about 10 times that possible in coils cooled by natural convection at room temperature. The Joule heating in the coils is conducted to the end turns for rejection to the LN2 bath. Minimal heat rejection occurs in the motor slots, preserving that region for conductor. In the end turns, the conductor layers are spaced to form a heat-exchanger-like structure that permits nucleate boiling over a large surface area. Although tests were performed in LN2 for convenience, this motor was designed as a prototype for use with liquid hydrogen (LH2) as the coolant. End-cooled coils would perform even better in LH2 because of further increases in copper electrical and thermal conductivities. Thermal analyses comparing LN2 and LH2 cooling are presented verifying that end-cooled coils in LH2 could be either much longer or could operate at higher current density without thermal runaway than in LN2.

  4. High Specific Power Motors in LN2 and LH2

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Jansen, Ralph H.; Trudell, Jeffrey J.

    2007-01-01

    A switched reluctance motor has been operated in liquid nitrogen (LN2) with a power density as high as that reported for any motor or generator. The high performance stems from the low resistivity of Cu at LN2 temperature and from the geometry of the windings, the combination of which permits steady-state rms current density up to 7000 A/sq cm, about 10 times that possible in coils cooled by natural convection at room temperature. The Joule heating in the coils is conducted to the end turns for rejection to the LN2 bath. Minimal heat rejection occurs in the motor slots, preserving that region for conductor. In the end turns, the conductor layers are spaced to form a heat-exchanger-like structure that permits nucleate boiling over a large surface area. Although tests were performed in LN2 for convenience, this motor was designed as a prototype for use with liquid hydrogen (LH2) as the coolant. End-cooled coils would perform even better in LH2 because of further increases in copper electrical and thermal conductivities. Thermal analyses comparing LN2 and LH2 cooling are presented verifying that end-cooled coils in LH2 could be either much longer or could operate at higher current density without thermal runaway than in LN2.

  5. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-10-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection. With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of the classifier designed with stepwise feature selection.

  Highly specific transgene expression mediated by a complex adenovirus vector incorporating a prostate-specific amplification feedback loop

    PubMed Central

    Woraratanadharm, Jan; Rubinchik, Semyon; Yu, Hong; Fan, Fan; Morrow, Scotty M.; Dong, John Y.

    2007-01-01

    Development of novel therapeutic agents is needed to address the problems of locally recurrent, metastatic, and advanced hormone-refractory prostate cancer. We have constructed a novel complex adenovirus (Ad) vector regulation system that incorporates both the prostate-specific ARR2PB promoter and a positive feedback loop using the TRE promoter to enhance gene expression. This regulation strategy involves the incorporation of the TRE upstream of the prostate-specific ARR2PB promoter to enhance its activity under Tet regulation. The expression of both GFP and tTA was placed under the control of these TRE-ARR2PB promoters, so that in cells of prostate origin a positive feedback loop would be generated. This design greatly enhanced GFP reporter expression in prostate cancer cells, while retaining tight control of expression in non-prostate cancer cells, even at an MOI as high as 1000. This novel positive feedback loop with prostate specificity (PFLPS) regulation system may have broad applications: it could express high levels of toxic proteins in cancer cells, or alternatively be manipulated to regulate essential genes in a highly efficient conditionally replicative adenovirus (CRAd) vector specifically directed to prostate cancer cells. The PFLPS regulation system therefore serves as a promising new approach to the development of a vector for cancer gene therapy that is both specific and effective. PMID:15229631

  6. A high-throughput screening for phosphatases using specific substrates.

    PubMed

    Senn, Alejandro M; Wolosiuk, Ricardo A

    2005-04-01

    A high-throughput screening was developed for the detection of phosphatase activity in bacterial colonies. Unlike other methods, the current procedure can be applied to any phosphatase because it uses physiological substrates and detects the product common to all phosphatase reactions, that is, orthophosphate. In this method, substrates diffuse from a filter paper across a nitrocellulose membrane to bacterial colonies situated on the opposite face, and then reaction products flow back to the paper. Finally, a colorimetric reagent discloses the presence of orthophosphate in the filter paper. We validated the performance of this assay with several substrates and experimental conditions and with different phosphatases, including a library of randomly mutagenized rapeseed chloroplast fructose-1,6-bisphosphatase. This procedure could be extended to other enzymatic activities provided that an appropriate detection of reaction products is available.

  7. Specific Analysis of Web Camera and High Resolution Planetary Imaging

    NASA Astrophysics Data System (ADS)

    Park, Youngsik; Lee, Dongju; Jin, Ho; Han, Wonyong; Park, Jang-Hyun

    2006-12-01

    A web camera is usually used for video communication between PCs; it has a small sensing area and cannot be used for long exposures, so it is generally insufficient for astronomical applications. However, a web camera is suitable for bright targets such as planets and the Moon, which do not require long exposure times, so many amateur astronomers use web cameras for planetary imaging. We used a ToUcam manufactured by Philips for planetary imaging and the commercial program Registax for combining the video frames. We then measured properties of the web camera, such as linearity and gain, that are usually used to analyse CCD performance. Because the combining technique selects high-quality frames from the video stream, this method can produce higher-resolution planetary images than a single exposure on film, a digital camera, or a CCD. We describe a planetary observing method and a video frame combination method.
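
    The frame-selection-and-stacking idea (keep only the sharpest video frames, then average them to raise the SNR) can be illustrated with the short NumPy sketch below; it is a generic stand-in rather than the Registax implementation, and real stacking pipelines also register the frames before averaging. Names and thresholds are illustrative.

        import numpy as np

        def sharpness(frame):
            # simple focus metric: variance of a Laplacian-like second difference
            lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
                   np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4.0 * frame)
            return lap.var()

        def stack_best_frames(frames, keep_fraction=0.2):
            # keep the sharpest fraction of frames and average them
            scores = np.array([sharpness(f) for f in frames])
            n_keep = max(1, int(len(frames) * keep_fraction))
            best = np.argsort(scores)[-n_keep:]
            return np.mean([frames[i] for i in best], axis=0)

        # toy usage: 100 noisy 64x64 frames of the same synthetic "planet"
        rng = np.random.default_rng(0)
        yy, xx = np.mgrid[0:64, 0:64]
        planet = (((xx - 32) ** 2 + (yy - 32) ** 2) < 15 ** 2).astype(float)
        frames = [planet + rng.normal(0, 0.5, planet.shape) for _ in range(100)]
        print(stack_best_frames(frames).shape)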

  8. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  9. The wrapper: a surface optimization algorithm that preserves highly curved areas

    NASA Astrophysics Data System (ADS)

    Gueziec, Andre P.; Dean, David

    1994-09-01

    Software to construct polygonal models of anatomical structures embedded as isosurfaces in 3D medical images has been available since the mid 1970s. Such models are used for visualization, simulation, measurements (single and multi-modality image registration), and statistics. When working with standard MR- or CT-scans, the surface obtained can contain several million triangles. These models contain data an order of magnitude larger than that which can be efficiently handled by current workstations or transmitted through networks. These algorithms generally ignore efficient combinations that would produce fewer, well-shaped triangles. An efficient algorithm must not create a larger data structure than is present in the raw data. Recently, much research has been done on the simplification and optimization of surfaces ([Moore and Warren, 1991]; [Schroeder et al., 1992]; [Turk, 1992]; [Hoppe et al., 1993]; [Kalvin and Taylor, 1994]). All of these algorithms satisfy two criteria, consistency and accuracy, to some degree. Consistent simplification occurs via predictable patterns. Accuracy is measured in terms of fidelity to the original surface, and is a prerequisite for collecting reliable measurements from the simplified surface. We describe the 'Wrapper' algorithm, which simplifies triangulated surfaces while preserving the same topological characteristics. We employ the same simplification operation in all cases. However, simplification is restricted but not forbidden in high-curvature areas. This hierarchy of operations results in homogeneous triangle aspect and size. Images undergoing compression ratios between 10:1 and 20:1 are visually identical to full-resolution images. More importantly, the metric accuracy of the simplified surfaces appears to be unimpaired. Measurements based upon 'ridge curves' (sensu [Cutting et al., 1993]) extracted on polygonal models were recently introduced [Ayache et al., 1993]. We compared ridge curves digitized from full resolution

  10. Shadow Detection from Very High Resolution Satellite Image Using Grabcut Segmentation and Ratio-Band Algorithms

    NASA Astrophysics Data System (ADS)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can help further understanding of the built environment. However, to extract shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopted two approaches, considered current state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refined the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects was examined using the true colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only a visible spectrum range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the derived shadow map from the Quickbird image indicates significant performance of
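
    A minimal sketch of a ratio-band style shadow candidate map is given below, assuming co-registered NIR and visible bands supplied as NumPy arrays; the simple darkness test and the threshold values are purely illustrative and do not reproduce the authors' refined workflow.

        import numpy as np

        def shadow_candidates(nir, red, green, blue, dark_threshold=0.2, ratio_threshold=1.0):
            # combine a darkness test on the visible bands with a pixel-wise NIR/visible
            # ratio test; suitable thresholds are scene dependent and are set here arbitrarily
            visible = (red + green + blue) / 3.0
            ratio = (nir + 1e-6) / (visible + 1e-6)
            return (visible < dark_threshold) & (ratio > ratio_threshold)

        # toy usage with random reflectance-like data in [0, 1]
        rng = np.random.default_rng(0)
        bands = {name: rng.random((100, 100)) for name in ("nir", "red", "green", "blue")}
        mask = shadow_candidates(**bands)
        print(mask.mean())   # fraction of pixels flagged as shadow candidates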

  11. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
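
    As a hedged illustration of the two families of methods mentioned (nested one-dimensional quadrature versus quasi-Monte Carlo), the sketch below evaluates a simple separable 4-D integral both ways with SciPy; the integrand, limits, and tolerances are made up and this is not the SNS analysis code.

        import numpy as np
        from scipy import integrate
        from scipy.stats import qmc

        def f(x0, x1, x2, x3):
            # made-up smooth 4-D integrand with a known answer on [0, 1]^4
            return np.cos(x0) * np.cos(x1) * np.cos(x2) * np.cos(x3)

        # 1) nested one-dimensional adaptive quadrature
        val_quad, err = integrate.nquad(f, [[0, 1]] * 4)

        # 2) quasi-Monte Carlo with a scrambled Sobol sequence
        pts = qmc.Sobol(d=4, scramble=True, seed=0).random_base2(m=14)   # 2**14 points
        val_qmc = np.mean(f(pts[:, 0], pts[:, 1], pts[:, 2], pts[:, 3]))

        print(val_quad, val_qmc, np.sin(1.0) ** 4)   # exact value is sin(1)^4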

  12. Shift and Mean Algorithm for Functional Imaging with High Spatio-Temporal Resolution.

    PubMed

    Rama, Sylvain

    2015-01-01

    Understanding neuronal physiology requires recording electrical activity in many small and remote compartments such as dendrites, axons, or dendritic spines. To do so, electrophysiology has long been the tool of choice, as it allows recording of very subtle and fast changes in electrical activity. However, electrophysiological measurements are mostly limited to large neuronal compartments such as the neuronal soma. To overcome these limitations, optical methods have been developed, allowing the monitoring of changes in fluorescence of fluorescent reporter dyes inserted into the neuron, with a spatial resolution theoretically limited only by the dye wavelength and optical devices. However, the temporal and spatial resolving power of functional fluorescence imaging of live neurons is often limited by a necessary trade-off between image resolution, signal-to-noise ratio (SNR) and speed of acquisition. Here, I propose to use a Super-Resolution Shift and Mean (S&M) algorithm previously used in image computing to improve the SNR, time sampling and spatial resolution of acquired fluorescent signals. I demonstrate the benefits of this methodology using two examples: voltage imaging of action potentials (APs) in soma and dendrites of CA3 pyramidal cells and calcium imaging in the dendritic shaft and spines of CA3 pyramidal cells. I show that this algorithm allows the recording of a broad area at low speed in order to achieve a high SNR, and then picking the signal in any small compartment and resampling it at high speed. This method allows preserving both the SNR and the temporal resolution of the signal, while acquiring the original images at high spatial resolution.
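
    A generic shift-and-mean reconstruction (not the author's exact implementation) can be sketched as follows: several low-rate recordings of the same repeated waveform, each acquired with a known sub-sample time offset, are placed onto a common fine time grid and averaged. All names and the toy waveform are assumptions.

        import numpy as np

        def shift_and_mean(traces, offsets, coarse_dt, fine_dt, duration):
            # accumulate each low-rate trace onto the fine grid at its known offset, then average
            fine_t = np.arange(0.0, duration, fine_dt)
            acc = np.zeros_like(fine_t)
            cnt = np.zeros_like(fine_t)
            for trace, off in zip(traces, offsets):
                idx = np.round((off + coarse_dt * np.arange(len(trace))) / fine_dt).astype(int)
                ok = idx < len(fine_t)
                np.add.at(acc, idx[ok], trace[ok])
                np.add.at(cnt, idx[ok], 1.0)
            return fine_t, acc / np.maximum(cnt, 1.0)

        # toy usage: a 5 ms action-potential-like bump sampled at 1 kHz with four known offsets
        fine_dt, coarse_dt, duration = 0.05e-3, 1.0e-3, 5e-3
        t_true = np.arange(0.0, duration, fine_dt)
        wave = np.exp(-((t_true - 2e-3) / 0.3e-3) ** 2)
        rng = np.random.default_rng(0)
        offsets = [0.0, 0.25e-3, 0.5e-3, 0.75e-3]
        traces = [wave[np.round((off + coarse_dt * np.arange(5)) / fine_dt).astype(int)]
                  + rng.normal(0, 0.05, 5) for off in offsets]
        fine_t, reconstructed = shift_and_mean(traces, offsets, coarse_dt, fine_dt, duration)
        print(len(reconstructed))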

  13. Shift and Mean Algorithm for Functional Imaging with High Spatio-Temporal Resolution

    PubMed Central

    Rama, Sylvain

    2015-01-01

    Understanding neuronal physiology requires recording electrical activity in many small and remote compartments such as dendrites, axons, or dendritic spines. To do so, electrophysiology has long been the tool of choice, as it allows recording of very subtle and fast changes in electrical activity. However, electrophysiological measurements are mostly limited to large neuronal compartments such as the neuronal soma. To overcome these limitations, optical methods have been developed, allowing the monitoring of changes in fluorescence of fluorescent reporter dyes inserted into the neuron, with a spatial resolution theoretically limited only by the dye wavelength and optical devices. However, the temporal and spatial resolving power of functional fluorescence imaging of live neurons is often limited by a necessary trade-off between image resolution, signal-to-noise ratio (SNR) and speed of acquisition. Here, I propose to use a Super-Resolution Shift and Mean (S&M) algorithm previously used in image computing to improve the SNR, time sampling and spatial resolution of acquired fluorescent signals. I demonstrate the benefits of this methodology using two examples: voltage imaging of action potentials (APs) in soma and dendrites of CA3 pyramidal cells and calcium imaging in the dendritic shaft and spines of CA3 pyramidal cells. I show that this algorithm allows the recording of a broad area at low speed in order to achieve a high SNR, and then picking the signal in any small compartment and resampling it at high speed. This method allows preserving both the SNR and the temporal resolution of the signal, while acquiring the original images at high spatial resolution. PMID:26635526

  14. A novel small area fast block matching algorithm based on high-accuracy gyro in digital image stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhao, Yuejin; Yu, Fei; Zhu, Weiwen; Lang, Guanqing; Dong, Liquan

    2010-11-01

    This paper presents a novel fast block matching algorithm based on a high-accuracy gyro for stabilizing shaky images. It first acquires a motion vector from the gyro, then determines the initial search position and classifies the image motion into three modes (small, medium, and large) using that motion vector. Finally, a fast block matching algorithm is designed by improving four types of search templates (square, diamond, hexagon, octagon). Experimental results show that the algorithm is about 50% faster than common methods (such as NTSS, FSS, and DS) while maintaining the same accuracy.
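
    A rough sketch of gyro-seeded block matching follows: the gyro-predicted displacement centres a small sum-of-absolute-differences (SAD) search, so only a handful of candidates need to be evaluated. This is a generic illustration, not the authors' improved template scheme, and all names are hypothetical.

        import numpy as np

        def gyro_seeded_match(ref, cur, block_xy, block_size, gyro_shift, radius=3):
            # search a (2*radius+1)^2 neighbourhood centred on the gyro-predicted shift
            bx, by = block_xy
            gx, gy = gyro_shift
            block = ref[by:by + block_size, bx:bx + block_size].astype(float)
            best_cost, best_shift = np.inf, (gx, gy)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    x, y = bx + gx + dx, by + gy + dy
                    cand = cur[y:y + block_size, x:x + block_size].astype(float)
                    if cand.shape != block.shape:
                        continue                      # candidate window falls outside the image
                    cost = np.abs(block - cand).sum()
                    if cost < best_cost:
                        best_cost, best_shift = cost, (gx + dx, gy + dy)
            return best_shift

        # toy usage: the current frame is the reference shifted by (+5, -3) pixels
        rng = np.random.default_rng(0)
        ref = rng.integers(0, 255, (120, 160))
        cur = np.roll(np.roll(ref, -3, axis=0), 5, axis=1)
        print(gyro_seeded_match(ref, cur, (60, 60), 16, gyro_shift=(4, -2)))   # expect (5, -3)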

  15. Improved estimates of boreal Fire Radiative Energy using high temporal resolution data and a modified active fire detection algorithm

    NASA Astrophysics Data System (ADS)

    Barrett, Kirsten

    2016-04-01

    Reliable estimates of biomass combusted during wildfires can be obtained from satellite observations of fire radiative power (FRP). Total fire radiative energy (FRE) is typically estimated by integrating instantaneous measurements of FRP at the times of orbital satellite overpass or geostationary observation. Remotely sensed FRP products from orbital satellites are usually global in extent, requiring several thresholding and filtering operations to reduce the number of false fire detections. Some filters required for a global product may not be appropriate to fire detection in the boreal forest, resulting in errors of omission and increased data processing times. We evaluate the effect of a boreal-specific active fire detection algorithm on estimates of FRP/FRE. Boreal fires are more likely to escape detection due to lower-intensity smouldering combustion and sub-canopy fires, therefore improvements in boreal fire detection could substantially reduce the uncertainty of emissions from biomass combustion in the region. High temporal resolution data from geostationary satellites have led to improvements in FRE estimation in tropical and temperate forests, but such a perspective is not possible for high-latitude ecosystems given the equatorial orbit of geostationary observation. The increased density of overpasses at high latitudes from polar-orbiting satellites, however, may provide adequate temporal sampling for estimating FRE.
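
    The FRP-to-FRE step is just a time integration of the sampled power; a minimal hedged sketch with made-up overpass times follows (the further conversion from FRE to biomass combusted is not included).

        import numpy as np

        # made-up FRP samples (MW) at irregular overpass times (hours since first detection)
        overpass_hours = np.array([0.0, 1.6, 3.1, 4.7, 9.8, 12.4])
        frp_mw = np.array([150.0, 420.0, 380.0, 260.0, 90.0, 30.0])

        # FRE = integral of FRP dt, here with the trapezoidal rule between overpasses
        t_s = overpass_hours * 3600.0
        fre_mj = np.sum(0.5 * (frp_mw[1:] + frp_mw[:-1]) * np.diff(t_s))   # MW * s = MJ
        print(f"FRE ~ {fre_mj:.3e} MJ")

    Sparse or biased temporal sampling directly biases this integral, which is why overpass density and detection completeness matter for FRE estimates.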

  16. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  17. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    SciTech Connect

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  18. Springback Control of Sheet Metal Forming Based on High Dimension Model Representation and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Long, Tang; Hu, Wang; Yong, Cai; Lichen, Mao; Guangyao, Li

    2011-08-01

    Springback is related to multiple factors in the metal forming process. In order to construct an accurate metamodel between process parameters and springback, a general set of quantitative model assessment and analysis tools, termed high dimension model representation (HDMR), is applied to building the metamodel. A genetic algorithm is also integrated for optimization based on the metamodel. Compared with widely used metamodeling techniques, the most remarkable advantage of this method is its capacity to dramatically reduce the sampling effort required to learn the input-output behavior, from exponential growth to polynomial level. In this work, the blank holding forces (BHFs) and the corresponding key times are the design variables. The final springback is well controlled by the HDMR-based metamodeling technique.

  19. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of the point of view on Iterative Closest Point (ICP) alignment, and also on deformation tracking algorithms applied to the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high-resolution point clouds in order to model small changes in different environments
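
    A toy version of such a simulator is sketched below: synthetic ranges are perturbed with noise that grows with range and incidence angle, roughly mimicking single-point TLS accuracy. The noise model and its coefficients are assumptions for illustration, not the values used in the MATLAB simulator.

        import numpy as np

        def simulate_ranges(true_ranges, incidence_deg, base_sigma=0.005, range_coeff=2e-5, rng=None):
            # per-point range noise that grows with range and with incidence angle
            rng = rng if rng is not None else np.random.default_rng()
            sigma = (base_sigma + range_coeff * true_ranges) / np.cos(np.radians(incidence_deg))
            return true_ranges + rng.normal(0.0, sigma)

        # toy usage: a wall scanned at ranges of 10-200 m and incidence angles of 0-70 degrees
        rng = np.random.default_rng(0)
        true_ranges = np.linspace(10.0, 200.0, 1000)
        incidence = np.linspace(0.0, 70.0, 1000)
        noisy = simulate_ranges(true_ranges, incidence, rng=rng)
        print(np.std(noisy - true_ranges))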

  1. Construction of high-order force-gradient algorithms for integration of motion in classical and quantum systems.

    PubMed

    Omelyan, I P; Mryglod, I M; Folk, R

    2002-08-01

    A consequent approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially with increasing order of the generated schemes. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational costs, allow one to reduce unphysical deviations in the total energy by up to 100 000 times with respect to those of the standard fourth-order-based iteration approach. PMID:12241312
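
    For orientation only, the sketch below shows the standard triple-jump composition that lifts a second-order velocity-Verlet step to fourth order; the force-gradient algorithms of the paper are considerably more elaborate, so this is merely the simplest member of the same family of composition schemes.

        import numpy as np

        def verlet(x, v, force, dt):
            # one second-order velocity-Verlet step
            v = v + 0.5 * dt * force(x)
            x = x + dt * v
            v = v + 0.5 * dt * force(x)
            return x, v

        def fourth_order_step(x, v, force, dt):
            # triple-jump composition: S4(dt) = S2(w1*dt) S2(w0*dt) S2(w1*dt)
            w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
            w0 = 1.0 - 2.0 * w1                     # negative middle sub-step
            for w in (w1, w0, w1):
                x, v = verlet(x, v, force, w * dt)
            return x, v

        # toy usage: harmonic oscillator; the energy error stays bounded and small
        force = lambda x: -x
        x, v, dt = 1.0, 0.0, 0.1
        for _ in range(10000):
            x, v = fourth_order_step(x, v, force, dt)
        print(0.5 * (x**2 + v**2))   # should remain very close to the initial energy 0.5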

  2. Construction of high-order force-gradient algorithms for integration of motion in classical and quantum systems.

    PubMed

    Omelyan, I P; Mryglod, I M; Folk, R

    2002-08-01

    A consequent approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially with increasing order of the generated schemes. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational costs, allow one to reduce unphysical deviations in the total energy by up to 100 000 times with respect to those of the standard fourth-order-based iteration approach.

  3. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images

    NASA Astrophysics Data System (ADS)

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory

    2011-10-01

    Past and current optical Earth observation systems designed by CNES use fixed-rate data compression performed at a high rate in a pushbroom mode (also called scan-based mode). This process generates fixed-length data for the mass memory, and data downlink is performed at a fixed rate too. Because of on-board memory limitations and high data rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncation of the bit-plane description is the same for the whole segment, some parts of the segment have poor image quality. These artefacts generally occur in low-energy areas within a segment of higher overall energy. In order to locally correct these areas, CNES has studied an "exceptional processing" targeted at DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual Region of Interest handling, these amplified coefficients are processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.

  4. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    SciTech Connect

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
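
    For context, the core of a statistical reconstruction can be sketched as a plain MLEM iteration for Poisson data (a MAP algorithm additionally includes a prior/penalty term, which is omitted here); the system matrix and counts below are synthetic toys, not microPET data.

        import numpy as np

        def mlem(A, y, n_iter=50):
            # maximum-likelihood EM for y ~ Poisson(A @ x), multiplicative update
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])                  # sensitivity (column sums)
            for _ in range(n_iter):
                proj = np.maximum(A @ x, 1e-12)
                x = x * (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
            return x

        # toy usage: random non-negative system matrix, known activity, Poisson counts
        rng = np.random.default_rng(0)
        A = rng.random((200, 50))
        x_true = rng.random(50) * 10.0
        y = rng.poisson(A @ x_true)
        print(np.round(np.corrcoef(mlem(A, y), x_true)[0, 1], 3))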

  5. [The Change Detection of High Spatial Resolution Remotely Sensed Imagery Based on OB-HMAD Algorithm and Spectral Features].

    PubMed

    Chen, Qiang; Chen, Yun-hao; Jiang, Wei-guo

    2015-06-01

    High spatial resolution remotely sensed imagery contains abundant detailed information about the earth surface, and multi-temporal change detection on such imagery can reveal the variations of geographical units. For high spatial resolution remotely sensed imagery, traditional remote sensing change detection algorithms have obvious defects. In this paper, learning from the object-based image analysis idea, we proposed a semi-automatic threshold selection algorithm named OB-HMAD (object-based-hybrid-MAD), on the basis of object-based image analysis and the multivariate alteration detection (MAD) algorithm, which brings the spectral features of remotely sensed imagery into the field of object-based change detection. Additionally, the OB-HMAD algorithm has been compared with other threshold segmentation algorithms in the change detection experiment. Firstly, we obtained the image objects by the multi-resolution segmentation algorithm. Secondly, we derived the object-based difference image objects using MAD and minimum noise fraction (MNF) rotation to improve the SNR of the image objects. Then, the changed objects or areas are classified using the histogram curvature analysis (HCA) method for semi-automatic threshold selection, which determines the threshold by calculating the maximum curvature of the histogram, so the HCA algorithm has better automation than other threshold segmentation algorithms. Finally, the change detection results are validated using a confusion matrix with the field sample data. WorldView-2 imagery of 2012 and 2013 in a case study of Beijing was used to validate the proposed OB-HMAD algorithm. The experimental results indicated that the OB-HMAD algorithm, which integrated the multi-channel spectral information, could be effectively used in multi-temporal high resolution remotely sensed imagery change detection, and it has basically solved the "salt and pepper" problem which always exists in the pixel-based change

  6. [The Change Detection of High Spatial Resolution Remotely Sensed Imagery Based on OB-HMAD Algorithm and Spectral Features].

    PubMed

    Chen, Qiang; Chen, Yun-hao; Jiang, Wei-guo

    2015-06-01

    High spatial resolution remotely sensed imagery contains abundant detailed information about the earth surface, and multi-temporal change detection on such imagery can reveal the variations of geographical units. For high spatial resolution remotely sensed imagery, traditional remote sensing change detection algorithms have obvious defects. In this paper, learning from the object-based image analysis idea, we proposed a semi-automatic threshold selection algorithm named OB-HMAD (object-based-hybrid-MAD), on the basis of object-based image analysis and the multivariate alteration detection (MAD) algorithm, which brings the spectral features of remotely sensed imagery into the field of object-based change detection. Additionally, the OB-HMAD algorithm has been compared with other threshold segmentation algorithms in the change detection experiment. Firstly, we obtained the image objects by the multi-resolution segmentation algorithm. Secondly, we derived the object-based difference image objects using MAD and minimum noise fraction (MNF) rotation to improve the SNR of the image objects. Then, the changed objects or areas are classified using the histogram curvature analysis (HCA) method for semi-automatic threshold selection, which determines the threshold by calculating the maximum curvature of the histogram, so the HCA algorithm has better automation than other threshold segmentation algorithms. Finally, the change detection results are validated using a confusion matrix with the field sample data. WorldView-2 imagery of 2012 and 2013 in a case study of Beijing was used to validate the proposed OB-HMAD algorithm. The experimental results indicated that the OB-HMAD algorithm, which integrated the multi-channel spectral information, could be effectively used in multi-temporal high resolution remotely sensed imagery change detection, and it has basically solved the "salt and pepper" problem which always exists in the pixel-based change

  7. Highly specific expression of luciferase gene in lungs of naive nude mice directed by prostate-specific antigen promoter

    SciTech Connect

    Li Hongwei; Li Jinzhong; Helm, Gregory A.; Pan Dongfeng. E-mail: Dongfeng_pan@yahoo.com

    2005-09-09

    The PSA promoter has demonstrated utility for tissue-specific toxic gene therapy in prostate cancer models. Characterization of foreign gene overexpression in normal animals elicited by the PSA promoter should help evaluate therapy safety. Here we constructed an adenovirus vector (AdPSA-Luc), containing the firefly luciferase gene under the control of the 5837 bp long prostate-specific antigen promoter. A charge-coupled device video camera was used to non-invasively image expression of firefly luciferase in nude mice on days 3, 7, and 11 after injection of 2 x 10^9 PFU of AdPSA-Luc virus via the tail vein. The result showed highly specific expression of the luciferase gene in the lungs of mice from day 7. The finding indicates potential limitations of suicide gene therapy of prostate cancer based on the selectivity of the PSA promoter. Conversely, it has encouraging implications for further development of PSA promoter-driven vectors to enable gene therapy for pulmonary diseases.

  8. The Rice coding algorithm achieves high-performance lossless and progressive image compression based on the improving of integer lifting scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that the lossless image compression performance is much better than that of Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT. The lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its time efficiency is improved by 162%; the decoder is about 12.3 times faster than SPIHT's, and its time efficiency is raised by about 148%. Rather than requiring the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
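
    For reference, a minimal textbook Golomb-Rice encoder/decoder for non-negative integers is sketched below; the mapping of signed wavelet coefficients to non-negative values and the CDF (2,2) lifting transform itself are omitted, and this is not the modified algorithm of the paper.

        def rice_encode(values, k):
            # unary quotient, '0' terminator, then k remainder bits per value
            out = []
            for n in values:
                q, r = n >> k, n & ((1 << k) - 1)
                out.append("1" * q + "0" + (format(r, "b").zfill(k) if k else ""))
            return "".join(out)

        def rice_decode(bits, k, count):
            values, i = [], 0
            for _ in range(count):
                q = 0
                while bits[i] == "1":          # unary part
                    q += 1
                    i += 1
                i += 1                         # skip the '0' terminator
                r = int(bits[i:i + k], 2) if k else 0
                i += k
                values.append((q << k) | r)
            return values

        data = [3, 18, 0, 7, 42, 5]
        code = rice_encode(data, k=3)
        assert rice_decode(code, k=3, count=len(data)) == data
        print(code)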

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. Examination of a genetic algorithm for the application in high-throughput downstream process development.

    PubMed

    Treier, Katrin; Berg, Annette; Diederich, Patrick; Lang, Katharina; Osberghaus, Anna; Dismer, Florian; Hubbuch, Jürgen

    2012-10-01

    Compared to traditional strategies, application of high-throughput experiments combined with optimization methods can potentially speed up downstream process development and increase our understanding of processes. In contrast to the method of Design of Experiments in combination with response surface analysis (RSA), optimization approaches like genetic algorithms (GAs) can be applied to identify optimal parameter settings in multidimensional optimization tasks. In this article, the performance of a GA was investigated using parameter settings applicable to high-throughput downstream process development. The influence of population size, the design of the initial generation, and selection pressure on the optimization results was studied. To mimic typical experimental data, four mathematical functions were used for an in silico evaluation. The influence of GA parameters was minor on landscapes with only one optimum. On landscapes with several optima, parameters had a significant impact on GA performance and success in finding the global optimum. Premature convergence increased as the number of parameters and noise increased. RSA was shown to be comparable or superior for simple systems and low to moderate noise. For complex systems or high noise levels, RSA failed, while GA optimization represented a robust tool for process optimization. Finally, the effect of different objective functions is shown exemplarily for a refolding optimization of lysozyme.
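
    To make the population-size/selection-pressure vocabulary concrete, a toy real-coded GA on a standard multimodal test function is sketched below; every parameter choice is arbitrary, and the code has no connection to the chromatography screening data of the study.

        import numpy as np

        def rastrigin(x):
            # classic multimodal test landscape; global optimum 0 at x = 0
            return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

        def genetic_algorithm(fitness, dim, pop_size=60, generations=200,
                              tournament=3, mutation_sigma=0.1, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-5.12, 5.12, (pop_size, dim))      # initial generation
            for _ in range(generations):
                f = fitness(pop)
                # tournament selection: a larger "tournament" means higher selection pressure
                idx = rng.integers(0, pop_size, (pop_size, tournament))
                parents = pop[idx[np.arange(pop_size), np.argmin(f[idx], axis=1)]]
                # uniform crossover followed by Gaussian mutation
                mates = parents[rng.permutation(pop_size)]
                mask = rng.random((pop_size, dim)) < 0.5
                pop = np.where(mask, parents, mates) + rng.normal(0.0, mutation_sigma, (pop_size, dim))
                pop[0] = parents[np.argmin(fitness(parents))]    # re-insert the best selected parent unmutated
            return pop[np.argmin(fitness(pop))]

        best = genetic_algorithm(rastrigin, dim=5)
        print(np.round(best, 2), round(float(rastrigin(best)), 3))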

  11. The high performing backtracking algorithm and heuristic for the sequence-dependent setup times flowshop problem with total weighted tardiness

    NASA Astrophysics Data System (ADS)

    Zheng, Jun-Xi; Zhang, Ping; Li, Fang; Du, Guang-Long

    2016-09-01

    Although the sequence-dependent setup times flowshop problem with the total weighted tardiness minimization objective exists widely in industry, work on the problem has been scant in the existing literature. To the authors' best knowledge, the NEH-EWDD heuristic and the Iterated Greedy (IG) algorithm with descent local search have been regarded as the high performing heuristic and the state-of-the-art algorithm for the problem, which are both based on insertion search. In this article, firstly, an efficient backtracking algorithm and a novel heuristic (HPIS) are presented for insertion search. Accordingly, two heuristics are introduced, one is NEH-EWDD with HPIS for insertion search, and the other is the combination of NEH-EWDD and both the two methods. Furthermore, the authors improve the IG algorithm with the proposed methods. Finally, experimental results show that both the proposed heuristics and the improved IG (IG*) significantly outperform the original ones.
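
    A generic sketch of the two building blocks named above follows: an objective evaluation for a permutation flowshop with sequence-dependent (here assumed anticipatory) setup times, and an NEH-style insertion pass seeded by an earliest-weighted-due-date order. The setup-time model and all data are assumptions; the backtracking algorithm and the HPIS heuristic of the paper are not reproduced.

        import numpy as np

        def twt(seq, p, s, d, w):
            # total weighted tardiness of a sequence; p[j, m]: processing time of job j on
            # machine m, s[m, i, j]: setup on machine m when job j follows job i (anticipatory),
            # d[j]: due date, w[j]: weight
            n_m = p.shape[1]
            total, prev, c = 0.0, None, np.zeros(n_m)
            for j in seq:
                for m in range(n_m):
                    setup = s[m, prev, j] if prev is not None else 0.0
                    ready = c[m - 1] if m else 0.0
                    c[m] = max(c[m] + setup, ready) + p[j, m]
                total += w[j] * max(0.0, c[-1] - d[j])
                prev = j
            return total

        def neh_ewdd(p, s, d, w):
            # seed order: earliest weighted due date; then best-position insertion (NEH style)
            seq = []
            for j in sorted(range(len(d)), key=lambda job: d[job] / w[job]):
                candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
                seq = min(candidates, key=lambda cand: twt(cand, p, s, d, w))
            return seq

        # toy instance: 6 jobs, 3 machines, random times, setups, due dates and weights
        rng = np.random.default_rng(0)
        p = rng.integers(1, 20, (6, 3)).astype(float)
        s = rng.integers(1, 5, (3, 6, 6)).astype(float)
        d = rng.integers(20, 80, 6).astype(float)
        w = rng.integers(1, 5, 6).astype(float)
        seq = neh_ewdd(p, s, d, w)
        print(seq, twt(seq, p, s, d, w))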

  12. Extended nonlinear chirp scaling algorithm for highly squinted missile-borne synthetic aperture radar with diving acceleration

    NASA Astrophysics Data System (ADS)

    Liu, Rengli; Wang, Yanfei

    2016-04-01

    An extended nonlinear chirp scaling (NLCS) algorithm is proposed to process data of highly squinted, high-resolution, missile-borne synthetic aperture radar (SAR) diving with a constant acceleration. Due to the complex diving movement, the traditional signal model and focusing algorithm are no longer suited for missile-borne SAR signal processing. Therefore, an accurate range equation is presented, named as the equivalent hyperbolic range model (EHRM), which is more accurate and concise compared with the conventional fourth-order polynomial range equation. Based on the EHRM, a two-dimensional point target reference spectrum is derived, and an extended NLCS algorithm for missile-borne SAR image formation is developed. In the algorithm, a linear range walk correction is used to significantly remove the range-azimuth cross coupling, and an azimuth NLCS processing is adopted to solve the azimuth space variant focusing problem. Moreover, the operations of the proposed algorithm are carried out without any interpolation, thus having small computational loads. Finally, the simulation results and real-data processing results validate the proposed focusing algorithm.

  13. High-order derivative spectroscopy for selecting spectral regions and channels for remote sensing algorithm development

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    1999-12-01

    A remote sensing reflectance model, which describes the transfer of irradiant light within a plant canopy or water column, has previously been used to simulate the nadir-viewing reflectance of vegetation canopies and leaves under solar or artificial illumination, as well as the water surface reflectance. Wavelength-dependent features such as canopy reflectance, leaf absorption and canopy bottom reflectance, as well as water absorption and water bottom reflectance, have been used to simulate or generate synthetic canopy and water surface reflectance signatures. This paper describes how derivative spectroscopy can be utilized to invert the synthetic or modeled as well as measured reflectance signatures, with the goal of selecting the optimal spectral channels or regions of these environmental media. Specifically, in this paper synthetic and measured reflectance signatures are used for selecting vegetative dysfunction variables for different plant species. The measured reflectance signatures, as well as model-derived or synthetic signatures, are processed using extremely fast higher-order derivative processing techniques which filter the synthetic/modeled or measured spectra and automatically select the optimal channels for automatic and direct algorithm application. The higher-order derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SPR) approach based upon remote sensing science and radiative transfer theory. Thus the technique described, unlike other signal processing techniques being developed for hyperspectral signatures and associated imagery, is based upon radiative transfer theory instead of statistical or purely mathematical operational techniques such as wavelets.
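
    A common, generic way to obtain smoothed higher-order derivatives of a reflectance spectrum (not the TDDS-SPR technique itself) is a Savitzky-Golay filter; the sketch below applies it to a synthetic spectrum and picks the channels where the fourth derivative is largest. All numbers are illustrative.

        import numpy as np
        from scipy.signal import savgol_filter

        # synthetic reflectance spectrum: two overlapping absorption features plus noise
        wavelength = np.linspace(400.0, 900.0, 501)            # nm, 1 nm sampling
        rng = np.random.default_rng(0)
        reflectance = (0.5
                       - 0.2 * np.exp(-((wavelength - 670.0) / 15.0) ** 2)
                       - 0.1 * np.exp(-((wavelength - 700.0) / 25.0) ** 2)
                       + rng.normal(0.0, 0.002, wavelength.size))

        # smoothed 2nd and 4th derivatives with respect to wavelength
        d2 = savgol_filter(reflectance, window_length=31, polyorder=5, deriv=2, delta=1.0)
        d4 = savgol_filter(reflectance, window_length=31, polyorder=5, deriv=4, delta=1.0)

        # crude "optimal channel" pick: wavelengths where the 4th-derivative magnitude peaks
        print(wavelength[np.argsort(np.abs(d4))[-5:]])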

  14. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    SciTech Connect

    Xiao, Jianyuan; Qin, Hong; Liu, Jian; He, Yang; Zhang, Ruili; Sun, Yajuan

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv: 1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.

  15. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    NASA Astrophysics Data System (ADS)

    Xiao, Jianyuan; Qin, Hong; Liu, Jian; He, Yang; Zhang, Ruili; Sun, Yajuan

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave.

  16. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    SciTech Connect

    Xiao, Jianyuan; Liu, Jian; He, Yang; Zhang, Ruili; Qin, Hong; Sun, Yajuan

    2015-11-15

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave.

  17. An inverse kinematics algorithm for a highly redundant variable-geometry-truss manipulator

    NASA Technical Reports Server (NTRS)

    Naccarato, Frank; Hughes, Peter

    1989-01-01

    A new class of robotic arm consists of a periodic sequence of truss substructures, each of which has several variable-length members. Such variable-geometry-truss manipulators (VGTMs) are inherently highly redundant and promise a significant increase in dexterity over conventional anthropomorphic manipulators. This dexterity may be exploited for both obstacle avoidance and controlled deployment in complex workspaces. The inverse kinematics problem for such unorthodox manipulators, however, becomes complex because of the large number of degrees of freedom, and conventional solutions to the inverse kinematics problem become inefficient because of the high degree of redundancy. A solution is presented to this problem based on a spline-like reference curve for the manipulator's shape. Such an approach has a number of advantages: (1) direct, intuitive manipulation of shape; (2) reduced calculation time; and (3) direct control over the effective degree of redundancy of the manipulator. Furthermore, although the algorithm was developed primarily for variable-geometry-truss manipulators, it is general enough for application to a number of manipulator designs.

  18. Immunohistochemical staining with EGFR mutation-specific antibodies: high specificity as a diagnostic marker for lung adenocarcinoma.

    PubMed

    Wen, Yong Hannah; Brogi, Edi; Hasanovic, Adnan; Ladanyi, Marc; Soslow, Robert A; Chitale, Dhananjay; Shia, Jinru; Moreira, Andre L

    2013-09-01

    We previously demonstrated a high specificity of immunohistochemistry using epidermal growth factor receptor (EGFR) mutation-specific antibodies in lung adenocarcinoma and correlation with EGFR mutation analysis. In this study, we assessed EGFR mutation status by immunohistochemistry in a variety of extrapulmonary malignancies, especially those that frequently show EGFR overexpression. Tissue microarrays containing triplicate cores of breast carcinomas (n=300), colorectal carcinomas (n=65), pancreatic adenocarcinoma (n=145), and uterine carcinosarcoma or malignant mixed müllerian tumors (n=25) were included in the study. Tissue microarray of lung adenocarcinoma with known EGFR mutation status was used as reference. Immunohistochemistry was performed using antibodies specific for the E746-A750del and L858R mutations. In pulmonary adenocarcinoma, a staining intensity of 2+ or 3+ correlates with mutation status and is therefore considered as positive. Out of 300 breast carcinomas, 293 (98%) scored 0, 5 (2%) had 1+ staining, 2 (1%) were 2+ for the L858R antibody. All breast carcinomas scored 0 with the E746-A750 antibody. All the colorectal, pancreatic carcinomas and malignant mixed müllerian tumors were negative (0) for both antibodies. Molecular analysis of the breast carcinomas that scored 2+ for L858R showed no mutation. Our results show that EGFR mutation-specific antibodies could be an additional tool distinguishing primary versus metastatic carcinomas in the lung. False-positivity can be seen in breast carcinoma but is extremely rare (1%).

  19. comets (Constrained Optimization of Multistate Energies by Tree Search): A Provable and Efficient Protein Design Algorithm to Optimize Binding Affinity and Specificity with Respect to Sequence.

    PubMed

    Hallen, Mark A; Donald, Bruce R

    2016-05-01

    Practical protein design problems require designing sequences with a combination of affinity, stability, and specificity requirements. Multistate protein design algorithms model multiple structural or binding "states" of a protein to address these requirements. comets provides a new level of versatile, efficient, and provable multistate design. It provably returns the minimum with respect to sequence of any desired linear combination of the energies of multiple protein states, subject to constraints on other linear combinations. Thus, it can target nearly any combination of affinity (to one or multiple ligands), specificity, and stability (for multiple states if needed). Empirical calculations on 52 protein design problems showed comets is far more efficient than the previous state of the art for provable multistate design (exhaustive search over sequences). comets can handle a very wide range of protein flexibility and can enumerate a gap-free list of the best constraint-satisfying sequences in order of objective function value. PMID:26761641

  20. Fine specificities of two lectins from Cymbosema roseum seeds: a lectin specific for high-mannose oligosaccharides and a lectin specific for blood group H type II trisaccharide.

    PubMed

    Dam, Tarun K; Cavada, Benildo S; Nagano, Celso S; Rocha, Bruno Am; Benevides, Raquel G; Nascimento, Kyria S; de Sousa, Luiz Ag; Oscarson, Stefan; Brewer, C Fred

    2011-07-01

    The legume species of Cymbosema roseum of Diocleinae subtribe produce at least two different seed lectins. The present study demonstrates that C. roseum lectin I (CRL I) binds with high affinity to the "core" trimannoside of N-linked oligosaccharides. Cymbosema roseum lectin II (CRL II), on the other hand, binds with high affinity to the blood group H type II trisaccharide (Fucα1,2Galβ1,4GlcNAc-). Thermodynamic and hemagglutination inhibition studies reveal the fine binding specificities of the two lectins. Data obtained with a complete set of monodeoxy analogs of the core trimannoside indicate that CRL I recognizes the 3-, 4- and 6-hydroxyl groups of the α(1,6) Man residue, the 3- and 4-hydroxyl groups of the α(1,3) Man residue and the 2- and 4-hydroxyl groups of the central Man residue of the trimannoside. CRL I possesses enhanced affinities for the Man5 oligomannose glycan and a biantennary complex glycan as well as glycoproteins containing high-mannose glycans. On the other hand, CRL II distinguishes the blood group H type II epitope from the Lewis(x), Lewis(y), Lewis(a) and Lewis(b) epitopes. CRL II also distinguishes between blood group H type II and type I trisaccharides. CRL I and CRL II, respectively, possess differences in fine specificities when compared with other reported mannose- and fucose-recognizing lectins. This is the first report of a mannose-specific lectin (CRL I) and a blood group H type II-specific lectin (CRL II) from seeds of a member of the Diocleinae subtribe.

  1. A non-device-specific approach to display characterization based on linear, nonlinear, and hybrid search algorithms.

    PubMed

    Ban, Hiroshi; Yamamoto, Hiroki

    2013-01-01

    In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free.
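
    As a point of reference for the display-characterization step discussed above, the sketch below fits a simple gamma model to measured luminance samples and inverts it to build a lookup table. It is only an illustration of the standard "gamma correction" idea, not the Mcalibrator2 code or the paper's model-free search algorithms; the DAC values, the synthetic "photometer" readings and the model form are assumptions.

      # Hypothetical sketch: fit a gamma model L(v) = a * v**g + b to measured luminance
      # samples, then invert it to obtain the DAC value for a requested luminance.
      # This illustrates classic gamma correction only, not the Mcalibrator2 algorithms.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)

      def gamma_model(v, a, g, b):
          return a * np.power(v, g) + b

      # Assumed example data: normalized DAC values and stand-in photometer readings (cd/m^2).
      dac = np.linspace(0.0, 1.0, 17)
      measured = 0.2 + 95.0 * dac**2.3 + rng.normal(0, 0.3, dac.size)

      (a, g, b), _ = curve_fit(gamma_model, dac, measured, p0=(100.0, 2.2, 0.1))

      def dac_for_luminance(target):
          """Invert the fitted model to get the DAC value producing a target luminance."""
          return np.clip(((target - b) / a) ** (1.0 / g), 0.0, 1.0)

      lut = dac_for_luminance(np.linspace(measured.min(), measured.max(), 256))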

  2. A non-device-specific approach to display characterization based on linear, nonlinear, and hybrid search algorithms.

    PubMed

    Ban, Hiroshi; Yamamoto, Hiroki

    2013-01-01

    In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free. PMID:23729771

  3. PhosphoChain: a novel algorithm to predict kinase and phosphatase networks from high-throughput expression data

    PubMed Central

    Chiang, Jung-Hsien; Aitchison, John D.

    2013-01-01

    Motivation: Protein phosphorylation is critical for regulating cellular activities by controlling protein activities, localization and turnover, and by transmitting information within cells through signaling networks. However, predictions of protein phosphorylation and signaling networks remain a significant challenge, lagging behind predictions of transcriptional regulatory networks into which they often feed. Results: We developed PhosphoChain to predict kinases, phosphatases and chains of phosphorylation events in signaling networks by combining mRNA expression levels of regulators and targets with a motif detection algorithm and optional prior information. PhosphoChain correctly reconstructed ∼78% of the yeast mitogen-activated protein kinase pathway from publicly available data. When tested on yeast phosphoproteomic data from large-scale mass spectrometry experiments, PhosphoChain correctly identified ∼27% more phosphorylation sites than existing motif detection tools (NetPhosYeast and GPS2.0), and predictions of kinase–phosphatase interactions overlapped with ∼59% of known interactions present in yeast databases. PhosphoChain provides a valuable framework for predicting condition-specific phosphorylation events from high-throughput data. Availability: PhosphoChain is implemented in Java and available at http://virgo.csie.ncku.edu.tw/PhosphoChain/ or http://aitchisonlab.com/PhosphoChain Contact: john.aitchison@systemsbiology.org or jchiang@mail.ncku.edu.tw Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23832245

  4. Genetic algorithm based optimization of pulse profile for MOPA based high power fiber lasers

    NASA Astrophysics Data System (ADS)

    Zhang, Jiawei; Tang, Ming; Shi, Jun; Fu, Songnian; Li, Lihua; Liu, Ying; Cheng, Xueping; Liu, Jian; Shum, Ping

    2015-03-01

    Although the Master Oscillator Power-Amplifier (MOPA) based fiber laser has received much attention for the laser marking process due to its large tunability of pulse duration (from 10 ns to 1 ms), repetition rate (100 Hz to 500 kHz), high peak power and extraordinary heat dissipating capability, the output pulse deformation due to the saturation effect of the fiber amplifier is detrimental for many applications. We proposed and demonstrated that, by utilizing a Genetic Algorithm (GA) based optimization technique, the input pulse profile from the master oscillator (current-driven laser diode) could be conveniently optimized to achieve the targeted output pulse shape according to real parameter constraints. In this work, an Yb-doped high power fiber amplifier is considered and a 200 ns square-shaped pulse profile is the optimization target. Since an input pulse with a longer leading edge and a shorter trailing edge can compensate the saturation effect, linear, quadratic and cubic polynomial functions are used to describe the input pulse with a limited number of unknowns (<5). The coefficients of the polynomial functions are the optimization variables. With reasonable cost and hardware limitations, the cubic input pulse with 4 coefficients is found to be the best, as the amplified output pulse achieves excellent flatness within the square shape. Considering the bandwidth constraints of practical electronics, we examined the effect of cutting off high-frequency components of the input pulses and found that the optimized cubic input pulse with a 300 MHz bandwidth still satisfies the requirement for the amplified output pulse, making it feasible to build such a pulse generator in real applications.
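
    The sketch below illustrates the optimization idea described above: a small genetic algorithm evolves the coefficients of a cubic input-pulse polynomial so that, after passing through a toy gain-saturation model, the output pulse is as flat as possible. The saturation model and every constant are illustrative assumptions, not the authors' Yb-doped amplifier model or parameters.

      # Minimal GA sketch: optimize cubic-polynomial input-pulse coefficients against a
      # toy saturable-gain model so the amplified output approaches a flat square pulse.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 200)          # normalized time across the pulse

      def amplify(p_in, g0=30.0, e_sat=5.0):
          """Toy saturable amplifier: gain drops as extracted energy accumulates."""
          energy = np.cumsum(p_in) * (t[1] - t[0])
          gain = g0 / (1.0 + energy / e_sat)
          return gain * p_in

      def input_pulse(coeffs):
          c0, c1, c2, c3 = coeffs
          p = c0 + c1 * t + c2 * t**2 + c3 * t**3
          return np.clip(p, 0.0, None)         # the diode cannot emit negative power

      def fitness(coeffs):
          out = amplify(input_pulse(coeffs))
          return -np.std(out) - abs(out.mean() - 1.0)   # flat output near a target level

      pop = rng.uniform(0.0, 1.0, size=(60, 4))
      for generation in range(200):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)][-20:]                   # keep the best 20
          children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.05, (40, 4))
          pop = np.vstack([parents, children])                      # elitism + mutation

      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("best cubic coefficients:", best)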

  5. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784
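
    For illustration of the subpixel motion-extraction step mentioned above, the sketch below refines the integer-pixel peak of a correlation surface with a local quadratic (second-order Taylor) fit along each axis. It is a generic baseline, not the paper's modified Taylor approximation refinement or localization refinement algorithms; the synthetic correlation surface is an assumption.

      # Generic subpixel peak localization: parabolic refinement around the integer-pixel
      # maximum of a correlation surface. Illustration only, not the paper's algorithms.
      import numpy as np

      def subpixel_peak(corr):
          iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

          def parabola_offset(m1, c, p1):
              # Offset of the vertex of a parabola through three equally spaced samples.
              denom = m1 - 2.0 * c + p1
              return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

          dy = parabola_offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
          dx = parabola_offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
          return iy + dy, ix + dx

      # Usage with a synthetic correlation surface peaked slightly off a pixel centre.
      yy, xx = np.mgrid[0:21, 0:21]
      corr = np.exp(-((yy - 10.3) ** 2 + (xx - 9.7) ** 2) / 8.0)
      print(subpixel_peak(corr))   # close to (10.3, 9.7)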

  6. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.

  7. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    PubMed Central

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  8. A polynomial phase-shift algorithm for high precision three-dimensional profilometry

    NASA Astrophysics Data System (ADS)

    Deng, Fuqin; Liu, Chang; Sze, Wuifung; Deng, Jiangwen; Fung, Kenneth S. M.; Lam, Edmund Y.

    2013-03-01

    The perspective effect is common in real optical systems using projected patterns for machine vision applications. In the past, the frequencies of these sinusoidal patterns are assumed to be uniform at different heights when reconstructing moving objects. Therefore, the error caused by a perspective projection system becomes pronounced in phase-measuring profilometry, especially for some high precision metrology applications such as measuring the surfaces of the semiconductor components at micrometer level. In this work, we investigate the perspective effect on phase-measuring profilometry when reconstructing the surfaces of moving objects. Using a polynomial to approximate the phase distribution under a perspective projection system, which we call a polynomial phase-measuring profilometry (P-PMP) model, we are able to generalize the phase-measuring profilometry model discussed in our previous work and solve the phase reconstruction problem effectively. Furthermore, we can characterize how the frequency of the projected pattern changes according to the height variations and how the phase of the projected pattern distributes in the measuring space. We also propose a polynomial phase-shift algorithm (P-PSA) to correct the phase-shift error due to perspective effect during phase reconstruction. Simulation experiments show that the proposed method can improve the reconstruction quality both visually and numerically.

  9. Simulating chemical energies to high precision with fully-scalable quantum algorithms on superconducting qubits

    NASA Astrophysics Data System (ADS)

    O'Malley, Peter; Babbush, Ryan; Kivlichan, Ian; Romero, Jhonathan; McClean, Jarrod; Tranter, Andrew; Barends, Rami; Kelly, Julian; Chen, Yu; Chen, Zijun; Jeffrey, Evan; Fowler, Austin; Megrant, Anthony; Mutus, Josh; Neill, Charles; Quintana, Christopher; Roushan, Pedram; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Theodore; Love, Peter; Aspuru-Guzik, Alan; Neven, Hartmut; Martinis, John

    Quantum simulations of molecules have the potential to calculate industrially-important chemical parameters beyond the reach of classical methods with relatively modest quantum resources. Recent years have seen dramatic progress in both superconducting qubits and quantum chemistry algorithms. Here, we present experimental demonstrations of two fully-scalable algorithms for finding the dissociation energy of hydrogen: the variational quantum eigensolver and iterative phase estimation. This represents the first calculation of a dissociation energy to chemical accuracy with a non-precompiled algorithm. These results show the promise of chemistry as the ``killer app'' for quantum computers, even before the advent of full error-correction.
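
    The classical toy below illustrates only the variational principle behind the variational quantum eigensolver mentioned above: a one-parameter single-qubit ansatz is optimized to minimize the expectation value of a 2x2 Hamiltonian. The Hamiltonian coefficients are illustrative placeholders, not the molecular-hydrogen values, and nothing here reproduces the superconducting-qubit experiment.

      # Classical toy of the VQE idea: minimize <psi(theta)|H|psi(theta)> over a
      # one-parameter ansatz. Hamiltonian coefficients are assumed placeholders.
      import numpy as np
      from scipy.optimize import minimize_scalar

      I = np.eye(2)
      X = np.array([[0, 1], [1, 0]], dtype=float)
      Z = np.array([[1, 0], [0, -1]], dtype=float)

      H = 0.3 * I - 0.8 * Z + 0.2 * X          # assumed toy Hamiltonian in the Pauli basis

      def energy(theta):
          psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])   # single-qubit ansatz
          return psi @ H @ psi

      res = minimize_scalar(energy, bounds=(-np.pi, np.pi), method="bounded")
      print("variational energy:", res.fun)
      print("exact ground state:", np.linalg.eigvalsh(H)[0])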

  10. Atorvastatin ameliorates endothelium-specific insulin resistance induced by high glucose combined with high insulin.

    PubMed

    Yang, Ou; Li, Jinliang; Chen, Haiyan; Li, Jie; Kong, Jian

    2016-09-01

    The aim of the present study was to establish an endothelial cell model of endothelium-specific insulin resistance to evaluate the effect of atorvastatin on insulin resistance-associated endothelial dysfunction and to identify the potential pathway responsible for its action. Cultured human umbilical vein endothelial cells (HUVECs) were pretreated with different concentrations of glucose with, or without, 10⁻⁵ M insulin for 24 h, following which the cells were treated with atorvastatin. The tyrosine phosphorylation of insulin receptor (IR) and insulin receptor substrate-1 (IRS‑1), the production of nitric oxide (NO), the activity and phosphorylation level of endothelial NO synthase (eNOS) on serine1177, and the mRNA levels of endothelin‑1 (ET‑1) were assessed during the experimental procedure. Treatment of the HUVECs with 30 mM glucose and 10⁻⁵ M insulin for 24 h impaired insulin signaling, with reductions in the tyrosine phosphorylation of IR and protein expression of IRS‑1 by almost 75 and 65%, respectively. This, in turn, decreased the activity and phosphorylation of eNOS on serine1177, and reduced the production of NO by almost 80%. By contrast, the mRNA levels of ET‑1 were upregulated. All these changes were ameliorated by atorvastatin. Taken together, these results demonstrated that high concentrations of glucose and insulin impaired insulin signaling leading to endothelial dysfunction, and that atorvastatin ameliorated these changes, acting primarily through the phosphatidylinositol 3-kinase/Akt/eNOS signaling pathway. PMID:27484094

  11. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data

    PubMed Central

    2014-01-01

    Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator, that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the
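
    The fragment below illustrates the benchmark's notion of a "correctly mapped read" described above: a mapping counts as correct only if both the start and end positions fall within a tolerance of the expected positions and the indel and substitution counts match. The field names and tolerance are assumptions for illustration, not the CuReSimEval defaults.

      # Illustrative check of the "correctly mapped read" definition; names and the
      # position tolerance are assumed, not taken from CuReSimEval.
      def correctly_mapped(expected, observed, pos_tolerance=0):
          return (abs(observed["start"] - expected["start"]) <= pos_tolerance
                  and abs(observed["end"] - expected["end"]) <= pos_tolerance
                  and observed["indels"] == expected["indels"]
                  and observed["substitutions"] == expected["substitutions"])

      expected = {"start": 1200, "end": 1299, "indels": 1, "substitutions": 2}
      observed = {"start": 1200, "end": 1299, "indels": 1, "substitutions": 2}
      print(correctly_mapped(expected, observed))   # True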

  12. ANDI-03: a genetic algorithm tool for the analysis of activation detector data to unfold high-energy neutron spectra.

    PubMed

    Mukherjee, Bhaskar

    2004-01-01

    The thresholds of (n,xn) reactions in various activation detectors are commonly used to unfold neutron spectra covering a broad energy span, i.e. from thermal to several hundreds of MeV. The saturation activities of the daughter nuclides (i.e. reaction products) serve as the input data of specific spectra unfolding codes, such as SAND-II and LOUHI-83. However, most spectra unfolding codes, including the above, require an a priori (guess) spectrum to start up the unfolding procedure of an unknown spectrum. The accuracy and exactness of the resulting spectrum depend primarily on the subjectively chosen guess spectrum. On the other hand, the Genetic Algorithm (GA)-based spectra unfolding technique ANDI-03 (Activation-detector Neutron DIfferentiation) presented in this report does not require a specific starting parameter. The GA is a robust problem-solving tool, which emulates the Darwinian theory of evolution prevailing in the biological world and is ideally suited to optimise complex objective functions globally in a large multidimensional solution space. The activation data of the 27Al(n,alpha)24Na, 116In(n,gamma)116mIn, 12C(n,2n)11C and 209Bi(n,xn)(210-x)Bi reactions recorded at the high-energy neutron field of the ISIS spallation source (Rutherford Appleton Laboratory, UK) were obtained from the literature and, by applying the ANDI-03 GA tool, these data were used to unfold the neutron spectra. The total neutron fluence derived from the neutron spectrum unfolded using the GA technique (ANDI-03) agreed within +/-6.9% (at shield top level) and +/-27.2% (behind a 60 cm thick concrete shield) with that unfolded with the SAND-II code.
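
    The sketch below illustrates the unfolding idea in miniature: a genetic algorithm evolves candidate group fluences so that the activities predicted through a response matrix match the measured saturation activities. The response matrix and "measured" activities are synthetic placeholders, not the ISIS activation data, and the GA operators are generic rather than the ANDI-03 implementation.

      # GA-based spectrum unfolding in miniature: evolve candidate group fluences so that
      # response @ spectrum reproduces the measured activities. All data are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      n_groups, n_detectors = 8, 4

      response = rng.uniform(0.0, 1.0, size=(n_detectors, n_groups))   # assumed responses
      true_spectrum = rng.uniform(0.0, 10.0, size=n_groups)
      measured = response @ true_spectrum                               # synthetic activities

      def fitness(spectrum):
          return -np.sum((response @ spectrum - measured) ** 2)         # least-squares misfit

      pop = rng.uniform(0.0, 10.0, size=(100, n_groups))
      for _ in range(500):
          scores = np.array([fitness(s) for s in pop])
          elite = pop[np.argsort(scores)][-25:]                         # selection
          kids = elite[rng.integers(0, 25, 75)] * rng.normal(1.0, 0.05, (75, n_groups))
          pop = np.vstack([elite, np.clip(kids, 0.0, None)])            # mutation, fluence >= 0

      best = pop[np.argmax([fitness(s) for s in pop])]
      print("activity misfit:", response @ best - measured)             # near zero when converged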

  13. ANDI-03: a genetic algorithm tool for the analysis of activation detector data to unfold high-energy neutron spectra.

    PubMed

    Mukherjee, Bhaskar

    2004-01-01

    The thresholds of (n,xn) reactions in various activation detectors are commonly used to unfold neutron spectra covering a broad energy span, i.e. from thermal to several hundreds of MeV. The saturation activities of the daughter nuclides (i.e. reaction products) serve as the input data of specific spectra unfolding codes, such as SAND-II and LOUHI-83. However, most spectra unfolding codes, including the above, require an a priori (guess) spectrum to start up the unfolding procedure of an unknown spectrum. The accuracy and exactness of the resulting spectrum depend primarily on the subjectively chosen guess spectrum. On the other hand, the Genetic Algorithm (GA)-based spectra unfolding technique ANDI-03 (Activation-detector Neutron DIfferentiation) presented in this report does not require a specific starting parameter. The GA is a robust problem-solving tool, which emulates the Darwinian theory of evolution prevailing in the biological world and is ideally suited to optimise complex objective functions globally in a large multidimensional solution space. The activation data of the 27Al(n,alpha)24Na, 116In(n,gamma)116mIn, 12C(n,2n)11C and 209Bi(n,xn)(210-x)Bi reactions recorded at the high-energy neutron field of the ISIS spallation source (Rutherford Appleton Laboratory, UK) were obtained from the literature and, by applying the ANDI-03 GA tool, these data were used to unfold the neutron spectra. The total neutron fluence derived from the neutron spectrum unfolded using the GA technique (ANDI-03) agreed within +/-6.9% (at shield top level) and +/-27.2% (behind a 60 cm thick concrete shield) with that unfolded with the SAND-II code. PMID:15353654

  14. A rain pixel recovery algorithm for videos with highly dynamic scenes.

    PubMed

    Jie Chen; Lap-Pui Chau

    2014-03-01

    Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, in which photometric, chromatic, and probabilistic properties of the rain have been exploited to detect and remove the rainy effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, however, these methods give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After applying photometric and chromatic constraints for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion cues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm performs much better for rainy scenes with large motion than existing algorithms.

  15. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of the data, and a band/feature selection method needs to be used to select an optimal subset of the original data bands. This study examined the efficiency of the genetic algorithm (GA) in band selection for remote sensing classification. A GA-based algorithm for band selection was designed in which a Bhattacharyya distance index that indicates the separability between classes of interest is used as the fitness function. A binary string chromosome is designed in which each gene location has a value of 1 if the corresponding band is included or 0 if it is not. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
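
    The sketch below illustrates the chromosome and fitness described above: a binary string marks which bands are kept, and the Bhattacharyya distance between two classes, computed on the selected bands, is maximized by a simple GA. The training data are synthetic Gaussians standing in for pixels of two classes, and this is a Python illustration rather than the paper's MATLAB implementation; in practice the number of selected bands would also be constrained or penalized.

      # GA band selection sketch: binary chromosome (1 = band kept), fitness = Bhattacharyya
      # distance between two classes on the selected bands. Data are synthetic stand-ins.
      import numpy as np

      rng = np.random.default_rng(2)
      n_bands, n_pix = 20, 300
      class1 = rng.normal(0.0, 1.0, (n_pix, n_bands))
      class2 = rng.normal(0.4, 1.2, (n_pix, n_bands))

      def bhattacharyya(mask):
          idx = np.flatnonzero(mask)
          if idx.size == 0:
              return -np.inf
          x1, x2 = class1[:, idx], class2[:, idx]
          m1, m2 = x1.mean(0), x2.mean(0)
          c1 = np.cov(x1, rowvar=False) + 1e-6 * np.eye(idx.size)
          c2 = np.cov(x2, rowvar=False) + 1e-6 * np.eye(idx.size)
          c = 0.5 * (c1 + c2)
          diff = m1 - m2
          term1 = 0.125 * diff @ np.linalg.solve(c, diff)
          term2 = 0.5 * np.log(np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
          return term1 + term2

      pop = rng.integers(0, 2, size=(40, n_bands))
      for _ in range(100):
          scores = np.array([bhattacharyya(m) for m in pop])
          elite = pop[np.argsort(scores)][-10:]                   # selection
          kids = elite[rng.integers(0, 10, 30)].copy()
          flip = rng.random(kids.shape) < 0.05                    # bit-flip mutation
          kids[flip] = 1 - kids[flip]
          pop = np.vstack([elite, kids])

      best = pop[np.argmax([bhattacharyya(m) for m in pop])]
      print("selected bands:", np.flatnonzero(best))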

  16. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    SciTech Connect

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-10-15

    Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.

  17. Frontal optimization algorithms for multiprocessor computers

    SciTech Connect

    Sergienko, I.V.; Gulyanitskii, L.F.

    1981-11-01

    The authors describe one of the approaches to the construction of locally optimal optimization algorithms on multiprocessor computers. Algorithms of this type, called frontal, have been realized previously on single-processor computers, although this configuration does not fully exploit the specific features of their computational scheme. Experience with a number of practical discrete optimization problems confirms that the frontal algorithms are highly successful even with single-processor computers. 9 references.

  18. Genetic Algorithm for Innovative Device Designs in High-Efficiency III-V Nitride Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo

    2012-01-01

    Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III-V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.

  19. Fast and optimal multiframe blind deconvolution algorithm for high-resolution ground-based imaging of space objects.

    PubMed

    Matson, Charles L; Borelli, Kathy; Jefferies, Stuart; Beckner, Charles C; Hege, E Keith; Lloyd-Hart, Michael

    2009-01-01

    We report a multiframe blind deconvolution algorithm that we have developed for imaging through the atmosphere. The algorithm has been parallelized to a significant degree for execution on high-performance computers, with an emphasis on distributed-memory systems so that it can be hosted on commodity clusters. As a result, image restorations can be obtained in seconds to minutes. We have compared and quantified the quality of its image restorations relative to the associated Cramér-Rao lower bounds (when they can be calculated). We describe the algorithm and its parallelization in detail, demonstrate the scalability of its parallelization across distributed-memory computer nodes, discuss the results of comparing sample variances of its output to the associated Cramér-Rao lower bounds, and present image restorations obtained by using data collected with ground-based telescopes.

  20. Genetic Algorithm for Innovative Device Designs in High-Efficiency III–V Nitride Light-Emitting Diodes

    SciTech Connect

    Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo

    2012-01-01

    Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.

  1. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. ICA and FastICA algorithms are defined as Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: 1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; 2) automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings, respectively; 3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and conducting simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision, respectively.
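
    As a minimal illustration of the unmix-then-cluster pipeline described above, the sketch below runs scikit-learn's standard FastICA on per-pixel color features and then K-means with two clusters to extract a crude "building" mask. It does not implement the PFICA modifications (Moore-Penrose seeding, LUV-based region rules, morphological cleanup); the random image tile and the brightness heuristic are assumptions for demonstration only.

      # Unmix-then-cluster sketch with standard FastICA + KMeans (not the modified PFICA).
      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      h, w = 64, 64
      image = rng.random((h, w, 3))                    # stand-in for a Google Earth RGB tile
      pixels = image.reshape(-1, 3).astype(float)

      ica = FastICA(n_components=3, random_state=0)
      sources = ica.fit_transform(pixels)              # independent components per pixel

      # Pick the component most correlated with brightness and split it into two clusters,
      # keeping the brighter cluster as a crude building mask (illustrative heuristic only).
      brightness = pixels.mean(axis=1)
      best_comp = np.argmax([abs(np.corrcoef(sources[:, k], brightness)[0, 1]) for k in range(3)])
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sources[:, [best_comp]])
      building_label = np.argmax([brightness[labels == k].mean() for k in range(2)])
      mask = (labels == building_label).reshape(h, w)
      print("candidate building pixels:", int(mask.sum()))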

  2. High specificity but low sensitivity of mutation-specific antibodies against EGFR mutations in non-small-cell lung cancer.

    PubMed

    Bondgaard, Anna-Louise; Høgdall, Estrid; Mellemgaard, Anders; Skov, Birgit G

    2014-12-01

    Determination of epidermal growth factor receptor (EGFR) mutations has a pivotal impact on treatment of non-small-cell lung cancer (NSCLC). A standardized test has not yet been approved. So far, Sanger DNA sequencing has been widely used. Its rather low sensitivity has led to the development of more sensitive methods including real-time PCR (RT-PCR). Immunohistochemistry with mutation-specific antibodies might be a promising detection method. We evaluated 210 samples with NSCLC from an unselected Caucasian population. Extracted DNA was analyzed for EGFR mutations by RT-PCR (Therascreen EGFR PCR kit, Qiagen, UK; reference method). For immunohistochemistry, antibodies against exon19 deletions (clone 6B6), exon21 mutations (clone 43B2) from Cell Signaling Technology (Boston, USA) and EGFR variantIII (clone 218C9) from Dako (Copenhagen, DK) were applied. Protein expression was evaluated, and a staining score (the product of the intensity (graded 0-3) and the percentage (0-100%) of stained tumor cells) was calculated. Positivity was defined as a staining score >0. Specificity of the exon19 antibody was 98.8% (95% confidence interval=95.9-99.9%) and of the exon21 antibody 97.8% (95% confidence interval=94.4-99.4%). Sensitivity of the exon19 antibody was 63.2% (95% confidence interval=38.4-83.7%) and of the exon21 antibody 80.0% (95% confidence interval=44.4-97.5%). Seven exon19 and four exon21 mutations were false negatives (immunohistochemistry negative, RT-PCR positive). Two exon19 and three exon21 mutations were false positives (immunohistochemistry positive, RT-PCR negative). One false-positive exon21 case had a staining score of 300. The EGFR variantIII antibody showed no correlation with EGFR mutation status determined by RT-PCR or with EGFR immunohistochemistry. High specificity of the mutation-specific antibodies was demonstrated. However, sensitivity was low, especially for exon19 deletions, and thus these antibodies cannot yet be used as a screening method for EGFR mutations in NSCLC.

  3. An evaluation of SEBAL algorithm using high resolution aircraft data acquired during BEAREX07

    NASA Astrophysics Data System (ADS)

    Paul, G.; Gowda, P. H.; Prasad, V. P.; Howell, T. A.; Staggenborg, S.

    2010-12-01

    Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade SEBAL has been tested over various regions and has found application in solving water resources and irrigation problems. This research combines high resolution remote sensing data and field measurements of surface radiation and agro-meteorological variables to review various SEBAL steps for mapping ET in the Texas High Plains (THP). High resolution aircraft images (0.5-1.8 m) acquired during the Bushland Evapotranspiration and Agricultural Remote Sensing Experiment 2007 (BEAREX07), conducted at the USDA-ARS Conservation and Production Research Laboratory in Bushland, Texas, were utilized to evaluate SEBAL. Accuracy of individual relationships and predicted ET were investigated using observed hourly ET rates from 4 large weighing lysimeters, each located at the center of a 4.7 ha field. The uniqueness and strength of this study come from the fact that it evaluates SEBAL for irrigated and dryland conditions simultaneously, with the four lysimeter fields planted to irrigated forage sorghum, irrigated forage corn, dryland clumped grain sorghum, and dryland row sorghum, respectively. Improved coefficients for local conditions were developed for the computation of the roughness length for momentum transport. The decision involved in the selection of dry and wet pixels, which essentially determines the partitioning of the available energy between sensible (H) and latent (LE) heat fluxes, is discussed. The difference in roughness lengths, referred to as the kB-1 parameter, was modified in the current study. Performance of SEBAL was evaluated using mean bias error (MBE) and root mean square error (RMSE). An RMSE of ±37.68 W m-2 and ±0.11 mm h-1 was observed for the net radiation and hourly actual ET, respectively.
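
    For orientation, the fragment below shows the energy-balance bookkeeping at the core of SEBAL: latent heat flux is obtained as the residual LE = Rn - G - H and converted to an hourly ET depth. The numbers and constants are illustrative; the locally calibrated roughness-length and kB-1 coefficients developed in the study are not represented.

      # Energy-balance residual sketch: LE = Rn - G - H, converted to an hourly ET depth.
      LAMBDA = 2.45e6          # latent heat of vaporization, J kg^-1 (approximate)

      def hourly_et_mm(rn, g, h):
          """Net radiation, soil heat flux, sensible heat flux in W m^-2 -> ET in mm h^-1."""
          le = rn - g - h                           # latent heat flux as the residual, W m^-2
          et_kg_per_m2_s = max(le, 0.0) / LAMBDA    # evaporation rate
          return et_kg_per_m2_s * 3600.0            # 1 kg m^-2 of water equals 1 mm depth

      print(hourly_et_mm(rn=550.0, g=80.0, h=120.0))   # ~0.51 mm h^-1 for these example fluxes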

  4. Towards material-specific simulations of high-temperature superconducting cuprates

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas

    2006-03-01

    Simulations of high-temperature superconducting (HTSC) cuprates have typically fallen into two categories: (1) studies of generic models such as the two-dimensional (2D) Hubbard model, which are believed to capture the essential physics necessary to describe the superconducting state, and (2) first-principles electronic structure calculations that are based on the local density approximation (LDA) to density functional theory (DFT) and lead to materials-specific models. With the advent of massively parallel vector supercomputers, such as the Cray X1E at ORNL, and cluster algorithms such as the Dynamical Cluster Approximation (DCA), it is now possible to systematically solve the 2D Hubbard model with Quantum Monte Carlo (QMC) simulations and to establish that the model indeed describes d-wave superconductivity [1]. Furthermore, studies of a multi-band model with input parameters generated from LDA calculations demonstrate that the existence of a superconducting transition is very sensitive to the underlying band structure [2]. Application of the LDA to transition metal oxides is, however, hampered by spurious self-interactions that particularly affect localized orbitals. Here we apply the self-interaction corrected local spin-density method (SIC-LSD) to describe the electronic structure of the cuprates. It was recently applied with success to generate input parameters for simple models of Mn-doped III-V semiconductors [3] and is known to properly describe the antiferromagnetic insulating ground state of the parent compounds of the HTSC cuprates. We will discuss the models for HTSC cuprates derived from the SIC-LSD study and how the differences from the well-known LDA results impact the QMC-DCA simulations of the magnetic and superconducting properties. [1] T. A. Maier, M. Jarrell, T. C. Schulthess, P. R. C. Kent, and J. B. White, Phys. Rev. Lett. 95, 237001 (2005). [2] P. Kent, A. Macridin, M. Jarrell, T. Schulthess, O. Andersen, T. Dasgupta, and O. Jepsen, Bulletin of

  5. High frequency, high temperature specific core loss and dynamic B-H hysteresis loop characteristics of soft magnetic alloys

    NASA Technical Reports Server (NTRS)

    Wieserman, W. R.; Schwarze, G. E.; Niedra, J. M.

    1990-01-01

    Limited experimental data exists for the specific core loss and dynamic B-H loops for soft magnetic materials for the combined conditions of high frequency and high temperature. This experimental study investigates the specific core loss and dynamic B-H loop characteristics of Supermalloy and Metglas 2605SC over the frequency range of 1 to 50 kHz and temperature range of 23 to 300 C under sinusoidal voltage excitation. The experimental setup used to conduct the investigation is described. The effects of the maximum magnetic flux density, frequency, and temperature on the specific core loss and on the size and shape of the B-H loops are examined.

  6. Distinct tubulin dynamics in cancer cells explored using a highly tubulin-specific fluorescent probe.

    PubMed

    Zhu, Cuige; Zuo, Yinglin; Liang, Baoxia; Yue, Hong; Yue, Xin; Wen, Gesi; Wang, Ruimin; Quan, Junmin; Du, Jun; Bu, Xianzhang

    2015-09-01

    A highly specific fluorescent probe (OC9) was discovered exhibiting tubulin-specific affinity fluorescence, which allowed selective labeling of cellular tubulin in microtubules. Moreover, distinct tubulin dynamics in various cellular bio-settings such as drug resistant or epithelial-mesenchymal transition (EMT) cancer cells were directly observed for the first time via OC9 staining.

  7. Educational Specifications for the New Ridley High School as Prepared by the School's Administration and Faculty.

    ERIC Educational Resources Information Center

    1998

    This document presents educational specifications for a new high school in Ridley school District in Pennsylvania. The specifications are offered by the school's administration and faculty. They address in detail the following areas: (1) exterior (including site and building exterior); (2) building interior; (3) classrooms; (4) department centers;…

  8. High-resolution combined global gravity field modelling: Solving large kite systems using distributed computational algorithms

    NASA Astrophysics Data System (ADS)

    Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas

    2016-04-01

    One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high-degree homogeneous information (e.g. gridded and reduced gravity anomalies, beyond d/o 1000). Current approaches mostly combine such data only on the basis of the coefficients, meaning that a spherical harmonic analysis is first done independently for each observation class (resp. model), solving dense normal equations (NEQ) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Obviously, such methods are unable to identify or eliminate effects such as spectral leakage due to band limitations of the models and the non-orthogonality of the spherical harmonic base functions. To counteract such problems a combination of both models on the NEQ basis is desirable. Theoretically this can be achieved using NEQ stacking. Because of the higher maximum degree of the homogeneous model a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also its simple sparsity. Hence, a special coefficient ordering is needed to create a new favorable sparsity pattern that allows an efficient solution method. Such a pattern can be found in the so-called kite structure (Bosch, 1993), obtained when applying the kite ordering to the stacked NEQ matrix. In a first step it is shown what is needed to attain the kite (NEQ) system, how to solve it efficiently and also how to calculate the appropriate variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is put on the presentation of special distributed algorithms which may solve those systems in parallel on an arbitrary number of processes and are

  9. A high-accuracy algorithm for designing arbitrary holographic atom traps.

    PubMed

    Pasienski, Matthew; Demarco, Brian

    2008-02-01

    We report the realization of a new iterative Fourier-transform algorithm for creating holograms that can diffract light into an arbitrary two-dimensional intensity profile. We show that the predicted intensity distributions are smooth with a fractional error from the target distribution at the percent level. We demonstrate that this new algorithm outperforms the most frequently used alternatives typically by one and two orders of magnitude in accuracy and roughness, respectively. The techniques described in this paper outline a path to creating arbitrary holographic atom traps in which the only remaining hurdle is physical implementation.
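
    For context, the sketch below is the textbook iterative Fourier-transform (Gerchberg-Saxton-style) loop that this class of hologram-design algorithms builds on: the known illumination amplitude is enforced in the hologram plane and the target intensity in the image plane, keeping only the phase from each transform. The refinements responsible for the paper's reported accuracy (e.g., restricting the cost to a signal region) are deliberately omitted, and the square target pattern is an assumption.

      # Baseline Gerchberg-Saxton-style iterative Fourier-transform loop for hologram design.
      import numpy as np

      n = 128
      target_amp = np.zeros((n, n))
      target_amp[48:80, 48:80] = 1.0                    # assumed target: a flat square trap
      incident_amp = np.ones((n, n))                    # uniform illumination of the SLM

      phase = 2 * np.pi * np.random.default_rng(4).random((n, n))
      for _ in range(100):
          field_slm = incident_amp * np.exp(1j * phase)
          field_img = np.fft.fft2(field_slm)
          field_img = target_amp * np.exp(1j * np.angle(field_img))   # impose target amplitude
          field_slm = np.fft.ifft2(field_img)
          phase = np.angle(field_slm)                                  # impose incident amplitude

      hologram_phase = phase                            # phase pattern to display on the SLM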

  10. Dynamical analysis of Grover's search algorithm in arbitrarily high-dimensional search spaces

    NASA Astrophysics Data System (ADS)

    Jin, Wenliang

    2016-01-01

    We discuss at length the dynamical behavior of Grover's search algorithm for which all the Walsh-Hadamard transformations contained in this algorithm are exposed to their respective random perturbations inducing the augmentation of the dimension of the search space. We give the concise and general mathematical formulations for approximately characterizing the maximum success probabilities of finding a unique desired state in a large unsorted database and their corresponding numbers of Grover iterations, which are applicable to the search spaces of arbitrary dimension and are used to answer a salient open problem posed by Grover (Phys Rev Lett 80:4329-4332, 1998).
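
    As a numeric point of reference for the ideal, unperturbed case that the paper's formulas generalize, the fragment below evaluates the standard Grover success probability sin^2((2k+1)θ) with sin θ = 1/√N and the corresponding near-optimal iteration count. It covers the noiseless baseline only, not the perturbed Walsh-Hadamard analysis of the paper.

      # Ideal Grover baseline: success probability and near-optimal iteration count.
      import numpy as np

      def grover_success(n_items, k_iterations):
          theta = np.arcsin(1.0 / np.sqrt(n_items))
          return np.sin((2 * k_iterations + 1) * theta) ** 2

      n = 1_000_000
      k_opt = int(np.floor(np.pi / (4 * np.arcsin(1.0 / np.sqrt(n)))))
      print(k_opt, grover_success(n, k_opt))    # ~785 iterations, success probability ~1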

  11. Implementation and testing of a sensor-netting algorithm for early warning and high confidence C/B threat detection

    NASA Astrophysics Data System (ADS)

    Gruber, Thomas; Grim, Larry; Fauth, Ryan; Tercha, Brian; Powell, Chris; Steinhardt, Kristin

    2011-05-01

    Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information, questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being developed to resolve these conflicts and to report high confidence consensus threat map data products on a common operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort and development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm has produced some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the location of an explosion) from various ISR sensors (e.g., acoustic, infrared cameras, etc.). These input data are preprocessed to assign estimated uncertainty to each incoming piece of data. The data are then sent to a weighted tomography process to obtain a consensus threat map, including estimated threat concentration level uncertainty. The threat map is then tested for consistency and the overall confidence for the map result is estimated. The map and confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons of fused / un-fused data results will be presented. The metrics for judging the sensor-netting algorithm performance are warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate v. reported threat confidence level.

  12. A new algorithm for a high-modulation frequency and high-speed digital lock-in amplifier

    NASA Astrophysics Data System (ADS)

    Jiang, G. L.; Yang, H.; Li, R.; Kong, P.

    2016-01-01

    To increase the maximum modulation frequency of the digital lock-in amplifier in an online system, we propose a new algorithm using a square wave reference whose frequency is an odd sub-multiple of the modulation frequency, exploiting the odd harmonic components of the square wave reference. The sampling frequency is four times the modulation frequency to ensure the orthogonality of the reference sequences. Only additions and subtractions are used to implement phase-sensitive detection, which speeds up the lock-in computation. Furthermore, the maximum modulation frequency of the lock-in is enhanced considerably. The feasibility of this new algorithm is tested by simulation and experiments.
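
    The sketch below shows why sampling at four times the modulation frequency lets phase-sensitive detection reduce to additions and subtractions: per period, the in-phase and quadrature accumulators are simply s0 - s2 and s1 - s3. The signal parameters are illustrative, and the paper's odd-sub-multiple square-wave reference trick is not reproduced here.

      # Four-samples-per-period digital lock-in: only additions and subtractions per period.
      import numpy as np

      f_mod = 1.0e4                       # modulation frequency, Hz (assumed)
      fs = 4 * f_mod                      # sampling at four times the modulation frequency
      n_periods = 500
      t = np.arange(4 * n_periods) / fs

      rng = np.random.default_rng(5)
      signal = 0.7 * np.sin(2 * np.pi * f_mod * t + 0.4) + rng.normal(0, 0.5, t.size)

      s = signal.reshape(n_periods, 4)
      x = np.sum(s[:, 0] - s[:, 2])       # in-phase accumulator
      y = np.sum(s[:, 1] - s[:, 3])       # quadrature accumulator

      amplitude = np.hypot(x, y) / (2 * n_periods)
      phase = np.arctan2(x, y)
      print(amplitude, phase)             # close to 0.7 and 0.4 for this synthetic signal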

  13. Pseudonephritis is associated with high urinary osmolality and high specific gravity in adolescent soccer players.

    PubMed

    Van Biervliet, Stephanie; Van Biervliet, Jean Pierre; Watteyne, Karel; Langlois, Michel; Bernard, Dirk; Vande Walle, Johan

    2013-08-01

    The study aimed to evaluate the effect of exercise on urine sediment in adolescent soccer players. In 25 15-year-old (range 14.4-15.8 yrs) athletes, urinary protein, osmolality and cytology were analyzed by flow cytometry and automated dipstick analysis before (T(0)), during (T(1)), and after a match (T(2)). All athletes had normal urine analysis and blood pressure at rest, tested before the start of the soccer season. Fifty-eight samples were collected (T(0): 20, T(1): 17, T(2): 21). Proteinuria was present in 20 of 38 samples collected after exercise. Proteinuria was associated with increased urinary osmolality (p < .001) and specific gravity (p < .001). Hyaline and granular casts were present in respectively 8 of 38 and 8 of 38 of the urinary samples after exercise. The presence of casts was associated with urine protein concentration, osmolality, and specific gravity. This was also the case for hematuria (25 of 38) and leucocyturia (9 of 38). Squamous epithelial cells were excreted in equal amounts to white and red blood cells. A notable proportion of adolescent athletes developed sediment abnormalities, which were associated with urinary osmolality and specific gravity.

  14. WExplore: hierarchical exploration of high-dimensional spaces using the weighted ensemble algorithm.

    PubMed

    Dickson, Alex; Brooks, Charles L

    2014-04-01

    As most relevant motions in biomolecular systems are inaccessible to conventional molecular dynamics simulations, algorithms that enhance sampling of rare events are indispensable. Increasing interest in intrinsically disordered systems and the desire to target ensembles of protein conformations (rather than single structures) in drug development motivate the need for enhanced sampling algorithms that are not limited to "two-basin" problems, and can efficiently determine structural ensembles. For systems that are not well-studied, this must often be done with little or no information about the dynamics of interest. Here we present a novel strategy to determine structural ensembles that uses dynamically defined sampling regions that are organized in a hierarchical framework. It is based on the weighted ensemble algorithm, where an ensemble of copies of the system ("replicas") is directed to new regions of configuration space through merging and cloning operations. The sampling hierarchy allows for a large number of regions to be defined, while using only a small number of replicas that can be balanced over multiple length scales. We demonstrate this algorithm on two model systems that are analytically solvable and examine the 10-residue peptide chignolin in explicit solvent. The latter system is analyzed using a configuration space network, and novel hydrogen bonds are found that facilitate folding.
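
    The fragment below is a minimal weighted-ensemble sketch on a 1D toy system: replicas carry statistical weights and, after each dynamics segment, replicas within each occupied region are resampled toward a target count while conserving total weight. It uses fixed bins and a generic resampling step, not the hierarchical, dynamically defined regions or the exact merge/clone bookkeeping of WExplore; the potential and all parameters are assumptions.

      # Minimal weighted-ensemble sketch on a 1D double-well toy system (not WExplore itself).
      import numpy as np

      rng = np.random.default_rng(6)
      n_replicas, target_per_bin, n_cycles = 24, 4, 50
      bins = np.linspace(-3.0, 3.0, 13)                    # fixed 1D regions

      positions = np.zeros(n_replicas)
      weights = np.full(n_replicas, 1.0 / n_replicas)

      def propagate(x):
          """Toy overdamped dynamics in the double-well potential U(x) = (x^2 - 1)^2."""
          force = -4.0 * x * (x ** 2 - 1.0)
          return x + 0.01 * force + np.sqrt(2 * 0.01) * rng.normal(size=x.shape)

      for _ in range(n_cycles):
          for _ in range(10):
              positions = propagate(positions)
          bin_ids = np.digitize(positions, bins)
          new_pos, new_w = [], []
          for b in np.unique(bin_ids):
              idx = np.flatnonzero(bin_ids == b)
              w_total = weights[idx].sum()
              # Resample replicas in this region to the target count, conserving total weight.
              chosen = rng.choice(idx, size=target_per_bin, p=weights[idx] / w_total)
              new_pos.extend(positions[chosen])
              new_w.extend([w_total / target_per_bin] * target_per_bin)
          positions, weights = np.array(new_pos), np.array(new_w)

      print("estimated probability of the x > 0 basin:", weights[positions > 0].sum())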

  15. Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.

    2015-07-01

    The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called Craziness based Particle Swarm Optimization (CRPSO) is proposed. CRPSO is very simple in concept, easy to implement and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, this sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO based design results are also compared with PSPICE based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
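
    The sketch below illustrates the "craziness" idea in a generic PSO loop: standard velocity and position updates plus, with a small probability, a perturbation that reverses a particle's velocity and adds a random kick to preserve diversity. The objective is a stand-in test function, and the update constants are assumptions; it is not the paper's CMOS inverter sizing objective or its exact CRPSO formulation.

      # Generic PSO with a "craziness" perturbation applied with small probability.
      import numpy as np

      rng = np.random.default_rng(7)
      dim, n_particles, iters = 2, 30, 200
      w, c1, c2, p_craziness, v_craz = 0.7, 1.5, 1.5, 0.05, 0.5

      def objective(x):                       # stand-in cost to minimize (sphere function)
          return np.sum(x ** 2, axis=-1)

      pos = rng.uniform(-5, 5, (n_particles, dim))
      vel = rng.uniform(-1, 1, (n_particles, dim))
      pbest, pbest_val = pos.copy(), objective(pos)
      gbest = pbest[np.argmin(pbest_val)]

      for _ in range(iters):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          crazy = rng.random(n_particles) < p_craziness
          # craziness: reverse the velocity and add a random kick for the selected particles
          vel[crazy] = -vel[crazy] + v_craz * rng.uniform(-1, 1, (crazy.sum(), dim))
          pos = pos + vel
          vals = objective(pos)
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[np.argmin(pbest_val)]

      print("best solution:", gbest, "cost:", objective(gbest))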

  16. An Evaluation of SEBAL Algorithm Using High Resolution Aircraft Data Acquired During BEAREX07

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade, SEBAL has been tested over various...

  17. On the role of marginal confounder prevalence – implications for the high-dimensional propensity score algorithm

    PubMed Central

    Schuster, Tibor; Pang, Menglan; Platt, Robert W

    2016-01-01

    PURPOSE The high-dimensional propensity score algorithm attempts to improve control of confounding in typical treatment effect studies in pharmacoepidemiology and is increasingly being used for the analysis of large administrative databases. Within this multi-step variable selection algorithm, the marginal prevalence of non-zero covariate values is considered to be an indicator for a count variable's potential confounding impact. We investigate the role of the marginal prevalence of confounder variables on potentially caused bias magnitudes when estimating risk ratios in point exposure studies with binary outcomes. METHODS We apply the law of total probability in conjunction with an established bias formula to derive and illustrate relative bias boundaries with respect to marginal confounder prevalence. RESULTS We show that maximum possible bias magnitudes can occur at any marginal prevalence level of a binary confounder variable. In particular, we demonstrate that, in case of rare or very common exposures, low and high prevalent confounder variables can still have large confounding impact on estimated risk ratios. CONCLUSIONS Covariate pre-selection by prevalence may lead to sub-optimal confounder sampling within the high-dimensional propensity score algorithm. While we believe that the high-dimensional propensity score has important benefits in large-scale pharmacoepidemiologic studies, we recommend omitting the prevalence-based empirical identification of candidate covariates. PMID:25866189
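
    The fragment below illustrates the bias-formula reasoning referred to above: for a binary confounder, the relative bias in the risk ratio depends on the confounder's prevalence among the exposed (p1) and unexposed (p0) and on its association with the outcome, not on its marginal prevalence alone. The expression is the standard multiplicative bias formula for an unmeasured binary confounder; the numbers are arbitrary examples.

      # Relative bias in the risk ratio from a binary confounder (apparent RR / true RR).
      def relative_bias(p1, p0, rr_cd):
          """p1, p0: confounder prevalence in exposed/unexposed; rr_cd: confounder-outcome RR."""
          return (p1 * (rr_cd - 1.0) + 1.0) / (p0 * (rr_cd - 1.0) + 1.0)

      # Even a confounder with low marginal prevalence can bias strongly when it is
      # unbalanced between exposure groups and strongly related to the outcome:
      print(relative_bias(p1=0.10, p0=0.01, rr_cd=10.0))   # ~1.74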

  18. Highly Specific, Bi-substrate-Competitive Src Inhibitors from DNA-Templated Macrocycles

    PubMed Central

    Georghiou, George; Kleiner, Ralph E.; Pulkoski-Gross, Michael

    2011-01-01

    Protein kinases are attractive therapeutic targets, but their high sequence and structural conservation complicates the development of specific inhibitors. We recently discovered from a DNA-templated macrocycle library inhibitors with unusually high selectivity among Src-family kinases. Starting from these compounds, we developed and characterized in molecular detail potent macrocyclic inhibitors of Src kinase and its cancer-associated gatekeeper mutant. We solved two co-crystal structures of macrocycles bound to Src kinase. These structures reveal the molecular basis of the combined ATP- and substrate peptide-competitive inhibitory mechanism and the remarkable kinase specificity of the compounds. The most potent compounds inhibit Src activity in cultured mammalian cells. Our work establishes that macrocycles can inhibit protein kinases through a bi-substrate competitive mechanism with high potency and exceptional specificity, reveals the precise molecular basis for their desirable properties, and provides new insights into the development of Src-specific inhibitors with potential therapeutic relevance. PMID:22344177

  19. Highly specific, bisubstrate-competitive Src inhibitors from DNA-templated macrocycles.

    PubMed

    Georghiou, George; Kleiner, Ralph E; Pulkoski-Gross, Michael; Liu, David R; Seeliger, Markus A

    2012-02-19

    Protein kinases are attractive therapeutic targets, but their high sequence and structural conservation complicates the development of specific inhibitors. We recently identified, in a DNA-templated macrocycle library, inhibitors with unusually high selectivity among Src-family kinases. Starting from these compounds, we developed and characterized in molecular detail potent macrocyclic inhibitors of Src kinase and its cancer-associated 'gatekeeper' mutant. We solved two cocrystal structures of macrocycles bound to Src kinase. These structures reveal the molecular basis of the combined ATP- and substrate peptide-competitive inhibitory mechanism and the remarkable kinase specificity of the compounds. The most potent compounds inhibit Src activity in cultured mammalian cells. Our work establishes that macrocycles can inhibit protein kinases through a bisubstrate-competitive mechanism with high potency and exceptional specificity, reveals the precise molecular basis for their desirable properties and provides new insights into the development of Src-specific inhibitors with potential therapeutic relevance.

  20. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net .
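    The spectrum-membership idea described above can be sketched on the CPU as follows: k-mers from the read set are inserted into a Bloom filter, and each read's k-mers are then tested for membership ("solid" versus suspect). This is a minimal Python sketch under assumed parameters (k, filter size, hash count); it is not the CUDA implementation, and the paper's spectrum would contain only k-mers above a multiplicity threshold, which is omitted here for brevity.

```python
# Minimal CPU sketch of k-mer spectrum membership via a Bloom filter.
import hashlib

class BloomFilter:
    """Space-efficient set membership with a small false-positive rate."""
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def build_spectrum(reads, k=15):
    """Insert every k-mer occurring in the read set into the Bloom filter."""
    spectrum = BloomFilter()
    for read in reads:
        for i in range(len(read) - k + 1):
            spectrum.add(read[i:i + k])
    return spectrum

def solid_kmers(read, spectrum, k=15):
    """Flag which k-mers of a read are present in the spectrum (True = 'solid')."""
    return [read[i:i + k] in spectrum for i in range(len(read) - k + 1)]

reads = ["ACGTACGTACGTACGTACGT", "ACGTACGTACGTACGTTCGT"]
print(solid_kmers("ACGTACGTACGTACGTACGT", build_spectrum(reads)))
```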

  1. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team of US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, by using SRI airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, the site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum degree of the need for human perception in the processing to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  2. Advanced algorithms and high-performance testbed for large-scale site characterization and subsurface target detection using airborne ground-penetrating SAR

    NASA Astrophysics Data System (ADS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1999-08-01

    A team of US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, JPL, Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field, in Colorado, by using SRI airborne, ground penetrating SAR. The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, the site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum degree of the need for human perception in the processing to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data. In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  3. Vision Algorithm for the Solar Aspect System of the High Energy Replicated Optics to Explore the Sun Mission

    NASA Technical Reports Server (NTRS)

    Cramer, Alexander Krishnan

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine the relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.
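    The fiducial-detection step is described as a matched-filter approach; a minimal sketch of that idea is shown below, correlating the image with a zero-mean cross-shaped template and keeping thresholded local maxima. The template size, neighborhood and threshold are illustrative assumptions, not the flight algorithm's values.

```python
# Hedged sketch of matched-filter fiducial detection with a cross-shaped template.
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import maximum_filter

def cross_template(size=11, arm=1):
    """Zero-mean cross-shaped template so flat image regions score near zero."""
    t = np.zeros((size, size))
    c = size // 2
    t[c - arm:c + arm + 1, :] = 1.0   # horizontal arm
    t[:, c - arm:c + arm + 1] = 1.0   # vertical arm
    return t - t.mean()

def detect_fiducials(image, threshold=0.6, neighborhood=15):
    """Return (row, col) coordinates of matched-filter peaks above the threshold."""
    response = correlate2d(image, cross_template(), mode="same", boundary="symm")
    response /= np.abs(response).max() + 1e-12
    peaks = (response == maximum_filter(response, size=neighborhood)) & (response > threshold)
    return np.argwhere(peaks)
```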

  4. Applicability of data mining algorithms in the identification of beach features/patterns on high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.

    2015-01-01

    The available beach classification algorithms and sediment budget models are mainly based on in situ parameters, usually unavailable for several coastal areas. A morphological analysis using remotely sensed data is a valid alternative. This study focuses on the application of data mining techniques, particularly decision trees (DTs) and artificial neural networks (ANNs) to an IKONOS-2 image in order to identify beach features/patterns in a stretch of the northwest coast of Portugal. Based on knowledge of the coastal features, five classes were defined. In the identification of beach features/patterns, the ANN algorithm presented an overall accuracy of 98.6% and a kappa coefficient of 0.97. The best DTs algorithm (with pruning) presents an overall accuracy of 98.2% and a kappa coefficient of 0.97. The results obtained through the ANN and DTs were in agreement. However, the ANN presented a classification more sensitive to rip currents. The use of ANNs and DTs for beach classification from remotely sensed data resulted in an increased classification accuracy when compared with traditional classification methods. The association of remotely sensed high-spatial resolution data and data mining algorithms is an effective methodology with which to identify beach features/patterns.
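    As a hedged sketch of the decision-tree step, the snippet below trains a pruned classifier on synthetic four-band pixel samples and reports the overall accuracy and kappa coefficient, the same figures of merit quoted above. The synthetic data, the five placeholder classes and the pruning parameter are assumptions, not the study's training set or settings.

```python
# Hedged sketch of per-pixel decision-tree classification of beach features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_classes, n_per_class = 5, 200
# synthetic 4-band reflectance samples, one cluster per placeholder class
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, 4)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# ccp_alpha > 0 enables cost-complexity pruning, analogous to the pruned DT in the study
clf = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa coefficient:", cohen_kappa_score(y_te, pred))
```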

  5. Tuneable ultra high specific surface area Mg/Al-CO3 layered double hydroxides.

    PubMed

    Chen, Chunping; Wangriya, Aunchana; Buffet, Jean-Charles; O'Hare, Dermot

    2015-10-01

    We report the synthesis of tuneable ultra high specific surface area Aqueous Miscible Organic solvent-Layered Double Hydroxides (AMO-LDHs). We have investigated the effects of different solvent dispersion volumes, dispersion times and the number of re-dispersion cycles on the specific surface area of AMO-LDHs. In particular, the effect of acetone dispersion on two AMO-LDHs of different morphology (Mg3Al-CO3 AMO-LDH flowers and Mg3Al-CO3 AMO-LDH plates) was investigated. It was found that the amount of acetone used in the dispersion step can significantly affect the specific surface area of Mg3Al-CO3 AMO-LDH flowers, while the dispersion time in acetone is a critical factor in obtaining high specific surface area Mg3Al-CO3 AMO-LDH plates. Optimisation of the acetone washing steps enables Mg3Al-CO3 AMO-LDHs to reach specific surface areas of up to 365 m² g⁻¹ for LDH flowers and 263 m² g⁻¹ for LDH plates. In addition, spray drying was found to be an effective and practical drying method, increasing the specific surface area by a factor of 1.75. Our findings now form the basis of an effective general strategy to obtain ultrahigh specific surface area LDHs.

  6. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs handover White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles. The ACS calculates the desired gimbal motion for tracking the ground station or for slewing

  7. A low-jitter and high-throughput scheduling based on genetic algorithm in slotted WDM networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Jin, Yaohui; Su, Yikai; Xu, Buwei; Zhang, Chunlei; Zhu, Yi; Hu, Weisheng

    2005-02-01

    Slotted WDM, which achieves higher capacity than conventional WDM and SDH networks, has received considerable attention recently, and a ring network for this architecture has been demonstrated experimentally. In a slotted WDM ring network, each node is equipped with a wavelength-tunable transmitter and a fixed receiver and is assigned a specific wavelength. A node can send data to any other node by tuning its transmitter wavelength accordingly in a time slot. One of the important issues for such a network is scheduling. Once synchronization and propagation are handled, the scheduling problem reduces to that of an input-queued switch, and many schemes have been proposed to solve these two issues. However, it has been proved that scheduling such a network while taking both jitter and throughput into consideration is NP-hard. A greedy algorithm has previously been proposed to solve it. The main contribution of this paper is a novel genetic algorithm that obtains optimal or near-optimal solutions of this specific NP-hard problem. We devise problem-specific chromosome codes, a fitness function, and crossover and mutation operations. Experimental results show that our GA provides better performance in terms of throughput and jitter than a greedy heuristic.
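    A minimal sketch of such a genetic algorithm is shown below: each chromosome assigns a destination to every (slot, source) pair, and the fitness rewards collision-free throughput while penalizing the variance of service gaps as a simple jitter proxy. The chromosome encoding, fitness weights and GA parameters are illustrative assumptions, not the paper's problem-specific operators.

```python
# Hedged GA sketch for slot scheduling with a throughput/jitter fitness.
import random
import statistics

N_NODES, N_SLOTS = 4, 16
POP, GENS, P_MUT = 40, 200, 0.05

def random_schedule():
    # schedule[slot][src] = destination node (a source never sends to itself)
    return [[random.choice([d for d in range(N_NODES) if d != s])
             for s in range(N_NODES)] for _ in range(N_SLOTS)]

def fitness(sched):
    throughput, service = 0, {}
    for t, slot in enumerate(sched):
        for src, dst in enumerate(slot):
            if slot.count(dst) == 1:          # no receiver collision in this slot
                throughput += 1
                service.setdefault((src, dst), []).append(t)
    jitter = 0.0
    for slots in service.values():
        if len(slots) > 1:
            gaps = [b - a for a, b in zip(slots, slots[1:])]
            jitter += statistics.pvariance(gaps)
    return throughput - 0.5 * jitter          # weighting is an arbitrary assumption

def crossover(a, b):
    cut = random.randrange(1, N_SLOTS)
    return [row[:] for row in a[:cut]] + [row[:] for row in b[cut:]]

def mutate(sched):
    for slot in sched:
        for src in range(N_NODES):
            if random.random() < P_MUT:
                slot[src] = random.choice([d for d in range(N_NODES) if d != src])
    return sched

pop = [random_schedule() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(POP - len(elite))]
print("best fitness:", fitness(max(pop, key=fitness)))
```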

  8. Electrolytes with Improved Safety Characteristics for High Voltage, High Specific Energy Li-ion Cells

    NASA Technical Reports Server (NTRS)

    Smart, M. C.; Krause, F. C.; Hwang, C.; West, W. C.; Soler, J.; Whitcanack, L. W.; Prakash, G. K. S.; Ratnakumar, B. V.

    2012-01-01

    (1) NASA is actively pursuing the development of advanced electrochemical energy storage and conversion devices for future lunar and Mars missions; (2) The Exploration Technology Development Program, Energy Storage Project is sponsoring the development of advanced Li-ion batteries and PEM fuel cell and regenerative fuel cell systems for the Altair Lunar Lander, Extravehicular Activities (EVA), and rovers and as the primary energy storage system for Lunar Surface Systems; (3) At JPL, in collaboration with NASA-GRC, NASA-JSC and industry, we are actively developing advanced Li-ion batteries with improved specific energy, energy density and safety. One effort is focused upon developing Li-ion battery electrolyte with enhanced safety characteristics (i.e., low flammability); and (4) A number of commercial applications also require Li-ion batteries with enhanced safety, especially for automotive applications.

  9. New Design Methods And Algorithms For High Energy-Efficient And Low-cost Distillation Processes

    SciTech Connect

    Agrawal, Rakesh

    2013-11-21

    This project sought and successfully answered two big challenges facing the creation of low-energy, cost-effective, zeotropic multi-component distillation processes: first, identification of an efficient search space that includes all the useful distillation configurations and no undesired configurations; second, development of an algorithm to search the space efficiently and generate an array of low-energy options for industrial multi-component mixtures. Such mixtures are found in large-scale chemical and petroleum plants. Commercialization of our results was addressed by building a user interface allowing practical application of our methods for industrial problems by anyone with basic knowledge of distillation for a given problem. We also provided our algorithm to a major U.S. Chemical Company for use by the practitioners. The successful execution of this program has provided methods and algorithms at the disposal of process engineers to readily generate low-energy solutions for a large class of multicomponent distillation problems in a typical chemical and petrochemical plant. In a petrochemical complex, the distillation trains within crude oil processing, hydrotreating units containing alkylation, isomerization, reformer, LPG (liquefied petroleum gas) and NGL (natural gas liquids) processing units can benefit from our results. Effluents from naphtha crackers and ethane-propane crackers typically contain mixtures of methane, ethylene, ethane, propylene, propane, butane and heavier hydrocarbons. We have shown that our systematic search method with a more complete search space, along with the optimization algorithm, has a potential to yield low-energy distillation configurations for all such applications with energy savings up to 50%.

  10. Establishment of an Algorithm Using prM/E- and NS1-Specific IgM Antibody-Capture Enzyme-Linked Immunosorbent Assays in Diagnosis of Japanese Encephalitis Virus and West Nile Virus Infections in Humans

    PubMed Central

    Galula, Jedhan U.; Chang, Gwong-Jen J.

    2015-01-01

    The front-line assay for the presumptive serodiagnosis of acute Japanese encephalitis virus (JEV) and West Nile virus (WNV) infections is the premembrane/envelope (prM/E)-specific IgM antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Due to antibody cross-reactivity, MAC-ELISA-positive samples may be confirmed with a time-consuming plaque reduction neutralization test (PRNT). In the present study, we applied a previously developed anti-nonstructural protein 1 (NS1)-specific MAC-ELISA (NS1-MAC-ELISA) on archived acute-phase serum specimens from patients with confirmed JEV and WNV infections and compared the results with prM/E containing virus-like particle-specific MAC-ELISA (VLP-MAC-ELISA). Paired-receiver operating characteristic (ROC) curve analyses revealed no statistical differences in the overall assay performances of the VLP- and NS1-MAC-ELISAs. The two methods had high sensitivities of 100% but slightly lower specificities that ranged between 80% and 100%. When the NS1-MAC-ELISA was used to confirm positive results in the VLP-MAC-ELISA, the specificity of serodiagnosis, especially for JEV infection, was increased to 90% when applied in areas where JEV cocirculates with WNV, or to 100% when applied in areas that were endemic for JEV. The results also showed that using multiple antigens could resolve the cross-reactivity in the assays. Significantly higher positive-to-negative (P/N) values were consistently obtained with the homologous antigens than those with the heterologous antigens. JEV or WNV was reliably identified as the currently infecting flavivirus by a higher ratio of JEV-to-WNV P/N values or vice versa. In summary of the above-described results, the diagnostic algorithm combining the use of multiantigen VLP- and NS1-MAC-ELISAs was developed and can be practically applied to obtain a more specific and reliable result for the serodiagnosis of JEV and WNV infections without the need for PRNT. The developed algorithm should provide great
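    A hedged sketch of the combined-assay logic is given below: a VLP-MAC-ELISA positive is confirmed with the NS1-MAC-ELISA, and the infecting flavivirus is called from the ratio of homologous to heterologous P/N values. The positivity cut-off and ratio threshold are placeholders, not the study's validated values.

```python
# Hedged sketch of a two-step JEV/WNV serodiagnostic call from P/N values.
def flavivirus_call(vlp_pn_jev, vlp_pn_wnv, ns1_pn_jev, ns1_pn_wnv,
                    positive_cutoff=2.0, ratio_threshold=1.0):
    vlp_positive = max(vlp_pn_jev, vlp_pn_wnv) >= positive_cutoff
    ns1_positive = max(ns1_pn_jev, ns1_pn_wnv) >= positive_cutoff
    if not (vlp_positive and ns1_positive):
        return "negative / unconfirmed"
    # a higher homologous P/N ratio indicates the currently infecting flavivirus
    jev_score = (vlp_pn_jev / vlp_pn_wnv + ns1_pn_jev / ns1_pn_wnv) / 2
    return "JEV" if jev_score > ratio_threshold else "WNV"

print(flavivirus_call(vlp_pn_jev=8.0, vlp_pn_wnv=3.0, ns1_pn_jev=6.5, ns1_pn_wnv=2.0))
```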

  11. [The High Precision Analysis Research of Multichannel BOTDR Scattering Spectral Information Based on the TTDF and CNS Algorithm].

    PubMed

    Zhang, Yan-jun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong

    2015-07-01

    A traditional BOTDR optical fiber sensing system uses a single sensing fiber channel to measure the information features. Uncontrolled factors such as cross-sensitivity can lead to lower scattering-spectrum fitting precision and degrade the information analysis. Therefore, a BOTDR system that detects multichannel sensor information at the same time is proposed, together with a scattering-spectrum analysis method for the multichannel Brillouin optical time-domain reflection (BOTDR) sensing system that extracts high-precision spectral features. This method combines three-times data fusion (TTDF) and the cuckoo Newton search (CNS) algorithm. First, according to the Dixon and Grubbs criteria, the method uses the data-fusion ability of the TTDF algorithm to eliminate the influence of abnormal values and reduce the error signal. Second, it uses the cuckoo Newton search algorithm to improve the spectrum fitting and enhance the accuracy of the Brillouin scattering-spectrum analysis. The global optimal solution is obtained by cuckoo search; using this solution as the initial value of the Newton algorithm for local optimization ensures the spectrum fitting precision. Information extraction at different linewidths is analyzed for the temperature scattering spectrum under a linear weight ratio of 1:9. The variance of the multichannel data fusion is about 0.0030, the center frequency of the scattering spectrum is 11.213 GHz, and the temperature error is less than 0.15 K. Theoretical analysis and simulation results show that the algorithm can be used in multichannel distributed optical fiber sensing systems based on Brillouin optical time-domain reflection. It can effectively improve the accuracy of multichannel sensing signals and the precision of Brillouin scattering-spectrum analysis. PMID:26717729
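    The "global search, then derivative-based local refinement" idea can be sketched as below, where a crude random (cuckoo-style) search seeds a least-squares fit of a Lorentzian Brillouin gain spectrum around 11.2 GHz. The spectral model, noise level and search ranges are illustrative assumptions; this is not the paper's TTDF/CNS implementation.

```python
# Hedged sketch: random global search seeding a local least-squares Lorentzian fit.
import numpy as np
from scipy.optimize import least_squares

def lorentzian(f, f_b, width, amp):
    return amp / (1.0 + ((f - f_b) / (width / 2.0)) ** 2)

rng = np.random.default_rng(1)
freq = np.linspace(11.0, 11.4, 400)                     # GHz
truth = (11.213, 0.05, 1.0)                             # centre, linewidth, amplitude
data = lorentzian(freq, *truth) + rng.normal(0, 0.02, freq.size)

def residuals(p):
    return lorentzian(freq, *p) - data

# crude global stage: random "nests", keep the best one
nests = np.column_stack([rng.uniform(11.0, 11.4, 200),
                         rng.uniform(0.01, 0.2, 200),
                         rng.uniform(0.5, 1.5, 200)])
seed = min(nests, key=lambda p: np.sum(residuals(p) ** 2))

# local stage: derivative-based refinement from the global seed
fit = least_squares(residuals, seed)
print("estimated Brillouin frequency shift:", round(fit.x[0], 4), "GHz")
```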

  12. An autonomous navigation algorithm for high orbit satellite using star sensor and ultraviolet earth sensor.

    PubMed

    Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu

    2013-01-01

    An autonomous navigation algorithm using a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2) is presented. The star images are sampled by FOV1, and the ultraviolet earth images are sampled by FOV2. The star identification and star tracking algorithms are executed on FOV1, and the optical axis direction of FOV1 in the J2000.0 coordinate system is then calculated. The ultraviolet image of the earth is sampled by FOV2, and the center vector of the earth in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth. The autonomous navigation data of the satellite are calculated by the integrated sensor from the optical axis direction of FOV1 and the center vector of the earth from FOV2. The position accuracy of the autonomous navigation for the satellite is improved from 1000 meters to 300 meters, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sine errors of the autonomous navigation are eliminated. The autonomous navigation for a satellite with a sensor that integrates an ultraviolet earth sensor and a star sensor is thus robust.

  13. Prospective validation of a 1-hour algorithm to rule-out and rule-in acute myocardial infarction using a high-sensitivity cardiac troponin T assay

    PubMed Central

    Reichlin, Tobias; Twerenbold, Raphael; Wildi, Karin; Gimenez, Maria Rubini; Bergsma, Nathalie; Haaf, Philip; Druey, Sophie; Puelacher, Christian; Moehring, Berit; Freese, Michael; Stelzig, Claudia; Krivoshei, Lian; Hillinger, Petra; Jäger, Cedric; Herrmann, Thomas; Kreutzinger, Philip; Radosavac, Milos; Weidmann, Zoraida Moreno; Pershyna, Kateryna; Honegger, Ursina; Wagener, Max; Vuillomenet, Thierry; Campodarve, Isabel; Bingisser, Roland; Miró, Òscar; Rentsch, Katharina; Bassetti, Stefano; Osswald, Stefan; Mueller, Christian

    2015-01-01

    Background: We aimed to prospectively validate a novel 1-hour algorithm using high-sensitivity cardiac troponin T measurement for early rule-out and rule-in of acute myocardial infarction (MI). Methods: In a multicentre study, we enrolled 1320 patients presenting to the emergency department with suspected acute MI. The high-sensitivity cardiac troponin T 1-hour algorithm, incorporating baseline values as well as absolute changes within the first hour, was validated against the final diagnosis. The final diagnosis was adjudicated by 2 independent cardiologists using all available information, including coronary angiography, echocardiography, follow-up data and serial measurements of high-sensitivity cardiac troponin T levels. Results: Acute MI was the final diagnosis in 17.3% of patients. With application of the high-sensitivity cardiac troponin T 1-hour algorithm, 786 (59.5%) patients were classified as “rule-out,” 216 (16.4%) were classified as “rule-in” and 318 (24.1%) were classified into the “observational zone.” The sensitivity and the negative predictive value for acute MI in the rule-out zone were 99.6% (95% confidence interval [CI] 97.6%–99.9%) and 99.9% (95% CI 99.3%–100%), respectively. The specificity and the positive predictive value for acute MI in the rule-in zone were 95.7% (95% CI 94.3%–96.8%) and 78.2% (95% CI 72.1%–83.6%), respectively. The 1-hour algorithm provided higher negative and positive predictive values than the standard interpretation of high-sensitivity cardiac troponin T using a single cut-off level (both p < 0.05). Cumulative 30-day mortality was 0.0%, 1.6% and 1.9% in patients classified in the rule-out, observational and rule-in groups, respectively (p = 0.001). Interpretation: This rapid strategy incorporating high-sensitivity cardiac troponin T baseline values and absolute changes within the first hour substantially accelerated the management of suspected acute MI by allowing safe rule-out as well as accurate
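    A hedged illustration of how a baseline/1-hour-change triage rule of this form could be coded is shown below. The cut-off values are hypothetical placeholders, not the validated assay-specific thresholds of the study.

```python
# Hedged sketch of a rule-out / rule-in / observe triage based on hs-cTnT values.
def triage_hs_ctnt(baseline_ng_l: float, delta_1h_ng_l: float) -> str:
    RULE_OUT_BASELINE = 12   # placeholder thresholds in ng/L, not the study's values
    RULE_OUT_DELTA = 3
    RULE_IN_BASELINE = 52
    RULE_IN_DELTA = 5
    if baseline_ng_l < RULE_OUT_BASELINE and abs(delta_1h_ng_l) < RULE_OUT_DELTA:
        return "rule-out"
    if baseline_ng_l >= RULE_IN_BASELINE or abs(delta_1h_ng_l) >= RULE_IN_DELTA:
        return "rule-in"
    return "observational zone"

print(triage_hs_ctnt(baseline_ng_l=8, delta_1h_ng_l=1))   # -> rule-out
```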

  14. Modified Omega-k Algorithm for High-Speed Platform Highly-Squint Staggered SAR Based on Azimuth Non-Uniform Interpolation

    PubMed Central

    Zeng, Hong-Cheng; Chen, Jie; Liu, Wei; Yang, Wei

    2015-01-01

    In this work, the staggered SAR technique is employed for high-speed platform highly-squint SAR by varying the pulse repetition interval (PRI) as a linear function of range-walk. To focus the staggered SAR data more efficiently, a low-complexity modified Omega-k algorithm is proposed based on a novel method for optimal azimuth non-uniform interpolation, avoiding zero padding in range direction for recovering range cell migration (RCM) and saving in both data storage and computational load. An approximate model on continuous PRI variation with respect to sliding receive-window is employed in the proposed algorithm, leaving a residual phase error only due to the effect of a time-varying Doppler phase caused by staggered SAR. Then, azimuth non-uniform interpolation (ANI) at baseband is carried out to compensate the azimuth non-uniform sampling (ANS) effect resulting from continuous PRI variation, which is further followed by the modified Omega-k algorithm. The proposed algorithm has a significantly lower computational complexity, but with an equally effective imaging performance, as shown in our simulation results. PMID:25664433

  15. A simple greedy algorithm for reconstructing pedigrees.

    PubMed

    Cowell, Robert G

    2013-02-01

    This paper introduces a simple greedy algorithm for searching for high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but it is believed that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. The algorithm is applied to human and non-human genetic data and in a simulation study. PMID:23164633
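    A generic sketch of the greedy-search idea is given below: starting from a founders-only pedigree, the single parent-assignment move that most increases a likelihood score is applied repeatedly until no move improves it. The likelihood callback is a stub to be supplied by the caller (a real implementation would score the STR genotype data); this illustrates greedy hill-climbing, not the paper's scoring model.

```python
# Generic greedy hill-climbing over parent assignments, with a user-supplied score.
def greedy_pedigree_search(individuals, candidate_parents, log_likelihood):
    """
    individuals: list of individual ids
    candidate_parents: dict id -> iterable of ids that could be a parent of that individual
    log_likelihood: callable(pedigree) -> float, higher is better (user-supplied)
    """
    pedigree = {ind: None for ind in individuals}        # start with founders only
    improved = True
    while improved:
        improved = False
        best_move, best_score = None, log_likelihood(pedigree)
        for child in individuals:
            for parent in candidate_parents.get(child, []):
                if parent == child or pedigree[child] == parent:
                    continue
                trial = dict(pedigree)
                trial[child] = parent
                score = log_likelihood(trial)
                if score > best_score:
                    best_move, best_score = (child, parent), score
        if best_move is not None:
            child, parent = best_move
            pedigree[child] = parent
            improved = True
    return pedigree
```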

  16. Overall plant design specification Modular High Temperature Gas-cooled Reactor. Revision 9

    SciTech Connect

    1990-05-01

    Revision 9 of the "Overall Plant Design Specification Modular High Temperature Gas-Cooled Reactor," DOE-HTGR-86004 (OPDS), has been completed and is hereby distributed for use by the HTGR Program team members. This document, Revision 9 of the "Overall Plant Design Specification" (OPDS), reflects those changes in the MHTGR design requirements and configuration resulting from approved Design Change Proposals DCP BNI-003 and DCP BNI-004, involving the Nuclear Island Cooling and Spent Fuel Cooling Systems, respectively.

  17. Wide Operating Temperature Range Electrolytes for High Voltage and High Specific Energy Li-Ion Cells

    NASA Technical Reports Server (NTRS)

    Smart, M. C.; Hwang, C.; Krause, F. C.; Soler, J.; West, W. C.; Ratnakumar, B. V.; Amine, K.

    2012-01-01

    A number of electrolyte formulations that have been designed to operate over a wide temperature range have been investigated in conjunction with layered-layered metal oxide cathode materials developed at Argonne. In this study, we have evaluated a number of electrolytes in Li-ion cells consisting of Conoco Phillips A12 graphite anodes and Toda HE5050 Li(1.2)Ni(0.15)Co(0.10)Mn(0.55)O2 cathodes. The electrolytes studied consisted of LiPF6 in carbonate-based electrolytes that contain ester co-solvents with various solid electrolyte interphase (SEI) promoting additives, many of which have been demonstrated to perform well in 4V systems. More specifically, we have investigated the performance of a number of methyl butyrate (MB) containing electrolytes (i.e., LiPF6 in ethylene carbonate (EC) + ethyl methyl carbonate (EMC) + MB (20:20:60 v/v %) that contain various additives, including vinylene carbonate, lithium oxalate, and lithium bis(oxalato)borate (LiBOB). When these systems were evaluated at various rates at low temperatures, the methyl butyrate-based electrolytes resulted in improved rate capability compared to cells with all carbonate-based formulations. It was also ascertained that the slow cathode kinetics govern the generally poor rate capability at low temperature in contrast to traditionally used LiNi(0.80)Co(0.15)Al(0.05)O2-based systems, rather than being influenced strongly by the electrolyte type.

  18. AxonQuant: A Microfluidic Chamber Culture-Coupled Algorithm That Allows High-Throughput Quantification of Axonal Damage

    PubMed Central

    Li, Yang; Yang, Mengxue; Huang, Zhuo; Chen, Xiaoping; Maloney, Michael T.; Zhu, Li; Liu, Jianghong; Yang, Yanmin; Du, Sidan; Jiang, Xingyu; Wu, Jane Y.

    2014-01-01

    Published methods for imaging and quantitatively analyzing morphological changes in neuronal axons have serious limitations because of their small sample sizes, and their time-consuming and nonobjective nature. Here we present an improved microfluidic chamber design suitable for fast and high-throughput imaging of neuronal axons. We developed the Axon-Quant algorithm, which is suitable for automatic processing of axonal imaging data. This microfluidic chamber-coupled algorithm allows calculation of an ‘axonal continuity index’ that quantitatively measures axonal health status in a manner independent of neuronal or axonal density. This method allows quantitative analysis of axonal morphology in an automatic and nonbiased manner. Our method will facilitate large-scale high-throughput screening for genes or therapeutic compounds for neurodegenerative diseases involving axonal damage. When combined with imaging technologies utilizing different gene markers, this method will provide new insights into the mechanistic basis for axon degeneration. Our microfluidic chamber culture-coupled AxonQuant algorithm will be widely useful for studying axonal biology and neurodegenerative disorders. PMID:24603552

  19. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, e.g., fluid dynamics in microfluidic devices, bacterial taxis and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections. PMID:26329642
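    A minimal sketch of the circle Hough transform that the algorithm builds on is shown below: edge pixels vote for candidate ring centres at a fixed radius, and accumulator peaks above a vote threshold give candidate centres. The radius, angular sampling and vote threshold are illustrative assumptions, not the published implementation.

```python
# Minimal circle Hough transform: edge pixels vote for centres at a fixed radius.
import numpy as np

def hough_circle_accumulator(edge_mask, radius):
    """edge_mask: 2-D boolean array of edge pixels; returns a vote accumulator."""
    h, w = edge_mask.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        keep = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[keep], cx[keep]), 1)
    return acc

def detect_ring_centres(edge_mask, radius, min_votes=40):
    acc = hough_circle_accumulator(edge_mask, radius)
    return np.argwhere(acc >= min_votes)       # candidate (row, col) centres
```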

  20. Hybrid-PIC Modeling of a High-Voltage, High-Specific-Impulse Hall Thruster

    NASA Technical Reports Server (NTRS)

    Smith, Brandon D.; Boyd, Iain D.; Kamhawi, Hani; Huang, Wensheng

    2013-01-01

    The primary life-limiting mechanism of Hall thrusters is the sputter erosion of the discharge channel walls by high-energy propellant ions. Because of the difficulty involved in characterizing this erosion experimentally, many past efforts have focused on numerical modeling to predict erosion rates and thruster lifespan, but those analyses were limited to Hall thrusters operating in the 200-400V discharge voltage range. Thrusters operating at higher discharge voltages (V(sub d) >= 500 V) present an erosion environment that may differ greatly from that of the lower-voltage thrusters modeled in the past. In this work, HPHall, a well-established hybrid-PIC code, is used to simulate NASA's High-Voltage Hall Accelerator (HiVHAc) at discharge voltages of 300, 400, and 500V as a first step towards modeling the discharge channel erosion. It is found that the model accurately predicts the thruster performance at all operating conditions to within 6%. The model predicts a normalized plasma potential profile that is consistent between all three operating points, with the acceleration zone appearing in the same approximate location. The expected trend of increasing electron temperature with increasing discharge voltage is observed. An analysis of the discharge current oscillations shows that the model predicts oscillations that are much greater in amplitude than those measured experimentally at all operating points, suggesting that the differences in oscillation amplitude are not strongly associated with discharge voltage.

  1. The application of Quadtree algorithm to information integration for geological disposal of high-level radioactive waste

    NASA Astrophysics Data System (ADS)

    Gao, Min; Huang, Shutao; Zhong, Xia

    2010-11-01

    A multi-source database was established to promote the informatics process for the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on both computer software and hardware. Based on an analysis of the data resources in the Beishan area, Gansu Province, and combining GIS technologies and methods, this paper discusses the technical ideas of how to manage, fully share and rapidly retrieve the information resources in this area by using the open-source GDAL library and a Quadtree algorithm, especially in terms of the characteristics of the existing data resources, spatial data retrieval algorithm theory, and programming design and implementation.
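    A hedged sketch of a point-region quadtree of the kind used for spatial data retrieval is shown below: a node subdivides once it exceeds its capacity, and a range query descends only into quadrants that intersect the query rectangle. The capacity and bounding box are illustrative parameters; the actual GDAL-based implementation is not reproduced here.

```python
# Hedged sketch of a point-region quadtree for spatial range queries.
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None

    def _intersects(self, x0, y0, x1, y1):
        bx0, by0, bx1, by1 = self.bounds
        return not (x1 < bx0 or x0 > bx1 or y1 < by0 or y0 > by1)

    def insert(self, x, y):
        bx0, by0, bx1, by1 = self.bounds
        if not (bx0 <= x <= bx1 and by0 <= y <= by1):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._subdivide()
        return any(child.insert(x, y) for child in self.children)

    def _subdivide(self):
        bx0, by0, bx1, by1 = self.bounds
        mx, my = (bx0 + bx1) / 2, (by0 + by1) / 2
        self.children = [QuadTree(bx0, by0, mx, my, self.capacity),
                         QuadTree(mx, by0, bx1, my, self.capacity),
                         QuadTree(bx0, my, mx, by1, self.capacity),
                         QuadTree(mx, my, bx1, by1, self.capacity)]
        for px, py in self.points:              # push stored points down to children
            any(child.insert(px, py) for child in self.children)
        self.points = []

    def query(self, x0, y0, x1, y1):
        """Return all stored points inside the query rectangle."""
        if not self._intersects(x0, y0, x1, y1):
            return []
        hits = [(px, py) for px, py in self.points if x0 <= px <= x1 and y0 <= py <= y1]
        if self.children is not None:
            for child in self.children:
                hits.extend(child.query(x0, y0, x1, y1))
        return hits
```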

  2. The application of Quadtree algorithm to information integration for geological disposal of high-level radioactive waste

    NASA Astrophysics Data System (ADS)

    Gao, Min; Huang, Shutao; Zhong, Xia

    2009-09-01

    A multi-source database was established to promote the informatics process for the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on both computer software and hardware. Based on an analysis of the data resources in the Beishan area, Gansu Province, and combining GIS technologies and methods, this paper discusses the technical ideas of how to manage, fully share and rapidly retrieve the information resources in this area by using the open-source GDAL library and a Quadtree algorithm, especially in terms of the characteristics of the existing data resources, spatial data retrieval algorithm theory, and programming design and implementation.

  3. Comparative Analysis of CNV Calling Algorithms: Literature Survey and a Case Study Using Bovine High-Density SNP Data

    PubMed Central

    Xu, Lingyang; Hou, Yali; Bickhart, Derek M.; Song, Jiuzhou; Liu, George E.

    2013-01-01

    Copy number variations (CNVs) are gains and losses of genomic sequence between two individuals of a species when compared to a reference genome. The data from single nucleotide polymorphism (SNP) microarrays are now routinely used for genotyping, but they also can be utilized for copy number detection. Substantial progress has been made in array design and CNV calling algorithms and at least 10 comparison studies in humans have been published to assess them. In this review, we first survey the literature on existing microarray platforms and CNV calling algorithms. We then examine a number of CNV calling tools to evaluate their impacts using bovine high-density SNP data. Large incongruities in the results from different CNV calling tools highlight the need for standardizing array data collection, quality assessment and experimental validation. Only after careful experimental design and rigorous data filtering can the impacts of CNVs on both normal phenotypic variability and disease susceptibility be fully revealed.

  4. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories, and Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  5. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
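    As a hedged illustration of high-order explicit finite differencing (not the specific schemes of the paper), the snippet below applies a standard fourth-order central stencil to a periodic signal and checks that the error shrinks by roughly a factor of 16 when the grid spacing is halved.

```python
# Fourth-order central finite difference with a simple convergence-order check.
import numpy as np

def d1_central_4th(f, dx):
    """4th-order central difference of the first derivative of a periodic sample array f."""
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * dx)

errors = []
for n in (64, 128, 256):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    err = np.max(np.abs(d1_central_4th(np.sin(x), dx) - np.cos(x)))
    errors.append(err)

# successive error ratios should approach 2**4 = 16 for a 4th-order scheme
print([errors[i] / errors[i + 1] for i in range(len(errors) - 1)])
```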

  6. A New Chest Compression Depth Feedback Algorithm for High-Quality CPR Based on Smartphone

    PubMed Central

    Song, Yeongtak; Oh, Jaehoon

    2015-01-01

    Background: Although many smartphone application (app) programs provide education and guidance for basic life support, they do not commonly provide feedback on chest compression depth (CCD) and rate, and the accuracy of such feedback has not been validated to date. This study was a feasibility assessment of the use of a smartphone as a CCD feedback device. We proposed the concept of a new real-time CCD estimation algorithm using a smartphone and evaluated the accuracy of the algorithm. Materials and Methods: Using double integration of the acceleration signal obtained from the accelerometer in the smartphone, we estimated the CCD in real time. Based on its periodicity, we removed the bias error from the accelerometer. To evaluate the instrument's accuracy, we used a potentiometer as the reference depth measurement. The evaluation experiments included three levels of CCD (insufficient, adequate, and excessive) and four types of grasping orientations with various compression directions. We used the difference between the reference measurement and the estimated depth as the error, calculated for each compression. Results: When chest compressions were performed with adequate depth for a patient lying on a flat floor, the mean (standard deviation) of the errors was 1.43 (1.00) mm. When the patient was lying on an oblique floor, the mean (standard deviation) of the errors was 3.13 (1.88) mm. Conclusions: The error of the CCD estimation was tolerable for the algorithm to be used in the smartphone-based CCD feedback app for compressions of more than 51 mm, the depth specified in the 2010 American Heart Association guidelines. PMID:25402865
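    The core depth-estimation idea can be sketched as below: the vertical acceleration over one compression cycle is double-integrated, and a per-cycle linear drift is removed so the displacement returns to zero at the end of the cycle, exploiting the periodicity of CPR. The sampling rate, drift model and synthetic test signal are illustrative assumptions, not the app's implementation.

```python
# Hedged sketch of compression-depth estimation by double integration of acceleration.
import numpy as np

def cycle_depth_mm(accel_ms2, fs_hz=100.0):
    """Estimate peak compression depth (mm) from one compression cycle of acceleration."""
    dt = 1.0 / fs_hz
    accel = accel_ms2 - np.mean(accel_ms2)              # remove constant bias (offset, gravity)
    velocity = np.cumsum(accel) * dt
    displacement = np.cumsum(velocity) * dt
    # force the displacement back to zero at the end of the cycle (linear drift removal)
    displacement -= np.linspace(0.0, displacement[-1], displacement.size)
    return float(np.max(np.abs(displacement)) * 1000.0)

# synthetic 2 Hz compression cycle, about 50 mm deep, as a quick sanity check
fs = 100.0
t = np.arange(0, 0.5, 1 / fs)
accel = 0.025 * (2 * np.pi * 2) ** 2 * np.cos(2 * np.pi * 2 * t)   # from x(t) = 0.025*(1 - cos(4*pi*t)) m
print(round(cycle_depth_mm(accel, fs), 1), "mm (expected roughly 50 mm)")
```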

  7. High-throughput time-stretch microscopy with morphological and chemical specificity

    NASA Astrophysics Data System (ADS)

    Lei, Cheng; Ugawa, Masashi; Nozawa, Taisuke; Ideguchi, Takuro; Di Carlo, Dino; Ota, Sadao; Ozeki, Yasuyuki; Goda, Keisuke

    2016-03-01

    Particle analysis is an effective method in analytical chemistry for sizing and counting microparticles such as emulsions, colloids, and biological cells. However, conventional methods for particle analysis, which fall into two extreme categories, have severe limitations. Sieving and Coulter counting are capable of analyzing particles with high throughput, but due to their lack of detailed information such as morphological and chemical characteristics, they can only provide statistical results with low specificity. On the other hand, CCD or CMOS image sensors can be used to analyze individual microparticles with high content, but due to their slow charge download, the frame rate (hence, the throughput) is significantly limited. Here by integrating a time-stretch optical microscope with a three-color fluorescent analyzer on top of an inertial-focusing microfluidic device, we demonstrate an optofluidic particle analyzer with a sub-micrometer spatial resolution down to 780 nm and a high throughput of 10,000 particles/s. In addition to its morphological specificity, the particle analyzer provides chemical specificity to identify chemical expressions of particles via fluorescence detection. Our results indicate that we can identify different species of microparticles with high specificity without sacrificing throughput. Our method holds promise for high-precision statistical particle analysis in chemical industry and pharmaceutics.

  8. An oscillograms processing algorithm of a high power transformer on the basis of experimental data

    NASA Astrophysics Data System (ADS)

    Vasileva, O. V.; Budko, A. A.; Lavrinovich, A. V.

    2016-04-01

    The paper presents studies on digital processing of oscillograms of power transformer operation, allowing the state of the windings to be determined for different types and degrees of damage. The study was carried out according to the authors' own methods, using Fourier analysis and a program developed on the basis of the MathCAD and LabVIEW application software packages. The efficiency of the algorithm is demonstrated using example waveforms of non-defective and defective transformers obtained by the nanosecond-pulse method.
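    The Fourier-analysis step can be sketched as below: the amplitude spectra of a reference (non-defective) winding response and a measured response to a nanosecond pulse are compared, and a simple normalized spectral deviation is reported. The sampling rate, windowing and deviation metric are illustrative assumptions, not the authors' exact processing chain.

```python
# Hedged sketch: compare amplitude spectra of reference and measured winding responses.
import numpy as np

def amplitude_spectrum(signal, fs_hz):
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs_hz)
    return freqs, spectrum

def spectral_deviation(reference, measured, fs_hz):
    _, ref = amplitude_spectrum(reference, fs_hz)
    _, mea = amplitude_spectrum(measured, fs_hz)
    return float(np.linalg.norm(mea - ref) / np.linalg.norm(ref))

# synthetic example: a defective winding shifts one resonance from 1.0 MHz to 1.2 MHz
fs = 50e6
t = np.arange(0, 20e-6, 1 / fs)
healthy = np.exp(-t / 5e-6) * np.sin(2 * np.pi * 1.0e6 * t)
faulty = np.exp(-t / 5e-6) * np.sin(2 * np.pi * 1.2e6 * t)
print("deviation score:", round(spectral_deviation(healthy, faulty, fs), 3))
```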

  9. Highly improved specificity for hybridization-based microRNA detection by controlled surface dissociation.

    PubMed

    Yoon, Hye Ryeon; Lee, Jeong Min; Jung, Juyeon; Lee, Chang-Soo; Chung, Bong Hyun; Jung, Yongwon

    2014-01-01

    Poor specificity has been a lingering problem in many microRNA profiling methods, particularly surface hybridization-based methods such as microarrays. Here, we carefully investigated surface hybridization and dissociation processes of a number of sequentially similar microRNAs against nucleic acid capture probes. Single-base mismatched microRNAs were similarly hybridized to a complementary DNA capture probe and thereby poorly discriminated during conventional stringent hybridization. Interestingly, however, mismatched microRNAs showed significantly faster dissociation from the probe than the perfectly matched microRNA. Systematic analysis of various washing conditions clearly demonstrated that extremely high specificity can be obtained by releasing non-specific microRNAs from assay surfaces during a stringent and controlled dissociation step. For instance, compared with stringent hybridization, surface dissociation control provided up to 6-fold better specificity for Let-7a detection than for other Let-7 family microRNAs. In addition, a synthetically introduced single-base mismatch on miR206 was almost completely discriminated by optimized surface dissociation of captured microRNAs, while this mismatch was barely distinguished from target miR206 during stringent hybridization. Furthermore, a single dissociation condition was successfully used to simultaneously measure four different microRNAs with extremely high specificity using melting temperature-equalized capture probes. The present study on selective dissociation of surface bound microRNAs can be easily applied to various hybridization based detection methods for improved specificity.

  10. Novel method for the high-throughput production of phosphorylation site-specific monoclonal antibodies

    PubMed Central

    Kurosawa, Nobuyuki; Wakata, Yuka; Inobe, Tomonao; Kitamura, Haruki; Yoshioka, Megumi; Matsuzawa, Shun; Kishi, Yoshihiro; Isobe, Masaharu

    2016-01-01

    Threonine phosphorylation accounts for 10% of all phosphorylation sites compared with 0.05% for tyrosine and 90% for serine. Although monoclonal antibody generation for phospho-serine and -tyrosine proteins is progressing, there has been limited success regarding the production of monoclonal antibodies against phospho-threonine proteins. We developed a novel strategy for generating phosphorylation site-specific monoclonal antibodies by cloning immunoglobulin genes from single plasma cells that were fixed, intracellularly stained with fluorescently labeled peptides and sorted without causing RNA degradation. Our high-throughput fluorescence activated cell sorting-based strategy, which targets abundant intracellular immunoglobulin as a tag for fluorescently labeled antigens, greatly increases the sensitivity and specificity of antigen-specific plasma cell isolation, enabling the high-efficiency production of monoclonal antibodies with desired antigen specificity. This approach yielded yet-undescribed guinea pig monoclonal antibodies against threonine 18-phosphorylated p53 and threonine 68-phosphorylated CHK2 with high affinity and specificity. Our method has the potential to allow the generation of monoclonal antibodies against a variety of phosphorylated proteins. PMID:27125496

  11. Adaptive Algorithm for Soil Moisture Retrieval in Agricultural and Mountainous Areas with High Resolution ASAR Images

    NASA Astrophysics Data System (ADS)

    Notarnicola, C.; Paloscia, S.; Pettinato, S.; Preziosa, G.; Santi, E.; Ventura, B.

    2010-12-01

    In this paper, extensive data sets of SAR images and related ground truth over three areas characterized by very different surface features have been analyzed in order to understand the ENVISAT/ASAR responses to different soil, environmental and seasonal conditions. The comparison of the backscattering coefficients as a function of soil moisture for all the analyzed datasets indicates the same sensitivity to soil moisture variations but with different biases, which may depend on soil characteristics, vegetation presence and roughness effects. A further comparison with historical data collected on bare soils with comparable roughness at the same frequency, polarization and incidence angle confirmed that the different surface features affect the bias of the relationship, while the backscattering sensitivity to the SMC remains quite constant. These different bias values have been used to determine an adaptive term to be added in the electromagnetic formulation of the backscattering responses from natural surfaces, obtained by using the Integral Equation Model (IEM). The simulated data from this model have then been used to train a neural network as an inversion algorithm. The paper presents the results from this new technique in comparison to neural network and Bayesian algorithms trained on one area and then tested on the other ones.

  12. The Evaluation of a Rapid In Situ HIV Confirmation Test in a Programme with a High Failure Rate of the WHO HIV Two-Test Diagnostic Algorithm

    PubMed Central

    Klarkowski, Derryck B.; Wazome, Joseph M.; Lokuge, Kamalini M.; Shanks, Leslie; Mills, Clair F.; O'Brien, Daniel P.

    2009-01-01

    Background Concerns about false-positive HIV results led to a review of testing procedures used in a Médecins Sans Frontières (MSF) HIV programme in Bukavu, eastern Democratic Republic of Congo. In addition to the WHO HIV rapid diagnostic test algorithm (RDT) (two positive RDTs alone for HIV diagnosis) used in voluntary counselling and testing (VCT) sites we evaluated in situ a practical field-based confirmation test against western blot WB. In addition, we aimed to determine the false-positive rate of the WHO two-test algorithm compared with our adapted protocol including confirmation testing, and whether weakly reactive compared with strongly reactive rapid test results were more likely to be false positives. Methodology/Principal Findings 2864 clients presenting to MSF VCT centres in Bukavu during January to May 2006 were tested using Determine HIV-1/2® and UniGold HIV® rapid tests in parallel by nurse counsellors. Plasma samples on 229 clients confirmed as double RDT positive by laboratory retesting were further tested using both WB and the Orgenics Immunocomb Combfirm® HIV confirmation test (OIC-HIV). Of these, 24 samples were negative or indeterminate by WB representing a false-positive rate of the WHO two-test algorithm of 10.5% (95%CI 6.6-15.2). 17 of the 229 samples were weakly positive on rapid testing and all were negative or indeterminate by WB. The false-positive rate fell to 3.3% (95%CI 1.3–6.7) when only strong-positive rapid test results were considered. Agreement between OIC-HIV and WB was 99.1% (95%CI 96.9–99.9%) with no false OIC-HIV positives if stringent criteria for positive OIC-HIV diagnoses were used. Conclusions The WHO HIV two-test diagnostic algorithm produced an unacceptably high level of false-positive diagnoses in our setting, especially if results were weakly positive. The most probable causes of the false-positive results were serological cross-reactivity or non-specific immune reactivity. Our findings show that the OIC

  13. Porous silicon structures with high surface area/specific pore size

    DOEpatents

    Northrup, M.A.; Yu, C.M.; Raley, N.F.

    1999-03-16

    Fabrication and use of porous silicon structures to increase surface area of heated reaction chambers, electrophoresis devices, and thermopneumatic sensor-actuators, chemical preconcentrates, and filtering or control flow devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters. 9 figs.

  14. Porous silicon structures with high surface area/specific pore size

    DOEpatents

    Northrup, M. Allen; Yu, Conrad M.; Raley, Norman F.

    1999-01-01

    Fabrication and use of porous silicon structures to increase surface area of heated reaction chambers, electrophoresis devices, and thermopneumatic sensor-actuators, chemical preconcentrates, and filtering or control flow devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gasses in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters.

  15. Structure-based Design of Peptides with High Affinity and Specificity to HER2 Positive Tumors

    PubMed Central

    Geng, Lingling; Wang, Zihua; Yang, Xiaoliang; Li, Dan; Lian, Wenxi; Xiang, Zhichu; Wang, Weizhi; Bu, Xiangli; Lai, Wenjia; Hu, Zhiyuan; Fang, Qiaojun

    2015-01-01

    To identify peptides with high affinity and specificity against human epidermal growth factor receptor 2 (HER2), a series of peptides were designed based on the structure of HER2 and its Z(HER2:342) affibody. By using a combination protocol of molecular dynamics modeling, MM/GBSA binding free energy calculations, and binding free energy decomposition analysis, two novel peptides with 27 residues, pep27 and pep27-24M, were successfully obtained. Immunocytochemistry and flow cytometry analysis verified that both peptides can specifically bind to the extracellular domain of HER2 protein at cellular level. The Surface Plasmon Resonance imaging (SPRi) analysis showed that dissociation constants (KD) of these two peptides were around 300 nmol/L. Furthermore, fluorescence imaging of peptides against nude mice xenografted with SKBR3 cells indicated that both peptides have strong affinity and high specificity to HER2 positive tumors. PMID:26284145

  16. Highly specific and sensitive electrochemical genotyping via gap ligation reaction and surface hybridization detection.

    PubMed

    Huang, Yong; Zhang, Yan-Li; Xu, Xiangmin; Jiang, Jian-Hui; Shen, Guo-Li; Yu, Ru-Qin

    2009-02-25

    This paper presents a novel electrochemical genotyping strategy based on gap ligation reaction with surface hybridization detection. This strategy utilized homogeneous enzymatic reactions to generate molecular beacon-structured allele-specific products that could be cooperatively annealed to capture probes stably immobilized on the surface via disulfide anchors, thus allowing ultrasensitive surface hybridization detection of the allele-specific products through redox tags in close proximity to the electrode. Such a unique biphasic architecture provided a universal methodology for incorporating enzymatic discrimination reactions in electrochemical genotyping with desirable reproducibility, high efficiency and no interference from interfacial steric hindrance. The developed technique was demonstrated to show intrinsic high sensitivity for direct genomic analysis, and excellent specificity with discrimination of single-nucleotide variations.

  17. 75 FR 33731 - Atlantic Highly Migratory Species; 2010 Atlantic Bluefin Tuna Quota Specifications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration 50 CFR Part 635 RIN 0648-AY77 Atlantic Highly Migratory Species; 2010 Atlantic Bluefin Tuna Quota Specifications Correction In rule document 2010-13207...

  18. Effects of Collaborative Preteaching on Science Performance of High School Students with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Thornton, Amanda; McKissick, Bethany R.; Spooner, Fred; Lo, Ya-yu; Anderson, Adrienne L.

    2015-01-01

    Investigating the effectiveness of inclusive practices in science instruction and determining how to best support high school students with specific learning disabilities (SLD) in the general education classroom is a topic of increasing research attention in the field. In this study, the researchers conducted a single-subject multiple probe across…

  19. Using the SCR Specification Technique in a High School Programming Course.

    ERIC Educational Resources Information Center

    Rosen, Edward; McKim, James C., Jr.

    1992-01-01

    Presents the underlying ideas of the Software Cost Reduction (SCR) approach to requirements specifications. Results of applying this approach to the teaching of programming to high school students indicate that students perform better in writing programs. An appendix provides two examples of how the method is applied to problem solving. (MDH)

  20. Application of wavelet neural network model based on genetic algorithm in the prediction of high-speed railway settlement

    NASA Astrophysics Data System (ADS)

    Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing

    2015-12-01

    With its high speed, large transport capacity, low energy consumption and good economic returns, high-speed rail is becoming increasingly popular worldwide. Operating speeds can reach 350 kilometers per hour, which places stringent demands on safety. Research on predicting high-speed railway settlement, one of the important factors affecting operational safety, is therefore particularly important. This paper uses a genetic algorithm to search the parameter space for the best solution and combines it with the strong learning ability and high accuracy of the wavelet neural network to build a genetic wavelet neural network model for predicting high-speed railway settlement. Experiments with a back-propagation neural network, a wavelet neural network and the genetic wavelet neural network show that the absolute residual errors of the genetic-algorithm-based prediction are the smallest, demonstrating that the genetic wavelet neural network outperforms the other two methods. The correlation coefficient between predicted and observed values is 99.9%. Furthermore, the maximum absolute residual error, the minimum absolute residual error, the mean relative error and the root mean squared error (RMSE) obtained with the genetic wavelet neural network are all smaller than those of the other two methods. The genetic wavelet neural network is therefore both more stable and more accurate for predicting high-speed railway settlement.
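
    A minimal Python sketch of the idea described above: a small wavelet neural network whose weights, dilations and translations are searched by a real-coded genetic algorithm. The Morlet wavelet, network size, GA operators and the synthetic settlement series are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def morlet(x):
    # Morlet mother wavelet, a common activation for wavelet neural networks
    return np.cos(1.75 * x) * np.exp(-0.5 * x ** 2)

def predict(params, X, n_hidden):
    # unpack a flat chromosome into input weights, dilations, translations and output weights
    n_in = X.shape[1]
    w_in = params[:n_in * n_hidden].reshape(n_in, n_hidden)
    a = params[n_in * n_hidden:n_in * n_hidden + n_hidden]                 # dilations
    b = params[n_in * n_hidden + n_hidden:n_in * n_hidden + 2 * n_hidden]  # translations
    w_out = params[-n_hidden:]
    hidden = morlet((X @ w_in - b) / (np.abs(a) + 1e-3))
    return hidden @ w_out

def rmse(params, X, y, n_hidden):
    return np.sqrt(np.mean((predict(params, X, n_hidden) - y) ** 2))

def ga_train(X, y, n_hidden=5, pop=60, gens=300, mut=0.05):
    n_params = X.shape[1] * n_hidden + 3 * n_hidden
    population = rng.normal(size=(pop, n_params))
    for _ in range(gens):
        fitness = np.array([rmse(p, X, y, n_hidden) for p in population])
        parents = population[np.argsort(fitness)[:pop // 2]]   # truncation selection
        pairs = rng.integers(0, len(parents), size=(pop, 2))
        alpha = rng.random((pop, 1))
        children = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
        children += mut * rng.normal(size=children.shape)       # Gaussian mutation
        children[0] = parents[0]                                # keep the best (elitism)
        population = children
    return min(population, key=lambda p: rmse(p, X, y, n_hidden))

# toy usage: predict the next settlement reading from the three previous ones
series = np.cumsum(rng.normal(0.3, 0.05, 60))                   # synthetic settlement record (mm)
X = np.column_stack([series[0:-3], series[1:-2], series[2:-1]])
y = series[3:]
X = (X - X.mean(axis=0)) / X.std(axis=0)                        # standardize inputs
y_mean, y_std = y.mean(), y.std()
best = ga_train(X, (y - y_mean) / y_std)
pred = predict(best, X, 5) * y_std + y_mean
print("training RMSE (mm):", np.sqrt(np.mean((pred - y) ** 2)))
```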

  1. An algorithmic approach to automated high-throughput identification of disulfide connectivity in proteins using tandem mass spectrometry.

    PubMed

    Lee, Timothy; Singh, Rahul; Yen, Ten-Yang; Macher, Bruce

    2007-01-01

    Knowledge of the pattern of disulfide linkages in a protein leads to a better understanding of its tertiary structure and biological function. At the state-of-the-art, liquid chromatography/electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS) can produce spectra of the peptides in a protein that are putatively joined by a disulfide bond. In this setting, efficient algorithms are required for matching the theoretical mass spaces of all possible bonded peptide fragments to the experimentally derived spectra to determine the number and location of the disulfide bonds. The algorithmic solution must also account for issues associated with interpreting experimental data from mass spectrometry, such as noise, isotopic variation, neutral loss, and charge state uncertainty. In this paper, we propose an algorithmic approach to high-throughput disulfide bond identification using data from mass spectrometry that addresses all the aforementioned issues in a unified framework. The complexity of the proposed solution is linear in the size of the input spectra. The efficacy and efficiency of the method was validated using experimental data derived from proteins with diverse disulfide linkage patterns.
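
    A minimal Python sketch of the core matching step implied above: theoretical masses of disulfide-bonded peptide pairs are compared against observed precursor masses within a ppm tolerance. The residue-mass table is truncated, the noise, isotope and charge-state handling described in the paper is omitted, and the peptide lists and tolerance are illustrative.

```python
from itertools import combinations_with_replacement

# monoisotopic residue masses (Da), truncated table; water is added once per peptide chain
RESIDUE = {"G": 57.02146, "A": 71.03711, "C": 103.00919, "K": 128.09496,
           "S": 87.03203, "T": 101.04768, "V": 99.06841, "L": 113.08406}
WATER = 18.01056
H2_LOSS = 2.01565          # two hydrogens are lost when the S-S bridge forms

def peptide_mass(seq):
    return sum(RESIDUE[aa] for aa in seq) + WATER

def disulfide_pair_mass(p1, p2):
    # neutral mass of two peptides joined by one disulfide bridge
    return peptide_mass(p1) + peptide_mass(p2) - H2_LOSS

def match_precursors(peptides, observed_masses, ppm=20.0):
    """Match observed neutral precursor masses against every cysteine-containing pair."""
    cys_peps = [p for p in peptides if "C" in p]
    hits = []
    for p1, p2 in combinations_with_replacement(cys_peps, 2):
        theo = disulfide_pair_mass(p1, p2)
        for obs in observed_masses:
            if abs(obs - theo) / theo * 1e6 <= ppm:
                hits.append((p1, p2, round(theo, 4), obs))
    return hits

# toy usage with two cysteine-containing fragments and one decoy peptide
peptides = ["ACK", "GCSK", "GAVL"]
observed = [disulfide_pair_mass("ACK", "GCSK") + 0.001]
print(match_precursors(peptides, observed))
```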

  2. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy in the calculation of dynamic contact angle for drops on an inclined surface, a significant number of numerical drop profiles on the inclined surface with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After extensive computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error in the inclined plane method to less than a certain value even for different types of liquids.
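
    A minimal Python sketch of the ellipse-fitting step: a general conic is fitted to drop-profile points by linear least squares, and the contact angle is obtained from the implicit tangent slope at the contact point. The conic constraint, the circular test profile and the way the drop side is chosen are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of A x^2 + B xy + C y^2 + D x + E y = 1 (i.e. F = -1)."""
    M = np.column_stack([x**2, x*y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    A, B, C, D, E = coeffs
    return A, B, C, D, E, -1.0

def contact_angle(conic, contact_pt, interior_pt, tilt_deg):
    """Angle at contact_pt between the inclined baseline and the fitted profile,
    measured through the drop; interior_pt (e.g. the apex) orients the vectors."""
    A, B, C, D, E, F = conic
    xc, yc = contact_pt
    # implicit differentiation of the conic: dy/dx = -(2Ax + By + D) / (Bx + 2Cy + E)
    dydx = -(2*A*xc + B*yc + D) / (B*xc + 2*C*yc + E)
    t = np.array([1.0, dydx]) / np.hypot(1.0, dydx)       # tangent to the drop profile
    b = np.array([np.cos(np.radians(tilt_deg)),
                  np.sin(np.radians(tilt_deg))])           # baseline direction
    to_drop = np.asarray(interior_pt, float) - np.asarray(contact_pt, float)
    if np.dot(t, to_drop) < 0:
        t = -t                                             # point both vectors toward the drop
    if np.dot(b, to_drop) < 0:
        b = -b
    return np.degrees(np.arccos(np.clip(np.dot(t, b), -1.0, 1.0)))

# toy check: a circular cap of radius 1 centred at (0, 0.5) on a horizontal baseline
phi = np.linspace(0.2, np.pi - 0.2, 80)
x, y = np.cos(phi), 0.5 + np.sin(phi)
conic = fit_conic(x, y)
left_contact = (-np.sqrt(0.75), 0.0)                       # where the circle meets y = 0
print(contact_angle(conic, left_contact, interior_pt=(0.0, 1.0), tilt_deg=0.0))  # ~120 deg
```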

  3. Brief Report: Exploratory Analysis of the ADOS Revised Algorithm--Specificity and Predictive Value with Hispanic Children Referred for Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-01-01

    This study compared Autism diagnostic observation schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module…

  4. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive

  5. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    SciTech Connect

    Moura, Eduardo S.; Rostelato, Maria Elisa C. M.; Zeituni, Carlos A.

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR ¹⁹²Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes a grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine-Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements were performed. Results: Differences in the relative response as high as 11.5% were found from the homogeneous setup when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The

  6. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two level discrete cosine transform (DCT) and a two level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps. (1) Transform an image by a two level DWT followed by a DCT to produce two matrices: DC- and AC-Matrix, or low and high frequency matrix, respectively, (2) apply a second level DCT on the DC-Matrix to generate two arrays, namely nonzero-array and zero-array, (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high-frequencies generated by the second level DWT, (4) apply arithmetic coding to the output of previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS-algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values with the decoded AC-coefficients are combined in one matrix followed by an inverse two level DCT and two level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square-error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
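
    A minimal Python sketch of the transform front-end (step 1), assuming PyWavelets and SciPy are available: two DWT levels followed by a DCT on the coarsest approximation band, plus a coarse quantizer to show why most high-frequency entries become zero. The Minimize-Matrix-Size coding, arithmetic coding and FMS decoding stages of the paper are not reproduced; the wavelet choice and quantization step are illustrative.

```python
import numpy as np
import pywt
from scipy.fftpack import dct

def dct2(block):
    # separable 2-D type-II DCT with orthonormal scaling
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def two_level_dwt_dct(image, wavelet="db2"):
    """Two DWT levels, then a DCT on the coarsest band.
    Returns the DC-matrix (low frequency) and the list of high-frequency bands."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(image, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(cA1, wavelet)
    dc_matrix = dct2(cA2)
    high_bands = [cH2, cV2, cD2, cH1, cV1, cD1]
    return dc_matrix, high_bands

def quantize(band, q=10.0):
    # coarse scalar quantization; most high-frequency entries collapse to zero,
    # which a Minimize-Matrix-Size style coder can then exploit
    return np.round(band / q).astype(np.int32)

# toy usage on a synthetic 64x64 "surface patch"
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
image = 100 * np.sin(6 * x) * np.cos(4 * y)
dc, highs = two_level_dwt_dct(image)
print(dc.shape, [h.shape for h in highs])
print("nonzero high-frequency coefficients after quantization:",
      sum(np.count_nonzero(quantize(h)) for h in highs))
```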

  7. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-01

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic

  8. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.
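
    A minimal Python sketch of the Newton-Kleinman idea referred to above: starting from a stabilizing gain, each iteration solves one Lyapunov equation and updates the feedback gain directly, avoiding a full Riccati solve. The Chandrasekhar initialization and the Smith-type acceleration of the paper are not included; the double-integrator example and the initial gain are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def newton_kleinman(A, B, Q, R, K0, iters=20, tol=1e-10):
    """Iteratively refine a stabilizing feedback gain K so that u = -K x minimizes
    the LQR cost; each step solves one Lyapunov equation instead of a Riccati one."""
    K = K0.copy()
    Rinv = np.linalg.inv(R)
    for _ in range(iters):
        Acl = A - B @ K
        # solve (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K_new = Rinv @ B.T @ P
        if np.linalg.norm(K_new - K) < tol:
            return K_new, P
        K = K_new
    return K, P

# toy usage: a 2-state double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[1.0, 1.0]])                     # any stabilizing initial gain
K, P = newton_kleinman(A, B, Q, R, K0)
P_ref = solve_continuous_are(A, B, Q, R)        # cross-check against the Riccati solution
print(K, np.linalg.inv(R) @ B.T @ P_ref)
```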

  9. Novel fluorescently labeled peptide compounds for detection of oxidized low-density lipoprotein at high specificity.

    PubMed

    Sato, Akira; Yamanaka, Hikaru; Oe, Keitaro; Yamazaki, Yoji; Ebina, Keiichi

    2014-10-01

    The probes for specific detection of oxidized low-density lipoprotein (ox-LDL) in plasma and in atherosclerotic plaques are expected to be useful for the identification, diagnosis, prevention, and treatment of atherosclerosis. In this study, to develop a fluorescent peptide probe for specific detection of ox-LDL, we investigated the interaction of fluorescein isothiocyanate (FITC)-labeled peptides with ox-LDL using polyacrylamide gel electrophoresis. Two heptapeptides (KWYKDGD and KP6) coupled through the ε-amino group of K at the N-terminus to FITC in the presence/absence of a 6-amino-n-caproic acid (AC) linker to FITC--(FITC-AC)KP6 and (FITC)KP6--both bound with high specificity to ox-LDL in a dose-dependent manner. In contrast, a tetrapeptide (YKDG) labeled with FITC at the N-terminus and a pentapeptide (YKDGK) coupled through the ε-amino group of K at the C-terminus to FITC did not bind selectively to ox-LDL. Furthermore, (FITC)KP6 and (FITC-AC)KP6 bound with high specificity to the protein in mouse plasma (probably ox-LDL fraction). These findings strongly suggest that (FITC)KP6 and (FITC-AC)KP6 may be effective novel fluorescent probes for specific detection of ox-LDL.

  10. A New Switching-Based Median Filtering Scheme and Algorithm for Removal of High-Density Salt and Pepper Noise in Images

    NASA Astrophysics Data System (ADS)

    Jayaraj, V.; Ebenezer, D.

    2010-12-01

    A new switching-based median filtering scheme for restoration of images that are highly corrupted by salt and pepper noise is proposed. An algorithm based on the scheme is developed. The new scheme introduces the concept of substitution of noisy pixels by linear prediction prior to estimation. A novel simplified linear predictor is developed for this purpose. The objective of the scheme and algorithm is the removal of high-density salt and pepper noise in images. The new algorithm shows significantly better image quality with good PSNR, reduced MSE, good edge preservation, and reduced streaking. The good performance is achieved with reduced computational complexity. A comparison of the performance is made with several existing algorithms in terms of visual and quantitative results. The performance of the proposed scheme and algorithm is demonstrated.
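
    A minimal Python sketch of the switching idea: only pixels detected as salt (255) or pepper (0) are altered, each is first substituted by a simple linear prediction from causal neighbours and then replaced by the median of the noise-free pixels in its window. The predictor and window size are simplified stand-ins for the paper's scheme.

```python
import numpy as np

def switching_median(img, lo=0, hi=255, win=1):
    """Restore only pixels detected as salt (hi) or pepper (lo) noise."""
    out = img.astype(np.float64).copy()
    noisy = (img == lo) | (img == hi)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            if not noisy[i, j]:
                continue
            # linear prediction from already-filtered causal neighbours (west, north)
            preds = [out[i, j - 1] if j > 0 else None,
                     out[i - 1, j] if i > 0 else None]
            preds = [p for p in preds if p is not None]
            out[i, j] = np.mean(preds) if preds else 128.0
            # median of the noise-free pixels in the (2*win+1)^2 window, if any exist
            i0, i1 = max(0, i - win), min(H, i + win + 1)
            j0, j1 = max(0, j - win), min(W, j + win + 1)
            window = img[i0:i1, j0:j1]
            clean = window[(window != lo) & (window != hi)]
            if clean.size:
                out[i, j] = np.median(clean)
    return out.astype(img.dtype)

# toy usage: corrupt a gradient image with 70% salt-and-pepper noise and restore it
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(20, 230, 64).astype(np.uint8), (64, 1))
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.7
noisy[mask] = rng.choice([0, 255], size=mask.sum()).astype(np.uint8)
restored = switching_median(noisy)
print("MSE noisy:   ", np.mean((noisy.astype(float) - clean) ** 2))
print("MSE restored:", np.mean((restored.astype(float) - clean) ** 2))
```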

  11. Transgenic mice for intersectional targeting of neural sensors and effectors with high specificity and performance

    PubMed Central

    Madisen, Linda; Garner, Aleena R.; Shimaoka, Daisuke; Chuong, Amy S.; Klapoetke, Nathan C.; Li, Lu; van der Bourg, Alexander; Niino, Yusuke; Egolf, Ladan; Monetti, Claudio; Gu, Hong; Mills, Maya; Cheng, Adrian; Tasic, Bosiljka; Nguyen, Thuc Nghi; Sunkin, Susan M.; Benucci, Andrea; Nagy, Andras; Miyawaki, Atsushi; Helmchen, Fritjof; Empson, Ruth M.; Knöpfel, Thomas; Boyden, Edward S.; Reid, R. Clay; Carandini, Matteo; Zeng, Hongkui

    2015-01-01

    Summary An increasingly powerful approach for studying brain circuits relies on targeting genetically encoded sensors and effectors to specific cell types. However, current approaches for this are still limited in functionality and specificity. Here we utilize several intersectional strategies to generate multiple transgenic mouse lines expressing high levels of novel genetic tools with high specificity. We developed driver and double reporter mouse lines and viral vectors using the Cre/Flp and Cre/Dre double recombinase systems, and established a new, retargetable genomic locus, TIGRE, which allowed the generation of a large set of Cre/tTA dependent reporter lines expressing fluorescent proteins, genetically encoded calcium, voltage, or glutamate indicators, and optogenetic effectors, all at substantially higher levels than before. High functionality was shown in example mouse lines for GCaMP6, YCX2.60, VSFP Butterfly 1.2, and Jaws. These novel transgenic lines greatly expand the ability to monitor and manipulate neuronal activities with increased specificity. PMID:25741722

  12. A high-speed, high-efficiency phase controller for coherent beam combining based on SPGD algorithm

    SciTech Connect

    Huang, Zh M; Liu, C L; Li, J F; Zhang, D Y

    2014-04-28

    A phase controller for coherent beam combining (CBC) of fibre lasers has been designed and manufactured based on a stochastic parallel gradient descent (SPGD) algorithm and a field programmable gate array (FPGA). The theoretical analysis shows that the iteration rate is higher than 1.9 MHz, and the average compensation bandwidth of CBC for 5 or 20 channels is 50 kHz or 12.5 kHz, respectively. The tests show that the phase controller ensures reliable phase locking of lasers: when the phases of five lasers are locked by the improved control strategy with a variable gain, the energy encircled in the target is increased 23-fold compared with a single output, the phase control accuracy is better than λ/20, and the combining efficiency is 92%. (control of laser radiation parameters)
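
    A minimal numerical Python sketch of a two-sided SPGD iteration for phase locking: random bipolar perturbations are applied, the metric is evaluated for both signs, and the controls are updated in proportion to the metric difference. The gain, perturbation amplitude, channel count and power-in-the-bucket metric are illustrative, not the controller's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5                                            # number of combined beams
true_phase = rng.uniform(0, 2 * np.pi, N)        # unknown piston phases of the channels

def metric(corrections):
    # power-in-the-bucket signal: maximal when all residual phases are equal
    field = np.sum(np.exp(1j * (true_phase + corrections)))
    return np.abs(field) ** 2 / N ** 2

def spgd(iters=1500, gain=10.0, delta=0.1):
    u = np.zeros(N)                               # phase corrections applied by the controller
    for _ in range(iters):
        perturb = delta * rng.choice([-1.0, 1.0], N)   # random bipolar perturbation
        j_plus = metric(u + perturb)
        j_minus = metric(u - perturb)
        # stochastic parallel gradient descent update (ascent on the metric)
        u += gain * (j_plus - j_minus) * perturb
    return u

u = spgd()
print("combining efficiency:", metric(u))         # approaches 1.0 when the phases are locked
```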

  13. Specific heat of pristine and brominated graphite fibers, composites and HOPG. [Highly Oriented Pyrolytic Graphite

    NASA Technical Reports Server (NTRS)

    Hung, Ching-Chen; Maciag, Carolyn

    1987-01-01

    Differential scanning calorimetry was used to obtain specific heat values of pristine and brominated P-100 graphite fibers and brominated P-100/epoxy composite as well as pristine and brominated highly oriented pyrolytic graphite (HOPG) for comparison. Based on the experimental results obtained, specific heat values are calculated for several different temperatures, with a standard deviation estimated at 1.4 percent of the average values. The data presented here are useful in designing heat transfer devices (such as airplane de-icing heaters) based on brominated graphite fibers.

  14. High sensitivity and specificity of elevated cerebrospinal fluid kappa free light chains in suspected multiple sclerosis.

    PubMed

    Hassan-Smith, G; Durant, L; Tsentemeidou, A; Assi, L K; Faint, J M; Kalra, S; Douglas, M R; Curnow, S J

    2014-11-15

    Cerebrospinal fluid (CSF) analysis is routinely used in the diagnostic work-up of multiple sclerosis (MS), by detecting CSF-specific oligoclonal bands (OCB). More recently, several studies have reported CSF free light chains (FLC) as an alternative. We show that absolute CSF κFLC concentrations were highly sensitive - more than OCB testing - and specific for clinically isolated syndrome, relapsing remitting and primary progressive MS. Measurement of κFLC alone was sufficient. Our results suggest that CSF κFLC levels measured by nephelometry, if validated in a larger series, are a preferred test to OCB analysis in the diagnostic work-up of patients suspected of having MS.

  15. Toward site-specific, homogeneous and highly stable fluorescent silver nanoclusters fabrication on triplex DNA scaffolds

    PubMed Central

    Feng, Lingyan; Huang, Zhenzhen; Ren, Jinsong; Qu, Xiaogang

    2012-01-01

    A new strategy to create site-specific, homogeneous, and bright silver nanoclusters (AgNCs) with high-stability was demonstrated by triplex DNA as template. By reasonable design of DNA sequence, homogeneous Ag2 cluster was obtained in the predefined position of CG.C+ site of triplex DNA. This strategy was also explored for controlled alignment of AgNCs on the DNA nanoscaffold. To the best of our knowledge, this was the first example to simultaneously answer the challenges of excellent site-specific nucleation and growth, homogeneity and stability against salt of DNA-templated AgNCs. PMID:22570417

  16. Direct glass bonded high specific power silicon solar cells for space applications

    NASA Technical Reports Server (NTRS)

    Dinetta, L. C.; Rand, J. A.; Cummings, J. R.; Lampo, S. M.; Shreve, K. P.; Barnett, Allen M.

    1991-01-01

    A lightweight, radiation hard, high performance, ultra-thin silicon solar cell is described that incorporates light trapping and a cover glass as an integral part of the device. The manufacturing feasibility of high specific power, radiation insensitive, thin silicon solar cells was demonstrated experimentally and with a model. Ultra-thin, light trapping structures were fabricated and the light trapping demonstrated experimentally. The design uses a micro-machined, grooved back surface to increase the optical path length by a factor of 20. This silicon solar cell will be highly tolerant to radiation because the base width is less than 25 microns making it insensitive to reduction in minority carrier lifetime. Since the silicon is bonded without silicone adhesives, this solar cell will also be insensitive to UV degradation. These solar cells are designed as a form, fit, and function replacement for existing state of the art silicon solar cells with the effect of simultaneously increasing specific power, power/area, and power supply life. Using a 3-mil thick cover glass and a 0.3 g/cm² supporting Al honeycomb, a specific power for the solar cell plus cover glass and honeycomb of 80.2 W/kg is projected. The development of this technology can result in a revolutionary improvement in high survivability silicon solar cell products for space with the potential to displace all existing solar cell technologies for single junction space applications.

  17. Phthalonitrile-Based Carbon Foam with High Specific Mechanical Strength and Superior Electromagnetic Interference Shielding Performance.

    PubMed

    Zhang, Liying; Liu, Ming; Roy, Sunanda; Chu, Eng Kee; See, Kye Yak; Hu, Xiao

    2016-03-23

    High-performance electromagnetic interference (EMI) shielding materials are urgently needed to relieve the increasing stress over electromagnetic pollution problems arising from the growing demand for electronic and electrical devices. In this work, a novel ultralight (0.15 g/cm³) carbon foam was prepared by direct carbonization of phthalonitrile (PN)-based polymer foam aiming to simultaneously achieve high EMI shielding effectiveness (SE) and deliver effective weight reduction without detrimental reduction of the mechanical properties. The carbon foam prepared by this method had a specific compressive strength of ∼6.0 MPa·cm³/g. A high EMI SE of ∼51.2 dB was achieved, contributed by its intrinsic nitrogen-containing structure (3.3 wt% of nitrogen atoms). The primary EMI shielding mechanism of such carbon foam was determined to be absorption. Moreover, the carbon foams showed an excellent specific EMI SE of 341.1 dB·cm³/g, which was at least 2 times higher than most of the reported materials. The remarkable EMI shielding performance combined with high specific compressive strength indicated that the carbon foam could be considered as a low-density and high-performance EMI shielding material for use in areas where mechanical integrity is desired.

  18. Characterization of specific high affinity receptors for human tumor necrosis factor on mouse fibroblasts

    SciTech Connect

    Hass, P.E.; Hotchkiss, A.; Mohler, M.; Aggarwal, B.B.

    1985-10-05

    Mouse L-929 fibroblasts, an established line of cells, are very sensitive to lysis by human lymphotoxin (hTNF-beta). Specific binding of a highly purified preparation of hTNF-beta to these cells was examined. Recombinant DNA-derived hTNF-beta was radiolabeled with (³H)propionyl succinimidate at the lysine residues of the molecule to a specific activity of 200 microCi/nmol of protein. (³H)hTNF-beta was purified by high performance gel permeation chromatography and the major fraction was found to be monomeric by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The labeled hTNF-beta was fully active in causing lysis of L-929 fibroblasts and bound specifically to high affinity binding sites on these cells. Scatchard analysis of the binding data revealed the presence of a single class of high affinity receptors with an apparent Kd of 6.7 × 10⁻¹¹ M and a capacity of 3200 binding sites/cell. Unlabeled recombinant DNA-derived hTNF-beta was found to be approximately 5-fold more effective as a competitive inhibitor of binding than the natural hTNF-beta. The binding of hTNF-beta to these mouse fibroblasts was also correlated with the ultimate cell lysis. Neutralizing polyclonal antibodies to hTNF-beta efficiently inhibited the binding of (³H)hTNF-beta to the cells. The authors conclude that the specific high affinity binding site is the receptor for hTNF-beta and may be involved in lysis of cells.
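
    A minimal Python sketch of the Scatchard analysis mentioned above: bound/free is regressed against bound, the slope gives -1/Kd and the x-intercept gives the number of binding sites. The synthetic data are generated from the Kd and site number reported in the abstract; they are not the original measurements.

```python
import numpy as np

def scatchard(bound, free):
    """Linear regression of bound/free against bound.
    Slope = -1/Kd, x-intercept = Bmax (total binding sites)."""
    y = bound / free
    slope, intercept = np.polyfit(bound, y, 1)
    kd = -1.0 / slope
    bmax = -intercept / slope
    return kd, bmax

# synthetic single-site binding data: Kd = 6.7e-11 M, Bmax = 3200 sites/cell
kd_true, bmax_true = 6.7e-11, 3200.0
free = np.logspace(-12, -9, 12)                 # free ligand concentrations (M)
bound = bmax_true * free / (kd_true + free)     # sites occupied per cell
kd, bmax = scatchard(bound, free)
print(f"Kd = {kd:.2e} M, Bmax = {bmax:.0f} sites/cell")
```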

  19. Hydrazide functionalized core-shell magnetic nanocomposites for highly specific enrichment of N-glycopeptides.

    PubMed

    Liu, Liting; Yu, Meng; Zhang, Ying; Wang, Changchun; Lu, Haojie

    2014-05-28

    In view of the biological significance of glycosylation for human health, profiling of glycoproteome from complex biological samples is highly inclined toward the discovery of disease biomarkers and clinical diagnosis. Nevertheless, because of the existence of glycopeptides at relatively low abundances compared with nonglycosylated peptides and glycan microheterogeneity, glycopeptides need to be highly selectively enriched from complex biological samples for mass spectrometry analysis. Herein, a new type of hydrazide functionalized core-shell magnetic nanocomposite has been synthesized for highly specific enrichment of N-glycopeptides. The nanocomposites with both the magnetic core and the polymer shell hanging high density of hydrazide groups were prepared by first functionalization of the magnetic core with polymethacrylic acid by reflux precipitation polymerization to obtain the Fe3O4@poly(methacrylic acid) (Fe3O4@PMAA) and then modification of the surface of Fe3O4@PMAA with adipic acid dihydrazide (ADH) to obtain Fe3O4@poly(methacrylic hydrazide) (Fe3O4@PMAH). The abundant hydrazide groups toward highly specific enrichment of glycopeptides and the magnetic core make it suitable for large-scale, high-throughput, and automated sample processing. In addition, the hydrophilic polymer surface can provide low nonspecific adsorption of other peptides. Compared to commercially available hydrazide resin, Fe3O4@PMAH improved more than 5 times the signal-to-noise ratio of standard glycopeptides. Finally, this nanocomposite was applied in the profiling of N-glycoproteome from the colorectal cancer patient serum. In total, 175 unique glycopeptides and 181 glycosylation sites corresponding to 63 unique glycoproteins were identified in three repeated experiments, with the specificities of the enriched glycopeptides and corresponding glycoproteins of 69.6% and 80.9%, respectively. Because of all these attractive features, we believe that this novel hydrazide functionalized

  20. A Rapid In-Clinic Test Detects Acute Leptospirosis in Dogs with High Sensitivity and Specificity.

    PubMed

    Kodjo, Angeli; Calleja, Christophe; Loenser, Michael; Lin, Dan; Lizer, Joshua

    2016-01-01

    A rapid IgM-detection immunochromatographic test (WITNESS® Lepto, Zoetis) has recently become available to identify acute canine leptospirosis at the point of care. Diagnostic sensitivity and specificity of the test were evaluated by comparison with the microscopic agglutination assay (MAT), using a positive cut-off titer of ≥800. Banked serum samples from dogs exhibiting clinical signs and suspected leptospirosis were selected to form three groups based on MAT titer: (1) positive (n = 50); (2) borderline (n = 35); and (3) negative (n = 50). Using an analysis to weight group sizes to reflect French prevalence, the sensitivity and specificity were 98% and 93.5% (88.2% unweighted), respectively. This test rapidly identifies cases of acute canine leptospirosis with high levels of sensitivity and specificity with no interference from previous vaccination. PMID:27110562

  1. A Rapid In-Clinic Test Detects Acute Leptospirosis in Dogs with High Sensitivity and Specificity

    PubMed Central

    Kodjo, Angeli; Calleja, Christophe; Loenser, Michael; Lin, Dan; Lizer, Joshua

    2016-01-01

    A rapid IgM-detection immunochromatographic test (WITNESS® Lepto, Zoetis) has recently become available to identify acute canine leptospirosis at the point of care. Diagnostic sensitivity and specificity of the test were evaluated by comparison with the microscopic agglutination assay (MAT), using a positive cut-off titer of ≥800. Banked serum samples from dogs exhibiting clinical signs and suspected leptospirosis were selected to form three groups based on MAT titer: (1) positive (n = 50); (2) borderline (n = 35); and (3) negative (n = 50). Using an analysis to weight group sizes to reflect French prevalence, the sensitivity and specificity were 98% and 93.5% (88.2% unweighted), respectively. This test rapidly identifies cases of acute canine leptospirosis with high levels of sensitivity and specificity with no interference from previous vaccination. PMID:27110562

  2. Synthesis of high specific activity (1-³H)farnesyl pyrophosphate

    SciTech Connect

    Saljoughian, M.; Morimoto, H.; Williams, P.G.

    1991-08-01

    The synthesis of tritiated farnesyl pyrophosphate with high specific activity is reported. trans-trans Farnesol was oxidized to the corresponding aldehyde followed by reduction with lithium aluminium tritide (5% ³H) to give trans-trans (1-³H)farnesol. The specific radioactivity of the alcohol was determined from its triphenylsilane derivative, prepared under very mild conditions. The tritiated alcohol was phosphorylated by initial conversion to an allylic halide, and subsequent treatment of the halide with tris-tetra-n-butylammonium hydrogen pyrophosphate. The hydride procedure followed in this work has advantages over existing methods for the synthesis of tritiated farnesyl pyrophosphate, with the possibility of higher specific activity and a much higher yield obtained. 10 refs., 3 figs.

  3. Cyclotron production of "very high specific activity" platinum radiotracers in No Carrier Added form

    NASA Astrophysics Data System (ADS)

    Birattari, C.; Bonardi, M.; Groppi, F.; Gini, L.; Gallorini, M.; Sabbioni, E.; Stroosnijder, M. F.

    2001-12-01

    At the "Radiochemistry Laboratory" of the Accelerators and Applied Superconductivity Laboratory, LASA, several production and quality assurance methods for short-lived and high specific activity radionuclides have been developed. Presently, the irradiations are carried out at the Scanditronix MC40 cyclotron (K=38; p, d, He-4 and He-3) of JRC-Ispra, Italy, of the European Community, while both chemical purity and specific activity determination are carried out at the TRIGA MARK II research reactor of the University of Pavia and at LASA itself. In order to optimize the irradiation conditions for platinum radiotracer production, both thin- and thick-target excitation functions of natOs(α,xn) nuclear reactions were measured. A very selective radiochemical separation to obtain Pt radiotracers in No Carrier Added form has been developed. Both the real specific activity and the chemical purity of the radiotracers have been determined by neutron activation analysis and atomic absorption spectrometry. An Isotopic Dilution Factor (IDF) of the order of 50 is achieved.

  4. A Highly Flexible, Automated System Providing Reliable Sample Preparation in Element- and Structure-Specific Measurements.

    PubMed

    Vorberg, Ellen; Fleischer, Heidi; Junginger, Steffen; Liu, Hui; Stoll, Norbert; Thurow, Kerstin

    2016-10-01

    Life science areas require specific sample pretreatment to increase the concentration of the analytes and/or to convert the analytes into an appropriate form for the detection and separation systems. Various workstations are commercially available, allowing for automated biological sample pretreatment. Nevertheless, due to the required temperature, pressure, and volume conditions in typical element- and structure-specific measurements, automated platforms are not suitable for analytical processes. Thus, the purpose of the presented investigation was the design, realization, and evaluation of an automated system ensuring high-precision sample preparation for a variety of analytical measurements. The developed system had to enable system adaptation and highly flexible performance. Furthermore, the system had to be capable of handling the wide range of required vessels simultaneously, allowing for less costly and time-consuming process steps. The system's functionality was confirmed in various validation sequences. Using element-specific measurements, the automated system was up to 25% more precise compared to the manual procedure and as precise as the manual procedure using structure-specific measurements.

  5. A novel and highly specific phage endolysin cell wall binding domain for detection of Bacillus cereus.

    PubMed

    Kong, Minsuk; Sim, Jieun; Kang, Taejoon; Nguyen, Hoang Hiep; Park, Hyun Kyu; Chung, Bong Hyun; Ryu, Sangryeol

    2015-09-01

    Rapid, specific and sensitive detection of pathogenic bacteria is crucial for public health and safety. Bacillus cereus is harmful as it causes foodborne illness and a number of systemic and local infections. We report a novel phage endolysin cell wall-binding domain (CBD) for B. cereus and the development of a highly specific and sensitive surface plasmon resonance (SPR)-based B. cereus detection method using the CBD. The newly discovered CBD from endolysin of PBC1, a B. cereus-specific bacteriophage, provides high specificity and binding capacity to B. cereus. By using the CBD-modified SPR chips, B. cereus can be detected in the range of 10⁵-10⁸ CFU/ml. More importantly, the detection limit can be improved to 10² CFU/ml by using a subtractive inhibition assay based on the pre-incubation of B. cereus and CBDs, removal of CBD-bound B. cereus, and SPR detection of the unbound CBDs. The present study suggests that the small and genetically engineered CBDs can be promising biological probes for B. cereus. We anticipate that the CBD-based SPR-sensing methods will be useful for the sensitive, selective, and rapid detection of B. cereus.

  6. A novel and highly specific phage endolysin cell wall binding domain for detection of Bacillus cereus.

    PubMed

    Kong, Minsuk; Sim, Jieun; Kang, Taejoon; Nguyen, Hoang Hiep; Park, Hyun Kyu; Chung, Bong Hyun; Ryu, Sangryeol

    2015-09-01

    Rapid, specific and sensitive detection of pathogenic bacteria is crucial for public health and safety. Bacillus cereus is harmful as it causes foodborne illness and a number of systemic and local infections. We report a novel phage endolysin cell wall-binding domain (CBD) for B. cereus and the development of a highly specific and sensitive surface plasmon resonance (SPR)-based B. cereus detection method using the CBD. The newly discovered CBD from endolysin of PBC1, a B. cereus-specific bacteriophage, provides high specificity and binding capacity to B. cereus. By using the CBD-modified SPR chips, B. cereus can be detected in the range of 10⁵-10⁸ CFU/ml. More importantly, the detection limit can be improved to 10² CFU/ml by using a subtractive inhibition assay based on the pre-incubation of B. cereus and CBDs, removal of CBD-bound B. cereus, and SPR detection of the unbound CBDs. The present study suggests that the small and genetically engineered CBDs can be promising biological probes for B. cereus. We anticipate that the CBD-based SPR-sensing methods will be useful for the sensitive, selective, and rapid detection of B. cereus. PMID:26043681

  7. Highly sensitive and specific colorimetric detection of cancer cells via dual-aptamer target binding strategy.

    PubMed

    Wang, Kun; Fan, Daoqing; Liu, Yaqing; Wang, Erkang

    2015-11-15

    Simple, rapid, sensitive and specific detection of cancer cells is of great importance for early and accurate cancer diagnostics and therapy. By coupling nanotechnology and dual-aptamer target binding strategies, we developed a colorimetric assay for visually detecting cancer cells with high sensitivity and specificity. The nanotechnology, including the high catalytic activity of PtAuNP and magnetic separation and concentration, plays a vital role in signal amplification and improvement of detection sensitivity. The color change caused by a small number of target cancer cells (10 cells/mL) can be clearly distinguished by the naked eye. The dual-aptamer target binding strategy guarantees detection specificity: large numbers of non-cancer cells and different cancer cells (10⁴ cells/mL) do not cause an obvious color change. A detection limit as low as 10 cells/mL, with a linear detection range from 10 to 10⁵ cells/mL, was reached in experimental detections in phosphate buffer solution as well as in a serum sample. The developed enzyme-free and cost-effective colorimetric assay is simple and requires no instrumentation while still providing excellent sensitivity, specificity and repeatability, with potential application in point-of-care cancer diagnosis.

  8. High Transferability of Homoeolog-Specific Markers between Bread Wheat and Newly Synthesized Hexaploid Wheat Lines

    PubMed Central

    Zeng, Deying; Luo, Jiangtao; Li, Zenglin; Chen, Gang; Zhang, Lianquan; Ning, Shunzong; Yuan, Zhongwei; Zheng, Youliang; Hao, Ming; Liu, Dengcai

    2016-01-01

    Bread wheat (Triticum aestivum, 2n = 6x = 42, AABBDD) has a complex allohexaploid genome, which makes it difficult to differentiate between the homoeologous sequences and assign them to the chromosome A, B, or D subgenomes. The chromosome-based draft genome sequence of the ‘Chinese Spring’ common wheat cultivar enables the large-scale development of polymerase chain reaction (PCR)-based markers specific for homoeologs. Based on high-confidence ‘Chinese Spring’ genes with known functions, we developed 183 putative homoeolog-specific markers for chromosomes 4B and 7B. These markers were used in PCR assays for the 4B and 7B nullisomes and their euploid synthetic hexaploid wheat (SHW) line that was newly generated from a hybridization between Triticum turgidum (AABB) and the wild diploid species Aegilops tauschii (DD). Up to 64% of the markers for chromosomes 4B or 7B in the SHW background were confirmed to be homoeolog-specific. Thus, these markers were highly transferable between the ‘Chinese Spring’ bread wheat and SHW lines. Homoeolog-specific markers designed using genes with known functions may be useful for genetic investigations involving homoeologous chromosome tracking and homoeolog expression and interaction analyses. PMID:27611704

  9. High Transferability of Homoeolog-Specific Markers between Bread Wheat and Newly Synthesized Hexaploid Wheat Lines.

    PubMed

    Zeng, Deying; Luo, Jiangtao; Li, Zenglin; Chen, Gang; Zhang, Lianquan; Ning, Shunzong; Yuan, Zhongwei; Zheng, Youliang; Hao, Ming; Liu, Dengcai

    2016-01-01

    Bread wheat (Triticum aestivum, 2n = 6x = 42, AABBDD) has a complex allohexaploid genome, which makes it difficult to differentiate between the homoeologous sequences and assign them to the chromosome A, B, or D subgenomes. The chromosome-based draft genome sequence of the 'Chinese Spring' common wheat cultivar enables the large-scale development of polymerase chain reaction (PCR)-based markers specific for homoeologs. Based on high-confidence 'Chinese Spring' genes with known functions, we developed 183 putative homoeolog-specific markers for chromosomes 4B and 7B. These markers were used in PCR assays for the 4B and 7B nullisomes and their euploid synthetic hexaploid wheat (SHW) line that was newly generated from a hybridization between Triticum turgidum (AABB) and the wild diploid species Aegilops tauschii (DD). Up to 64% of the markers for chromosomes 4B or 7B in the SHW background were confirmed to be homoeolog-specific. Thus, these markers were highly transferable between the 'Chinese Spring' bread wheat and SHW lines. Homoeolog-specific markers designed using genes with known functions may be useful for genetic investigations involving homoeologous chromosome tracking and homoeolog expression and interaction analyses. PMID:27611704

  10. Inverse transformation algorithm of transient electromagnetic field and its high-resolution continuous imaging interpretation method

    NASA Astrophysics Data System (ADS)

    Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua

    2015-04-01

    We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. The continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on the imaging of a fictitious wave field could be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses wave-field features that make it possible to apply wave-field interpretation methods to TEM and improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The equation governing the wave field transformation is a Fredholm integral equation of the first kind, which is a typical ill-posed equation. Additionally, TEM data span a large dynamic time range, which adds to the difficulty of this ill-posed problem. The wave field transformation is implemented using a pre-conditioned regularized conjugate gradient method. The continuous imaging of the fictitious wave field is implemented using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data with the method proposed in this paper and obtained a satisfactory interpretation result.
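
    A minimal Python sketch of the regularization idea behind the wave-field transformation: a first-kind Fredholm equation is discretized and solved by conjugate gradients on Tikhonov-regularized normal equations. The Gaussian kernel, regularization parameter and two-peak test model are illustrative; this is not the authors' preconditioner or TEM kernel.

```python
import numpy as np

def cg_normal_equations(A, b, lam=1e-3, iters=200, tol=1e-10):
    """Conjugate gradients on the regularized normal equations
    (A^T A + lam I) x = A^T b, a standard way to stabilize a first-kind
    Fredholm problem."""
    n = A.shape[1]
    x = np.zeros(n)
    r = A.T @ b - (A.T @ (A @ x) + lam * x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p) + lam * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy usage: smooth Gaussian kernel, recover a two-peak model from its blurred, noisy image
n = 200
s = np.linspace(0, 1, n)
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.03 ** 2)) * (s[1] - s[0])
x_true = np.exp(-((s - 0.3) / 0.05) ** 2) + 0.6 * np.exp(-((s - 0.7) / 0.06) ** 2)
b = K @ x_true + 1e-4 * np.random.default_rng(0).normal(size=n)
x_rec = cg_normal_equations(K, b, lam=1e-6)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```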

  11. Analysis of high resolution FTIR spectra from synchrotron sources using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    van Wijngaarden, Jennifer; Desmond, Durell; Leo Meerts, W.

    2015-09-01

    Room temperature Fourier transform infrared spectra of the four-membered heterocycle trimethylene sulfide were collected with a resolution of 0.00096 cm⁻¹ using synchrotron radiation from the Canadian Light Source from 500 to 560 cm⁻¹. The in-plane ring deformation mode (ν13) at ∼529 cm⁻¹ exhibits dense rotational structure due to the presence of ring inversion tunneling and leads to a doubling of all transitions. Preliminary analysis of the experimental spectrum was pursued via traditional methods involving assignment of quantum numbers to individual transitions in order to conduct least squares fitting to determine the spectroscopic parameters. Following this approach, the assignment of 2358 transitions led to the experimental determination of an effective Hamiltonian. This model describes transitions in the P and R branches to J′ = 60 and Ka′ = 10 that connect the tunneling split ground and vibrationally excited states of the ν13 band although a small number of low intensity features remained unassigned. The use of evolutionary algorithms (EA) for automated assignment was explored in tandem and yielded a set of spectroscopic constants that re-create this complex experimental spectrum to a similar degree. The EA routine was also applied to the previously well-understood ring puckering vibration of another four-membered ring, azetidine (Zaporozan et al., 2010). This test provided further evidence of the robust nature of the EA method when applied to spectra for which the underlying physics is well understood.

  12. High-energy mode-locked fiber lasers using multiple transmission filters and a genetic algorithm.

    PubMed

    Fu, Xing; Kutz, J Nathan

    2013-03-11

    We theoretically demonstrate that in a laser cavity mode-locked by nonlinear polarization rotation (NPR) using sets of waveplates and passive polarizer, the energy performance can be significantly increased by incorporating multiple NPR filters. The NPR filters are engineered so as to mitigate the multi-pulsing instability in the laser cavity which is responsible for limiting the single pulse per round trip energy in a myriad of mode-locked cavities. Engineering of the NPR filters for performance is accomplished by implementing a genetic algorithm that is capable of systematically identifying viable and optimal NPR settings in a vast parameter space. Our study shows that five NPR filters can increase the cavity energy by approximately a factor of five, with additional NPRs contributing little or no enhancements beyond this. With the advent and demonstration of electronic controls for waveplates and polarizers, the analysis suggests a general design and engineering principle that can potentially close the order of magnitude energy gap between fiber based mode-locked lasers and their solid state counterparts.

  13. Production and characterization of highly specific monoclonal antibodies to D-glutamic acid.

    PubMed

    Sakamoto, Seiichi; Matsuura, Yurino; Yonenaga, Yayoi; Tsuneura, Yumi; Aso, Mariko; Kurose, Hitoshi; Tanaka, Hiroyuki; Morimoto, Satoshi

    2014-12-01

    Most of the functions of D-amino acids (D-AA) remain unclear because few analytical methods are available for their specific detection and determination. In this study, a highly specific monoclonal antibody to D-glutamic acid (D-Glu-MAb) was produced using a hybridoma method. Characterization of D-Glu-MAb by indirect enzyme-linked immunosorbent assay (ELISA) revealed that it has high selectivity for D-Glu-glutaraldehyde (GA) conjugates, while no cross-reaction was observed when 38 other kinds of AA-GA conjugates were used. Moreover, subsequent indirect competitive ELISA disclosed that the epitope of D-Glu-MAb is the D-Glu-GA molecule in the conjugates, suggesting that D-Glu-MAb could be a useful tool for the functional analysis of D-Glu by immunostaining.

  14. Facile synthesis of boronic acid-functionalized magnetic carbon nanotubes for highly specific enrichment of glycopeptides

    NASA Astrophysics Data System (ADS)

    Ma, Rongna; Hu, Junjie; Cai, Zongwei; Ju, Huangxian

    2014-02-01

    A stepwise strategy was developed to synthesize boronic acid functionalized magnetic carbon nanotubes (MCNTs) for highly specific enrichment of glycopeptides. The MCNTs were synthesized by a solvothermal reaction of Fe3+ loaded on the acid-treated CNTs and modified with 1-pyrenebutanoic acid N-hydroxysuccinimidyl ester (PASE) to bind aminophenylboronic acid (APBA) via an amide reaction. The introduction of PASE could bridge the MCNT and APBA, suppress the nonspecific adsorption and reduce the steric hindrance among the bound molecules. Due to the excellent structure of the MCNTs, the functionalization of PASE and then APBA on MCNTs was quite simple, specific and effective. The glycopeptides enrichment and separation with a magnetic field could be achieved by their reversible covalent binding with the boronic group of APBA-MCNTs. The exceptionally large specific surface area and the high density of boronic acid groups of APBA-MCNTs resulted in rapid and highly efficient enrichment of glycopeptides, even in the presence of large amounts of interfering nonglycopeptides. The functional MCNTs possessed high selectivity for the enrichment of 21 glycopeptides from the digest of horseradish peroxidase, as demonstrated by MALDI-TOF mass spectrometric analysis, which detected more glycopeptides than the usual 9 obtained with commercially available APBA-agarose. The proposed system showed better specificity for glycopeptides even in the presence of non-glycopeptides at 50 times higher concentration. The boronic acid functionalized MCNTs provide a promising selective enrichment platform for precise glycoproteomic analysis.

  15. A new highly specific buprenorphine immunoassay for monitoring buprenorphine compliance and abuse.

    PubMed

    Melanson, Stacy E F; Snyder, Marion L; Jarolim, Petr; Flood, James G

    2012-04-01

    Urine buprenorphine screening is utilized to assess buprenorphine compliance and to detect illicit use. Robust screening assays should be specific for buprenorphine without cross-reactivity with other opioids, which are frequently present in patients treated for opioid addiction and chronic pain. We evaluated the new Lin-Zhi urine buprenorphine enzyme immunoassay (EIA) as a potentially more specific alternative to the Microgenics cloned enzyme donor immunoassay (CEDIA) by using 149 urines originating from patients treated for chronic pain and opioid addiction. The EIA methodology offered specific detection of buprenorphine use (100%) (106/106) and provided superior overall agreement with liquid chromatography-tandem mass spectrometry, 95% (142/149) and 91% (135/149) using 5 ng/mL (EIA[5]) and 10 ng/mL (EIA[10]) cutoffs, respectively, compared to CEDIA, 79% (117/149). CEDIA generated 27 false positives, most of which were observed in patients positive for other opioids, providing an overall specificity of 75% (79/106). CEDIA also demonstrated interference from structurally unrelated drugs, chloroquine and hydroxychloroquine. CEDIA and EIA[5] yielded similar sensitivities, both detecting 96% (22/23) of positive samples from patients prescribed buprenorphine, and 88% (38/43) and 81% (35/43), respectively, of all positive samples (illicit and prescribed users). The EIA methodology provides highly specific and sensitive detection of buprenorphine use, without the potential for opioid cross-reactivity.

  16. Improving TCP throughput performance on high-speed networks with a receiver-side adaptive acknowledgment algorithm

    NASA Astrophysics Data System (ADS)

    Yeung, Wing-Keung; Chang, Rocky K. C.

    1998-12-01

    A drastic TCP performance degradation was reported when TCP is operated on ATM networks. This deadlock problem is 'caused' by the high speed provided by the ATM networks. Therefore this deadlock problem is shared by any high-speed networking technology when TCP is run on it. The problems are caused by the interaction of the sender-side and receiver-side Silly Window Syndrome (SWS) avoidance algorithms because the network's Maximum Segment Size (MSS) is no longer small when compared with the sender and receiver socket buffer sizes. Here we propose a new receiver-side adaptive acknowledgment algorithm (RSA3) to eliminate the deadlock problems while maintaining the SWS avoidance mechanisms. Unlike the current delayed acknowledgment strategy, the RSA3 does not rely on the exact value of the MSS and the receiver's buffer size to determine the acknowledgment threshold. Instead, the RSA3 periodically probes the sender to estimate the maximum amount of data that can be sent without receiving acknowledgment from the receiver. The acknowledgment threshold is computed as 35 percent of the estimate. In this way, deadlock-free TCP transmission is guaranteed. Simulation studies have shown that the RSA3 even improves the throughput performance in some non-deadlock regions. This is due to a quicker response taken by the RSA3 receiver. We have also evaluated different acknowledgment thresholds. It is found that the case of 35 percent gives the best performance when the sender and receiver buffer sizes are large.
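
    A minimal Python sketch of the receiver-side bookkeeping implied by RSA3: the acknowledgment threshold is 35 percent of a periodically probed estimate of the sender's maximum in-flight data, independent of the MSS and the receive buffer. The class interface, the probing hook and the 9180-byte segment size (a common IP-over-ATM MTU) are illustrative assumptions.

```python
class RSA3Receiver:
    """Receiver-side adaptive acknowledgment, sketched after the RSA3 idea:
    the ACK threshold is a fixed fraction (35%) of the estimated amount of data
    the sender can have in flight, rather than a function of MSS or buffer size."""

    def __init__(self, ack_fraction=0.35, initial_window_estimate=65535):
        self.ack_fraction = ack_fraction
        self.window_estimate = initial_window_estimate   # bytes, refreshed by probing
        self.unacked_bytes = 0

    def update_window_estimate(self, probed_max_in_flight):
        # called periodically after probing the sender
        self.window_estimate = probed_max_in_flight

    def on_segment(self, nbytes):
        """Return True when an ACK should be sent for the accumulated data."""
        self.unacked_bytes += nbytes
        if self.unacked_bytes >= self.ack_fraction * self.window_estimate:
            self.unacked_bytes = 0
            return True
        return False

# toy usage: 9180-byte segments against a 64 KB window estimate
rx = RSA3Receiver()
acks = sum(rx.on_segment(9180) for _ in range(20))
print("ACKs sent for 20 segments:", acks)
```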

  17. Record-high specific conductance and temperature in San Francisco Bay during water year 2014

    USGS Publications Warehouse

    Downing-Kunz, Maureen; Work, Paul; Shellenbarger, Gregory

    2015-11-18

    In water year (WY) 2014 (October 1, 2013, through September 30, 2014), our network measured record-high values of specific conductance and water temperature at several stations during a period of very little freshwater inflow from the Sacramento–San Joaquin Delta and other tributaries because of severe drought conditions in California. This report summarizes our observations for WY2014 and compares them to previous years that had different levels of freshwater inflow.

  18. A microbial-mineralization approach for syntheses of iron oxides with a high specific surface area.

    PubMed

    Yagita, Naoki; Oaki, Yuya; Imai, Hiroaki

    2013-04-01

    Of minerals and microbes: A microbial-mineralization-inspired approach was used to facilitate the syntheses of iron oxides with a high specific surface area, such as 253 m²g⁻¹ for maghemite (γ-Fe₂O₃) and 148 m²g⁻¹ for hematite (α-Fe₂O₃). These iron oxides can be applied as electrode materials of lithium-ion batteries, adsorbents, and catalysts.

  19. High specificity of a novel Zika virus ELISA in European patients after exposure to different flaviviruses.

    PubMed

    Huzly, Daniela; Hanselmann, Ingeborg; Schmidt-Chanasit, Jonas; Panning, Marcus

    2016-04-21

    The current Zika virus (ZIKV) epidemic in the Americas caused an increase in diagnostic requests in European countries. Here we demonstrate high specificity of the Euroimmun anti-ZIKV IgG and IgM ELISA tests using putative cross-reacting sera of European patients with antibodies against tick-borne encephalitis virus, dengue virus, yellow fever virus and hepatitis C virus. This test may aid in counselling European travellers returning from regions where ZIKV is endemic.

  20. Theory of specific heat of vortex liquid of high T c superconductors

    NASA Astrophysics Data System (ADS)

    Bai, Chen; Chi, Cheng; Wang, Jiangfan

    2016-10-01

    Superconducting thermal fluctuation (STF) plays an important role in both thermodynamic and transport properties in the vortex liquid phase of high T c superconductors. It was widely observed in the vicinity of the critical transition temperature. In the framework of Ginzburg-Landau-Lawrence-Doniach theory in magnetic field, a self-consistent analysis of STF including all Landau levels is given. Besides that, we calculate the contribution of STF to specific heat in vortex liquid phase for high T c cuprate superconductors, and the fitting results are in good agreement with experimental data. Project supported by the National Natural Science Foundation of China (Grant No. 11274018).

  1. Production of 191Pt radiotracer with high specific activity for the development of preconcentration procedures

    NASA Astrophysics Data System (ADS)

    Parent, M.; Strijckmans, K.; Cornelis, R.; Dewaele, J.; Dams, R.

    1994-04-01

    A radiotracer of Pt with suitable nuclear characteristics and high specific activity (i.e. activity to mass ratio) is a powerful tool when developing preconcentration methods for the determination of base-line levels of Pt in e.g. environmental and biological samples. Two methods were developed for the production of 191Pt with high specific activity and radionuclidic purity: (1) via the 190Pt(n, γ) 191Pt reaction by neutron irradiation of enriched Pt in a nuclear reactor at a high neutron fluence rate and (2) via the 191Ir(p, n) 191Pt reaction by proton irradiation of natural Ir with a cyclotron, at an experimentally optimized proton energy. For the latter method it was necessary to separate Pt from the Ir matrix. For that reason, either liquid-liquid extraction with dithizone or adsorption chromatography was used. The yields, the specific activities and the radionuclidic purities were experimentally determined as a function of the proton energy and compared to the former method. The half-life of 191Pt was accurately determined to be 2.802 ± 0.025 d.

  2. Pyrosequencing reveals highly diverse and species-specific microbial communities in sponges from the Red Sea.

    PubMed

    Lee, On On; Wang, Yong; Yang, Jiangke; Lafi, Feras F; Al-Suwailem, Abdulaziz; Qian, Pei-Yuan

    2011-04-01

    Marine sponges are associated with a remarkable array of microorganisms. Using a tag pyrosequencing technology, this study was the first to investigate in depth the microbial communities associated with three Red Sea sponges, Hyrtios erectus, Stylissa carteri and Xestospongia testudinaria. We revealed highly diverse sponge-associated bacterial communities with up to 1000 microbial operational taxonomic units (OTUs) and richness estimates of up to 2000 species. Altogether, 26 bacterial phyla were detected from the Red Sea sponges, 11 of which were absent from the surrounding sea water and 4 were recorded in sponges for the first time. Up to 100 OTUs with richness estimates of up to 300 archaeal species were revealed from a single sponge species. This is by far the highest archaeal diversity ever recorded for sponges. A non-negligible proportion of unclassified reads was observed in sponges. Our results demonstrated that the sponge-associated microbial communities remained highly consistent in the same sponge species from different locations, although they varied at different degrees among different sponge species. A significant proportion of the tag sequences from the sponges could be assigned to one of the sponge-specific clusters previously defined. In addition, the sponge-associated microbial communities were consistently divergent from those present in the surrounding sea water. Our results suggest that the Red Sea sponges possess highly sponge-specific or even sponge-species-specific microbial communities that are resistant to environmental disturbance, and much of their microbial diversity remains to be explored. PMID:21085196

  3. Highly specific SNP detection using 2D graphene electronics and DNA strand displacement

    PubMed Central

    Hwang, Michael T.; Landon, Preston B.; Lee, Joon; Choi, Duyoung; Mo, Alexander H.; Glinsky, Gennadi; Lal, Ratnesh

    2016-01-01

    Single-nucleotide polymorphisms (SNPs) in a gene sequence are markers for a variety of human diseases. Detection of SNPs with high specificity and sensitivity is essential for effective practical implementation of personalized medicine. Current DNA sequencing, including SNP detection, primarily uses enzyme-based methods or fluorophore-labeled assays that are time-consuming, need laboratory-scale settings, and are expensive. Previously reported electrical charge-based SNP detectors have insufficient specificity and accuracy, limiting their effectiveness. Here, we demonstrate the use of a DNA strand displacement-based probe on a graphene field effect transistor (FET) for high-specificity, single-nucleotide mismatch detection. The single mismatch was detected by measuring strand displacement-induced resistance (and hence current) change and Dirac point shift in a graphene FET. SNP detection in large double-helix DNA strands (e.g., 47 nt) minimizes false-positive results. Our electrical sensor-based SNP detection technology, without labeling and without apparent cross-hybridization artifacts, would allow fast, sensitive, and portable SNP detection with single-nucleotide resolution. The technology will have a wide range of applications in digital and implantable biosensors and high-throughput DNA genotyping, with transformative implications for personalized medicine. PMID:27298347

  4. Highly specific SNP detection using 2D graphene electronics and DNA strand displacement.

    PubMed

    Hwang, Michael T; Landon, Preston B; Lee, Joon; Choi, Duyoung; Mo, Alexander H; Glinsky, Gennadi; Lal, Ratnesh

    2016-06-28

    Single-nucleotide polymorphisms (SNPs) in a gene sequence are markers for a variety of human diseases. Detection of SNPs with high specificity and sensitivity is essential for effective practical implementation of personalized medicine. Current DNA sequencing, including SNP detection, primarily uses enzyme-based methods or fluorophore-labeled assays that are time-consuming, need laboratory-scale settings, and are expensive. Previously reported electrical charge-based SNP detectors have insufficient specificity and accuracy, limiting their effectiveness. Here, we demonstrate the use of a DNA strand displacement-based probe on a graphene field effect transistor (FET) for high-specificity, single-nucleotide mismatch detection. The single mismatch was detected by measuring strand displacement-induced resistance (and hence current) change and Dirac point shift in a graphene FET. SNP detection in large double-helix DNA strands (e.g., 47 nt) minimizes false-positive results. Our electrical sensor-based SNP detection technology, without labeling and without apparent cross-hybridization artifacts, would allow fast, sensitive, and portable SNP detection with single-nucleotide resolution. The technology will have a wide range of applications in digital and implantable biosensors and high-throughput DNA genotyping, with transformative implications for personalized medicine.

  5. Highly specific SNP detection using 2D graphene electronics and DNA strand displacement.

    PubMed

    Hwang, Michael T; Landon, Preston B; Lee, Joon; Choi, Duyoung; Mo, Alexander H; Glinsky, Gennadi; Lal, Ratnesh

    2016-06-28

    Single-nucleotide polymorphisms (SNPs) in a gene sequence are markers for a variety of human diseases. Detection of SNPs with high specificity and sensitivity is essential for effective practical implementation of personalized medicine. Current DNA sequencing, including SNP detection, primarily uses enzyme-based methods or fluorophore-labeled assays that are time-consuming, need laboratory-scale settings, and are expensive. Previously reported electrical charge-based SNP detectors have insufficient specificity and accuracy, limiting their effectiveness. Here, we demonstrate the use of a DNA strand displacement-based probe on a graphene field effect transistor (FET) for high-specificity, single-nucleotide mismatch detection. The single mismatch was detected by measuring strand displacement-induced resistance (and hence current) change and Dirac point shift in a graphene FET. SNP detection in large double-helix DNA strands (e.g., 47 nt) minimizes false-positive results. Our electrical sensor-based SNP detection technology, without labeling and without apparent cross-hybridization artifacts, would allow fast, sensitive, and portable SNP detection with single-nucleotide resolution. The technology will have a wide range of applications in digital and implantable biosensors and high-throughput DNA genotyping, with transformative implications for personalized medicine. PMID:27298347

  6. Fabrication of high specificity hollow mesoporous silica nanoparticles assisted by Eudragit for targeted drug delivery.

    PubMed

    She, Xiaodong; Chen, Lijue; Velleman, Leonora; Li, Chengpeng; Zhu, Haijin; He, Canzhong; Wang, Tao; Shigdar, Sarah; Duan, Wei; Kong, Lingxue

    2015-05-01

    Hollow mesoporous silica nanoparticles (HMSNs) are one of the most promising carriers for effective drug delivery due to their large surface area, high volume for drug loading and excellent biocompatibility. However, the non-ionic surfactant templated HMSNs often have a broad size distribution and a defective mesoporous structure because of the difficulties involved in controlling the formation and organization of micelles for the growth of silica framework. In this paper, a novel "Eudragit assisted" strategy has been developed to fabricate HMSNs by utilising the Eudragit nanoparticles as cores and to assist in the self-assembly of micelle organisation. Highly dispersed mesoporous silica spheres with intact hollow interiors and through pores on the shell were fabricated. The HMSNs have a high surface area (670 m²/g), small diameter (120 nm) and uniform pore size (2.5 nm) that facilitated the effective encapsulation of 5-fluorouracil within HMSNs, achieving a high loading capacity of 194.5 mg (5-FU) per g (HMSNs). The HMSNs were non-cytotoxic to colorectal cancer cells SW480 and can be bioconjugated with Epidermal Growth Factor (EGF) for efficient and specific cell internalization. The high specificity and excellent targeting performance of EGF grafted HMSNs have demonstrated that they can become potential intracellular drug delivery vehicles for colorectal cancers via EGF-EGFR interaction. PMID:25617610

  7. High Sensitivity and High Detection Specificity of Gold-Nanoparticle-Grafted Nanostructured Silicon Mass Spectrometry for Glucose Analysis.

    PubMed

    Tsao, Chia-Wen; Yang, Zhi-Jie

    2015-10-14

    Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.

  8. General anthropometric and specific physical fitness profile of high-level junior water polo players.

    PubMed

    Kondrič, Miran; Uljević, Ognjen; Gabrilo, Goran; Kontić, Dean; Sekulić, Damir

    2012-05-01

    The aim of this study was to investigate the status and playing position differences in anthropometric measures and specific physical fitness in high-level junior water polo players. The sample of subjects comprised 110 water polo players (17 to 18 years of age), including one of the world's best national junior teams for 2010. The subjects were divided according to their playing positions into: Centers (N = 16), Wings (N = 28), perimeter players (Drivers; N = 25), Points (N = 19), and Goalkeepers (N = 18). The variables included body height, body weight, body mass index, arm span, triceps- and subscapular-skinfold. Specific physical fitness tests comprised: four swimming tests, namely: 25m, 100m, 400m and a specific anaerobic 4x50m test (average result achieved in four 50m sprints with a 30 sec pause), vertical body jump (JUMP; maximal vertical jump from the water starting from a water polo defensive position) and a dynamometric power achieved in front crawl swimming (DYN). ANOVA with post-hoc comparison revealed significant differences between positions for most of the anthropometrics, noting that the Centers were the heaviest and had the highest BMI and subscapular skinfold. The Points achieved the best results in most of the swimming capacities and JUMP test. No significant group differences were found for the 100m and 4x50m tests. The Goalkeepers achieved the lowest results for DYN. Given the representativeness of the sample of subjects, the results of this study allow specific insights into the physical fitness and anthropometric features of high-level junior water polo players and allow coaches to design a specific training program aimed at achieving the physical fitness results presented for each playing position. PMID:23487152

  9. General Anthropometric and Specific Physical Fitness Profile of High-Level Junior Water Polo Players

    PubMed Central

    Kondrič, Miran; Uljević, Ognjen; Gabrilo, Goran; Kontić, Dean; Sekulić, Damir

    2012-01-01

    The aim of this study was to investigate the status and playing position differences in anthropometric measures and specific physical fitness in high-level junior water polo players. The sample of subjects comprised 110 water polo players (17 to 18 years of age), including one of the world’s best national junior teams for 2010. The subjects were divided according to their playing positions into: Centers (N = 16), Wings (N = 28), perimeter players (Drivers; N = 25), Points (N = 19), and Goalkeepers (N = 18). The variables included body height, body weight, body mass index, arm span, triceps- and subscapular-skinfold. Specific physical fitness tests comprised: four swimming tests, namely: 25m, 100m, 400m and a specific anaerobic 4x50m test (average result achieved in four 50m sprints with a 30 sec pause), vertical body jump (JUMP; maximal vertical jump from the water starting from a water polo defensive position) and a dynamometric power achieved in front crawl swimming (DYN). ANOVA with post-hoc comparison revealed significant differences between positions for most of the anthropometrics, noting that the Centers were the heaviest and had the highest BMI and subscapular skinfold. The Points achieved the best results in most of the swimming capacities and JUMP test. No significant group differences were found for the 100m and 4x50m tests. The Goalkeepers achieved the lowest results for DYN. Given the representativeness of the sample of subjects, the results of this study allow specific insights into the physical fitness and anthropometric features of high-level junior water polo players and allow coaches to design a specific training program aimed at achieving the physical fitness results presented for each playing position. PMID:23487152

  10. Fluorine-18-N-methylspiroperidol: radiolytic decomposition as a consequence of high specific activity and high dose levels

    SciTech Connect

    MacGregor, R.R.; Schlyer, D.J.; Fowler, J.S.; Wolf, A.P.; Shiue, C.Y.

    1987-01-01

    High specific activity (18F)N-methylspiroperidol (8-(4-(4-(18F)fluorophenyl)-4-oxobutyl)-3-methyl-1-phenyl-1,3,8-triazaspiro(4.5)decan-4-one, 5-10 mCi/ml, 4-8 Ci/μmol at EOB) in saline solution undergoes significant radiolytic decomposition resulting in a decrease in radiochemical purity of 10-25% during the first hour. The rate of decomposition is affected by the specific activity, the total dose to, and the chemical composition of the solution. That radiolysis is responsible for the observed decomposition was verified by the observation that unlabeled N-methylspiroperidol is decomposed in the presence of (18F)fluoride.

  11. Sample phenotype clusters in high-density oligonucleotide microarray data sets are revealed using Isomap, a nonlinear algorithm

    PubMed Central

    Dawson, Kevin; Rodriguez, Raymond L; Malyj, Wasyl

    2005-01-01

    Background Life processes are determined by the organism's genetic profile and multiple environmental variables. However the interaction between these factors is inherently non-linear [1]. Microarray data is one representation of the nonlinear interactions among genes and genes and environmental factors. Still most microarray studies use linear methods for the interpretation of nonlinear data. In this study, we apply Isomap, a nonlinear method of dimensionality reduction, to analyze three independent large Affymetrix high-density oligonucleotide microarray data sets. Results Isomap discovered low-dimensional structures embedded in the Affymetrix microarray data sets. These structures correspond to and help to interpret biological phenomena present in the data. This analysis provides examples of temporal, spatial, and functional processes revealed by the Isomap algorithm. In a spinal cord injury data set, Isomap discovers the three main modalities of the experiment – location and severity of the injury and the time elapsed after the injury. In a multiple tissue data set, Isomap discovers a low-dimensional structure that corresponds to anatomical locations of the source tissues. This model is capable of describing low- and high-resolution differences in the same model, such as kidney-vs.-brain and differences between the nuclei of the amygdala, respectively. In a high-throughput drug screening data set, Isomap discovers the monocytic and granulocytic differentiation of myeloid cells and maps several chemical compounds on the two-dimensional model. Conclusion Visualization of Isomap models provides useful tools for exploratory analysis of microarray data sets. In most instances, Isomap models explain more of the variance present in the microarray data than PCA or MDS. Finally, Isomap is a promising new algorithm for class discovery and class prediction in high-density oligonucleotide data sets. PMID:16076401
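
    A minimal sketch of the kind of analysis described above, assuming scikit-learn and a samples-by-genes expression matrix; the random placeholder data and the neighborhood and dimensionality settings are illustrative choices, not the study's parameters.

      # Minimal sketch: nonlinear embedding of microarray samples with Isomap.
      # `expression` is an (n_samples, n_genes) array; parameter values are illustrative.
      import numpy as np
      from sklearn.manifold import Isomap

      rng = np.random.default_rng(0)
      expression = rng.normal(size=(60, 5000))     # placeholder for real, normalized expression data

      embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(expression)
      # `embedding` has shape (60, 2); plotting it and coloring points by sample
      # phenotype reveals the kind of low-dimensional structure discussed above.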

  12. Development of high-specificity antibodies against renal urate transporters using genetic immunization.

    PubMed

    Xu, Guoshuang; Chen, Xiangmei; Wu, Di; Shi, Suozhu; Wang, Jianzhong; Ding, Rui; Hong, Quan; Feng, Zhe; Lin, Shupeng; Lu, Yang

    2006-11-30

    Recently three proteins playing central roles in the bidirectional transport of urate in renal proximal tubules were identified: two members of the organic anion transporter (OAT) family, OAT1 and OAT3, and a protein designated renal urate-anion exchanger (URAT1). Antibodies against these transporters are very important for investigating their expression and function. With a cytokine gene as a molecular adjuvant, genetic immunization-based antibody production offers several advantages over current methods, including high specificity and high recognition of the native protein. We separately fused high-antigenicity fragments of the three transporters to the plasmid pBQAP-TT, which contains T-cell epitopes and flanking regions from tetanus toxin. Gene gun immunization with these recombinant plasmids and two other adjuvant plasmids, which express granulocyte/macrophage colony-stimulating factor and FMS-like tyrosine kinase 3 ligand, induced high-level immunoglobulin G antibodies against each target. The native corresponding proteins of URAT1, OAT1 and OAT3 in human kidney were recognized by their specific antibodies in Western blot analysis and immunohistochemistry. In addition, URAT1 expression in Xenopus oocytes could also be recognized by its corresponding antibody with immunofluorescence. The successful production of the antibodies has provided an important tool for the study of UA transporters. PMID:17129404

  13. Unusually high frequencies of HIV-specific cytotoxic T lymphocytes in humans.

    PubMed

    Hoffenbach, A; Langlade-Demoyen, P; Dadaglio, G; Vilmer, E; Michel, F; Mayaud, C; Autran, B; Plata, F

    1989-01-15

    CTL specific for the HIV belong to the CD8 subset of T lymphocytes, and their activity is restricted by class I HLA transplantation Ag. In this report, HIV-specific CTL and their precursor cells were quantified by limiting dilution analysis. CTL were recovered from the lungs, lymph nodes, and blood of asymptomatic seropositive carriers and of patients with AIDS. HIV was found to be very immunogenic. High frequencies of both HIV-specific CTL and CTL precursor cells were detected in infected individuals. These CTL killed autologous HIV-infected macrophages and T4 lymphoblasts. They also killed doubly transfected P815-A2-env-LAV mouse tumor cells, which express the human HLA-A2 gene and the HIV-1 env gene. In the longitudinal studies of two HIV-infected patients, CTL and CTL precursor cell frequencies decreased as the clinical and immunologic status of the patients deteriorated. Most surprisingly, PBL from seronegative donors also responded to HIV stimulation in vitro and generated large numbers of HLA-restricted, HIV-specific CTL.

  14. Maltodextrin-based imaging probes detect bacteria in vivo with high sensitivity and specificity

    NASA Astrophysics Data System (ADS)

    Ning, Xinghai; Lee, Seungjun; Wang, Zhirui; Kim, Dongin; Stubblefield, Bryan; Gilbert, Eric; Murthy, Niren

    2011-08-01

    The diagnosis of bacterial infections remains a major challenge in medicine. Although numerous contrast agents have been developed to image bacteria, their clinical impact has been minimal because they are unable to detect small numbers of bacteria in vivo, and cannot distinguish infections from other pathologies such as cancer and inflammation. Here, we present a family of contrast agents, termed maltodextrin-based imaging probes (MDPs), which can detect bacteria in vivo with a sensitivity two orders of magnitude higher than previously reported, and can detect bacteria using a bacteria-specific mechanism that is independent of host response and secondary pathologies. MDPs are composed of a fluorescent dye conjugated to maltohexaose, and are rapidly internalized through the bacteria-specific maltodextrin transport pathway, endowing the MDPs with a unique combination of high sensitivity and specificity for bacteria. Here, we show that MDPs selectively accumulate within bacteria at millimolar concentrations, and are a thousand-fold more specific for bacteria than mammalian cells. Furthermore, we demonstrate that MDPs can image as few as 10⁵ colony-forming units in vivo and can discriminate between active bacteria and inflammation induced by either lipopolysaccharides or metabolically inactive bacteria.

  15. Quantifying domain-ligand affinities and specificities by high-throughput holdup assay

    PubMed Central

    Vincentelli, Renaud; Luck, Katja; Poirson, Juline; Polanowska, Jolanta; Abdat, Julie; Blémont, Marilyne; Turchetto, Jeremy; Iv, François; Ricquier, Kevin; Straub, Marie-Laure; Forster, Anne; Cassonnet, Patricia; Borg, Jean-Paul; Jacob, Yves; Masson, Murielle; Nominé, Yves; Reboul, Jérôme; Wolff, Nicolas; Charbonnier, Sebastian; Travé, Gilles

    2015-01-01

    Many protein interactions are mediated by small linear motifs interacting specifically with defined families of globular domains. Quantifying the specificity of a motif requires measuring and comparing its binding affinities to all its putative target domains. To this aim, we developed the high-throughput holdup assay, a chromatographic approach that can measure up to a thousand domain-motif equilibrium binding affinities per day. Extracts of overexpressed domains are incubated with peptide-coated resins and subjected to filtration. Binding affinities are deduced from microfluidic capillary electrophoresis of flow-throughs. After benchmarking the approach on 210 PDZ-peptide pairs with known affinities, we determined the affinities of two viral PDZ-binding motifs derived from Human Papillomavirus E6 oncoproteins for 209 PDZ domains covering 79% of the human PDZome. We obtained exquisite sequence-dependent binding profiles, describing quantitatively the PDZome recognition specificity of each motif. This approach, applicable to many categories of domain-ligand interactions, has a wide potential for quantifying the specificities of interactomes. PMID:26053890
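
    As an illustration of the quantity such a depletion-based assay measures, the LaTeX relations below express a binding intensity as the fractional depletion of the flow-through relative to a peptide-free control, and the dissociation constant that follows when the resin-bound peptide is in large excess over the domain. This is a generic formulation given for orientation, not necessarily the authors' exact estimator.

      % Generic depletion relations for a holdup-type experiment; BI denotes the
      % fraction of domain retained by the peptide resin, I the measured intensity
      % of the domain in the flow-through, and [peptide] the peptide concentration,
      % assumed to be in large excess over the domain.
      \begin{equation}
        \mathrm{BI} = 1 - \frac{I_{\text{flow-through}}}{I_{\text{control}}},
        \qquad
        K_{\mathrm{d}} \approx [\text{peptide}]\,\frac{1-\mathrm{BI}}{\mathrm{BI}}.
      \end{equation}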

  16. A 3D High-Order Unstructured Finite-Volume Algorithm for Solving Maxwell's Equations

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Kwak, Dochan (Technical Monitor)

    1995-01-01

    A three-dimensional finite-volume algorithm based on arbitrary basis functions for time-dependent problems on general unstructured grids is developed. The method is applied to the time-domain Maxwell equations. Discrete unknowns are volume integrals or cell averages of the electric and magnetic field variables. Spatial terms are converted to surface integrals using the Gauss curl theorem. Polynomial basis functions are introduced in constructing local representations of the fields and evaluating the volume and surface integrals. Electric and magnetic fields are approximated by linear combinations of these basis functions. Unlike other unstructured formulations used in Computational Fluid Dynamics, the new formulation actually does not reconstruct the field variables at each time step. Instead, the spatial terms are calculated in terms of unknowns by precomputing weights at the beginning of the computation as functions of cell geometry and basis functions to retain efficiency. Since no assumption is made for cell geometry, this new formulation is suitable for arbitrarily defined grids, either smooth or unsmooth. However, to facilitate the volume and surface integrations, arbitrary polyhedral cells with polygonal faces are used in constructing grids. Both centered and upwind schemes are formulated. It is shown that conventional schemes (second order in Cartesian grids) are equivalent to the new schemes using first degree polynomials as the basis functions and the midpoint quadrature for the integrations. In the new formulation, higher orders of accuracy are achieved by using higher degree polynomial basis functions. Furthermore, all the surface and volume integrations are carried out exactly. Several model electromagnetic scattering problems are calculated and compared with analytical solutions. Examples are given for cases based on 0th to 3rd degree polynomial basis functions. In all calculations, a centered scheme is applied in the interior, while an upwind scheme is applied near the outer boundary.
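
    A schematic statement, in LaTeX, of the semi-discrete form implied by the abstract: applying the Gauss curl theorem turns the curls in Maxwell's equations into surface integrals over each polyhedral cell boundary, so the cell averages of the fields evolve according to relations of the following type. This is written for linear, isotropic media as an orientation; the exact discrete operators and precomputed basis-function weights are those developed in the paper.

      % Schematic semi-discrete relations implied by the abstract (linear, isotropic media).
      \begin{align}
        \frac{\mathrm{d}}{\mathrm{d}t}\int_{V_i}\mathbf{B}\,\mathrm{d}V
          &= -\oint_{\partial V_i}\mathbf{n}\times\mathbf{E}\,\mathrm{d}S,\\
        \varepsilon\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{V_i}\mathbf{E}\,\mathrm{d}V
          &= \frac{1}{\mu}\oint_{\partial V_i}\mathbf{n}\times\mathbf{B}\,\mathrm{d}S
             -\int_{V_i}\mathbf{J}\,\mathrm{d}V.
      \end{align}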

  17. High-Resolution Specificity from DNA Sequencing Highlights Alternative Modes of Lac Repressor Binding

    PubMed Central

    Zuo, Zheng; Stormo, Gary D.

    2014-01-01

    Knowing the specificity of transcription factors is critical to understanding regulatory networks in cells. The lac repressor–operator system has been studied for many years, but not with high-throughput methods capable of determining specificity comprehensively. Details of its binding interaction and its selection of an asymmetric binding site have been controversial. We employed a new method to accurately determine relative binding affinities to thousands of sequences simultaneously, requiring only sequencing of bound and unbound fractions. An analysis of 2560 different DNA sequence variants, including both base changes and variations in operator length, provides a detailed view of lac repressor sequence specificity. We find that the protein can bind with nearly equal affinities to operators of three different lengths, but the sequence preference changes depending on the length, demonstrating alternative modes of interaction between the protein and DNA. The wild-type operator has an odd length, causing the two monomers to bind in alternative modes, making the asymmetric operator the preferred binding site. We tested two other members of the LacI/GalR protein family and find that neither can bind with high affinity to sites with alternative lengths or shows evidence of alternative binding modes. A further comparison with known and predicted motifs suggests that the lac repressor may be unique in this ability and that this may contribute to its selection. PMID:25209146
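
    A generic estimator of the kind implied by sequencing the bound and unbound fractions is sketched below in LaTeX; the normalization to a reference sequence r is an illustrative convention and not necessarily the exact formulation used in the study.

      % Generic read-count estimator for relative affinities from bound (B) and
      % unbound (U) sequencing fractions, normalized to a reference sequence r.
      \begin{equation}
        K_{\mathrm{rel}}(i) \;\propto\;
        \frac{n_{B}(i)/n_{U}(i)}{n_{B}(r)/n_{U}(r)}.
      \end{equation}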

  18. How to produce high specific activity tin-117m using alpha particle beam.

    PubMed

    Duchemin, C; Essayan, M; Guertin, A; Haddad, F; Michel, N; Métivier, V

    2016-09-01

    Tin-117m is an interesting radionuclide for both diagnosis and therapy, thanks to the gamma-ray and electron emissions, respectively, resulting from its decay to tin-117g. A high specific activity of tin-117m is required in many medical applications, and it can be obtained using a high-energy alpha particle beam and a cadmium target. The experiments performed at the ARRONAX cyclotron (Nantes, France) using an alpha particle beam delivered at 67.4 MeV provide a measurement of the excitation function of the Cd-nat(α,x)Sn-117m reaction and of the produced contaminants. The Cd-116(α,3n)Sn-117m production cross section has been deduced from these experimental results using natural cadmium. Both the production yield and the specific activity as a function of the projectile energy have been calculated. This information helps to optimize the irradiation conditions to produce tin-117m with the required specific activity using α particles on a cadmium target.
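
    For context, the standard thick-target relation linking a measured excitation function to the production rate is sketched below in LaTeX. The symbols follow common usage (NA is Avogadro's number, H the abundance of the target isotope, M its molar mass, I/(ze) the incident particle flux, sigma(E) the excitation function and dE/d(ρx) the stopping power of cadmium); this is a textbook relation given for orientation rather than an expression quoted from the paper.

      % Textbook thick-target production-rate relation (illustrative).
      \begin{equation}
        R = \frac{N_A\,H}{M}\,\frac{I}{z e}
            \int_{E_{\mathrm{out}}}^{E_{\mathrm{in}}}
            \frac{\sigma(E)}{\mathrm{d}E/\mathrm{d}(\rho x)}\,\mathrm{d}E.
      \end{equation}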

  19. Analytical evaluation of the impact of broad specification fuels on high bypass turbofan engine combustors

    NASA Technical Reports Server (NTRS)

    Taylor, J. R.

    1979-01-01

    Six conceptual combustor designs for the CF6-50 high bypass turbofan engine and six conceptual combustor designs for the NASA/GE E3 high bypass turbofan engine were analyzed to provide an assessment of the major problems anticipated in using broad specification fuels in these aircraft engine combustion systems. Each of the conceptual combustor designs, which are representative of both state-of-the-art and advanced state-of-the-art combustion systems, was analyzed to estimate combustor performance, durability, and pollutant emissions when using commercial Jet A aviation fuel and when using an experimental referee broad-specification fuel. Results indicate that lean-burning, low-emissions double annular combustor concepts can accommodate a wide range of fuel properties without a serious deterioration of performance or durability. However, rich-burning, single annular concepts would be less tolerant to a relaxation of fuel properties. As the fuel specifications are relaxed, the autoignition delay time becomes much smaller, which presents a serious design and development problem for premixing-prevaporizing combustion system concepts.

  20. Microarrays for high-throughput genotyping of MICA alleles using allele-specific primer extension.

    PubMed

    Baek, I C; Jang, J-P; Choi, H-B; Choi, E-J; Ko, W-Y; Kim, T-G

    2013-10-01

    The role of major histocompatibility complex (MHC) class I chain-related gene A (MICA), a ligand of NKG2D, has been defined in human diseases by its allele associations with various autoimmune diseases, hematopoietic stem cell transplantation (HSCT) and cancer. This study describes a practical system for MICA genotyping by allele-specific primer extension (ASPE) on microarrays. From the results of 20 control primers, strict and reliable cut-off values of more than 30,000 mean fluorescence intensity (MFI) as positive and less than 3,000 MFI as negative were applied to select high-quality specific extension primers. Among 55 allele-specific primers, 44 could be initially selected as optimal primers. Through adjustment of their length, six further primers were improved. The remaining five failed primers were corrected by refractory modification. MICA genotypes obtained by ASPE on microarrays showed the same results as those obtained by nucleotide sequencing. On the basis of these results, ASPE on microarrays may provide high-throughput genotyping of MICA alleles for population studies, disease-gene associations and HSCT.
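
    The Python sketch below applies the MFI cut-offs quoted above to classify individual allele-specific primer signals; the constant and function names are illustrative and not taken from the study.

      # Minimal sketch (names are assumptions): applying the MFI cut-offs described
      # above to classify allele-specific primer extension signals on a microarray.
      POSITIVE_CUTOFF = 30000   # mean fluorescence intensity regarded as positive
      NEGATIVE_CUTOFF = 3000    # mean fluorescence intensity regarded as negative

      def classify_signal(mfi):
          """Classify one probe signal as 'positive', 'negative', or 'indeterminate'."""
          if mfi > POSITIVE_CUTOFF:
              return "positive"
          if mfi < NEGATIVE_CUTOFF:
              return "negative"
          return "indeterminate"   # intermediate signals call for primer redesign or repeat testing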

  1. Preparation and quantification of 3'-phosphoadenosine 5'-phospho(35S)sulfate with high specific activity

    SciTech Connect

    Vargas, F.

    1988-07-01

    The synthesis and quantitation of the sulfate donor 3'-phosphoadenosine 5'-phospho(35S)sulfate (PAP35S), prepared from inorganic (35S)sulfate and ATP, were studied. An enzymatic transfer method based upon the quantitative transfer of (35S)sulfate from PAP35S to 2-naphthol and 4-methylumbelliferone by the action of phenolsulfotransferase activity from rat brain cytosol was also developed. The 2-naphthyl(35S)sulfate or 35S-methylumbelliferone sulfate formed was isolated by polystyrene bead chromatography. This method allows the detection of between 0.1 pmol and 1 nmol/ml of PAP35S. PAP35S of high specific activity (75 Ci/mmol) was prepared by incubating ATP and carrier-free Na2 35SO4 with a 100,000g supernatant fraction from rat spleen. The product was purified by ion-exchange chromatography. The specific activity and purity of PAP35S were estimated by examining the ratios of Km values for PAP35S of the tyrosyl protein sulfotransferase present in microsomes from rat cerebral cortex. The advantage and applications of these methods for the detection of femtomole amounts, and the synthesis of large scale quantities of PAP35S with high specific activity are discussed.

  2. High spectral specificity of local chemical components characterization with multichannel shift-excitation Raman spectroscopy

    PubMed Central

    Chen, Kun; Wu, Tao; Wei, Haoyun; Wu, Xuejian; Li, Yan

    2015-01-01

    Raman spectroscopy has emerged as a promising tool for its noninvasive and nondestructive characterization of local chemical structures. However, spectrally overlapping components prevent the specific identification of hyperfine molecular information of different substances, because of limitations in the spectral resolving power. The challenge is to find a way of preserving scattered photons and retrieving hidden/buried Raman signatures to take full advantage of its chemical specificity. Here, we demonstrate a multichannel acquisition framework based on shift-excitation and slit-modulation, followed by mathematical post-processing, which enables a significant improvement in the spectral specificity of Raman characterization. The present technique, termed shift-excitation blind super-resolution Raman spectroscopy (SEBSR), uses multiple degraded spectra to beat the dispersion-loss trade-off and facilitate high-resolution applications. It overcomes a fundamental problem that has previously plagued high-resolution Raman spectroscopy: fine spectral resolution requires large dispersion, which is accompanied by extreme optical loss. Applicability is demonstrated by the perfect recovery of the fine structure of the C-Cl bending mode as well as the clear discrimination of different polymorphs of mannitol. Due to its enhanced discrimination capability, this method offers a feasible route to a broader range of applications in analytical chemistry, materials science and biomedicine. PMID:26350355

  3. Design and component specifications for high average power laser optical systems

    SciTech Connect

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs.

  4. The effects of varying resistance-training loads on intermediate- and high-velocity-specific adaptations.

    PubMed

    Jones, K; Bishop, P; Hunter, G; Fleisig, G

    2001-08-01

    The purpose of this study was to compare changes in velocity-specific adaptations in moderately resistance-trained athletes who trained with either low or high resistances. The study used tests of sport-specific skills across an intermediate- to high-velocity spectrum. Thirty NCAA Division I baseball players were randomly assigned to either a low-resistance (40-60% 1 repetition maximum [1RM]) training group or a high-resistance (70-90% 1RM) training group. Both of the training groups intended to maximally accelerate each repetition during the concentric phase (IMCA). The 10 weeks of training consisted of 4 training sessions a week using basic core exercises. Peak force, velocity, and power were evaluated during set angle and depth jumps as well as weighted jumps using 30 and 50% 1RM. Squat 1RMs were also tested. Although no interactions for any of the jump tests were found, trends supported the hypothesis of velocity-specific training. Percentage gains suggest that the combined use of heavier training loads (70-90% 1RM) and IMCA tends to increase peak force in the lower-body leg and hip extensors. Trends also show that the combined use of lighter training loads (40-60% 1RM) and IMCA tends to increase peak power and peak velocity in the lower-body leg and hip extensors. The high-resistance group improved squats more than the low-resistance group (p < 0.05; +22.7 vs. +16.1 kg). The results of this study support the use of a combination of heavier training loads and IMCA to increase 1RM strength in the lower bodies of resistance-trained athletes.

  5. Segmentation of cervical cell nuclei in high-resolution microscopic images: A new algorithm and a web-based software framework.

    PubMed

    Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel

    2012-09-01

    In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions often are easy to segment, the problem lies in the segmentation of large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of the complete microscope slides. We implemented a system that enables processing of full resolution images, and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before it determines the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm is tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
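
    A much-simplified sketch of such a pipeline using scikit-image is given below; it is not the authors' implementation. The mean-shift stage is omitted (a median filter stands in for denoising), a standard Hough ellipse transform stands in for the randomized variant, and a morphological geodesic active contour stands in for the level-set refinement; all parameter values are illustrative.

      # Minimal sketch (not the authors' implementation) of a nucleus-segmentation
      # pipeline: denoise, extract edges, find elliptical candidates, refine contours.
      import numpy as np
      from skimage import filters, feature, transform, segmentation, morphology

      def segment_nuclei(image):
          """Rough nucleus segmentation of a grayscale image with values in [0, 1]."""
          smoothed = filters.median(image, morphology.disk(3))       # denoising stage
          edges = feature.canny(smoothed, sigma=2.0)                 # edge extraction
          # Candidate nuclei: roughly elliptical edge configurations.
          candidates = transform.hough_ellipse(edges, accuracy=20, threshold=30,
                                               min_size=10, max_size=60)
          candidates.sort(order='accumulator')
          gimage = segmentation.inverse_gaussian_gradient(smoothed)  # edge-stopping map
          results = []
          for cand in candidates[-5:]:                               # keep strongest candidates
              yc, xc, a, b = (int(round(cand[f])) for f in ('yc', 'xc', 'a', 'b'))
              # Initial level set: a disk around the candidate centre.
              init = np.zeros(image.shape, dtype=np.int8)
              rr, cc = np.ogrid[:image.shape[0], :image.shape[1]]
              init[(rr - yc) ** 2 + (cc - xc) ** 2 <= max(a, b) ** 2] = 1
              nucleus = segmentation.morphological_geodesic_active_contour(gimage, 50, init)
              results.append(nucleus.astype(bool))
          return results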

  6. Estimation of sediment transport with an in-situ acoustic retrieval algorithm in the high-turbidity Changjiang Estuary, China

    NASA Astrophysics Data System (ADS)

    Ge, Jian-zhong; Ding, Ping-xing; Li, Cheng; Fan, Zhong-ya; Shen, Fang; Kong, Ya-zhen

    2015-12-01

    A comprehensive acoustic retrieval algorithm to investigate suspended sediment is presented with the combined validations of Acoustic Doppler Current Profiler (ADCP) and Optical Backscattering Sensor (OBS) monitoring along seven cross-channel sections in the high-turbidity North Passage of the Changjiang Estuary, China. The realistic water conditions, horizontal and vertical salinities, and grain size of the suspended sediment are considered in the retrieval algorithm. Relations between the net volume scattering of sound attenuation (Sv) due to sediments and the ADCP echo intensity (E) were obtained with reasonable accuracy after applying a linear regression method. In the river mouth, intense vertical stratification and horizontal inhomogeneity were found, with a higher concentration of sediment in the North Passage and a lower concentration in the North Channel and South Passage. Additionally, the North Passage is characterized by higher sediment concentration in the middle region and lower concentration in the entrance and outlet areas. The maximum sediment flux rate, which occurred in the middle region, could reach 6.3×10⁵ and 1.5×10⁵ t/h during the spring and neap tides, respectively. Retrieved sediment fluxes in the middle region are significantly larger than those in the upstream and downstream regions. This strong sediment imbalance along the main channel indicates a potential secondary sediment supply from the southern Jiuduansha Shoals.
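
    The Python sketch below illustrates the calibration step described above: fitting a linear relation between the ADCP echo intensity E and the OBS-derived scattering term Sv, then applying it to new echo profiles. Variable and function names are assumptions for illustration, not the study's code.

      # Minimal sketch (names are assumptions): calibrating and applying the linear
      # Sv-versus-echo-intensity relation used in an acoustic sediment retrieval.
      import numpy as np

      def fit_sv_vs_echo(echo_intensity, sv_calibration):
          """Least-squares fit Sv = slope * E + intercept from co-located OBS/ADCP data."""
          slope, intercept = np.polyfit(echo_intensity, sv_calibration, deg=1)
          return slope, intercept

      def retrieve_sv(echo_intensity, slope, intercept):
          """Apply the calibrated linear relation to new echo-intensity profiles."""
          return slope * np.asarray(echo_intensity) + intercept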

  7. Selection of DNA aptamers against epidermal growth factor receptor with high affinity and specificity

    SciTech Connect

    Wang, Deng-Liang; Song, Yan-Ling; Zhu, Zhi; Li, Xi-Lan; Zou, Yuan; Yang, Hai-Tao; Wang, Jiang-Jie; Yao, Pei-Sen; Pan, Ru-Jun; Yang, Chaoyong James; Kang, De-Zhi

    2014-10-31

    Highlights: • This is the first report of a DNA aptamer against EGFR in vitro. • Aptamers can bind targets with high affinity and selectivity. • DNA aptamers are more stable, cheaper and more efficient than RNA aptamers. • Our selected DNA aptamer against EGFR has high affinity, with Kd = 56 ± 7.3 nM. • Our selected DNA aptamer against EGFR has high selectivity. - Abstract: Epidermal growth factor receptor (EGFR/HER1/c-ErbB1) is overexpressed in many solid cancers, such as epidermoid carcinomas, malignant gliomas, etc. EGFR plays roles in proliferation, invasion, angiogenesis and metastasis of malignant cancer cells and is an ideal antigen for clinical applications in cancer detection, imaging and therapy. Aptamers, the output of the systematic evolution of ligands by exponential enrichment (SELEX), are DNA/RNA oligonucleotides which can bind proteins and other substances with specificity. RNA aptamers are undesirable due to their instability and high cost of production. Conversely, DNA aptamers have attracted researchers' attention because they are easily synthesized, stable, selective, have high binding affinity and are cost-effective to produce. In this study, we have successfully identified DNA aptamers with high binding affinity and selectivity to EGFR. The aptamer named TuTu22, with Kd = 56 ± 7.3 nM, was chosen from the identified DNA aptamers for further study. Flow cytometry analysis indicated that the TuTu22 aptamer was able to specifically recognize a variety of cancer cells expressing EGFR but did not bind to EGFR-negative cells. With all of the aforementioned advantages, the DNA aptamers reported here against the cancer biomarker EGFR will facilitate the development of novel targeted cancer detection, imaging and therapy.

  8. Specific heat and sound velocity at the relevant competing phase of high-temperature superconductors

    PubMed Central

    Varma, Chandra M.; Zhu, Lijun

    2015-01-01

    Recent highly accurate sound velocity measurements reveal a phase transition to a competing phase in YBa2Cu3O6+δ that is not identified in available specific heat measurements. We show that this signature is consistent with the universality class of the loop current-ordered state when the free-energy reduction is similar to the superconducting condensation energy, due to the anomalous fluctuation region of such a transition. We also compare the measured specific heat with some usual types of transitions, which are observed at lower temperatures in some cuprates, and find that the upper limit of the energy reduction due to them is about 1/40th the superconducting condensation energy. PMID:25941376

  9. Specific heat and sound velocity at the relevant competing phase of high-temperature superconductors.

    PubMed

    Varma, Chandra M; Zhu, Lijun

    2015-05-19

    Recent highly accurate sound velocity measurements reveal a phase transition to a competing phase in YBa2Cu3O6+δ that is not identified in available specific heat measurements. We show that this signature is consistent with the universality class of the loop current-ordered state when the free-energy reduction is similar to the superconducting condensation energy, due to the anomalous fluctuation region of such a transition. We also compare the measured specific heat with some usual types of transitions, which are observed at lower temperatures in some cuprates, and find that the upper limit of the energy reduction due to them is about 1/40th the superconducting condensation energy. PMID:25941376

  10. Detection of pork adulteration by highly-specific PCR assay of mitochondrial D-loop.

    PubMed

    Karabasanavar, Nagappa S; Singh, S P; Kumar, Deepak; Shebannavar, Sunil N

    2014-02-15

    We describe a highly specific PCR assay for the authentic identification of pork. Accurate detection of tissues derived from pig (Sus scrofa) was accomplished by using newly designed primers targeting the porcine mitochondrial displacement (D-loop) region that yielded a unique amplicon of 712 base pairs (bp). The possibility of cross-amplification was precluded by testing as many as 24 animal species (mammals, birds, rodent and fish). Suitability of the PCR assay was confirmed in raw (n = 20), cooked (60, 80 and 100 °C), autoclaved (121 °C) and microwave-oven-processed pork. Sensitivity of detection of pork in meat of other species using the unique pig-specific PCR was established to be 0.1%; the limit of detection (LOD) of pig DNA was 10 pg (picograms). The technique can be used for the authentication of raw, processed and adulterated pork and products under the circumstances of food adulteration related disputes or forensic detection of origin of pig species.

  11. Highly parallel assays of tissue-specific enhancers in whole Drosophila embryos

    PubMed Central

    Gisselbrecht, Stephen S.; Barrera, Luis A.; Porsch, Martin; Aboukhalil, Anton; Estep, Preston W.; Vedenko, Anastasia; Palagi, Alexandre; Kim, Yongsok; Zhu, Xianmin; Busser, Brian W.; Gamble, Caitlin E.; Iagovitina, Antonina; Singhania, Aditi; Michelson, Alan M.; Bulyk, Martha L.

    2013-01-01

    Transcriptional enhancers are a primary mechanism by which tissue-specific gene expression is achieved. Despite the importance of these regulatory elements in development, responses to environmental stresses, and disease, testing enhancer activity in animals remains tedious, with a minority of enhancers having been characterized. Here, we have developed ‘enhancer-FACS-Seq’ (eFS) technology for highly parallel identification of active, tissue-specific enhancers in Drosophila embryos. Analysis of enhancers identified by eFS to be active in mesodermal tissues revealed enriched DNA binding site motifs of known and putative, novel mesodermal transcription factors (TFs). Naïve Bayes classifiers using TF binding site motifs accurately predicted mesodermal enhancer activity. Application of eFS to other cell types and organisms should accelerate the cataloging of enhancers and understanding how transcriptional regulation is encoded within them. PMID:23852450

  12. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long time of simulation. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by the numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step size in numerical integrations.

  13. Whole Cell-SELEX Aptamers for Highly Specific Fluorescence Molecular Imaging of Carcinomas In Vivo

    PubMed Central

    He, Xiaoxiao; Guo, Qiuping; Wang, Kemin; Ye, Xiaosheng; Tang, Jinlu

    2013-01-01

    Background Carcinomas make up the majority of cancers. Their accurate and specific diagnoses are of great significance for the improvement of patients' curability. Methodology/Principal Findings In this paper, we report an effectual example of the in vivo fluorescence molecular imaging of carcinomas with extremely high specificity based on whole cell-SELEX aptamers. Firstly, S6, an aptamer against A549 lung carcinoma cells, was adopted and labeled with Cy5 to serve as a molecular imaging probe. Flow cytometry assays revealed that Cy5-S6 could not only specifically label in vitro cultured A549 cells in buffer, but also successfully achieve the detection of ex vivo cultured target cells in serum. When applied to in vivo imaging, Cy5-S6 was demonstrated to possess high specificity in identifying A549 carcinoma through a systematic comparison investigation. Particularly, after Cy5-S6 was intravenously injected into nude mice which were simultaneously grafted with A549 lung carcinoma and Tca8113 tongue carcinoma, a much longer retention time of Cy5-S6 in A549 tumor was observed and a clear targeted cancer imaging result was presented. On this basis, to further promote the application to imaging other carcinomas, LS2 and ZY8, which are two aptamers selected by our group against Bel-7404 and SMMC-7721 liver carcinoma cells respectively, were tested in a similar way, both in vitro and in vivo. Results showed that these aptamers were even effective in differentiating liver carcinomas of different subtypes in the same body. Conclusions/Significance This work might greatly advance the application of whole cell-SELEX aptamers to carcinomas-related in vivo researches. PMID:23950940

  14. Polarity-specific high-level information propagation in neural networks.

    PubMed

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  15. Endophytic Fungal Communities Associated with Vascular Plants in the High Arctic Zone Are Highly Diverse and Host-Plant Specific.

    PubMed

    Zhang, Tao; Yao, Yi-Feng

    2015-01-01

    This study assessed the diversity and distribution of endophytic fungal communities associated with the leaves and stems of four vascular plant species in the High Arctic using 454 pyrosequencing with fungal-specific primers targeting the ITS region. Endophytic fungal communities showed high diversity. The 76,691 sequences obtained belonged to 250 operational taxonomic units (OTUs). Of these OTUs, 190 belonged to Ascomycota, 50 to Basidiomycota, 1 to Chytridiomycota, and 9 to unknown fungi. The dominant orders were Helotiales, Pleosporales, Capnodiales, and Tremellales, whereas the common known fungal genera were Cryptococcus, Rhizosphaera, Mycopappus, Melampsora, Tetracladium, Phaeosphaeria, Mrakia, Venturia, and Leptosphaeria. Both the climate and host-related factors might shape the fungal communities associated with the four Arctic plant species in this region. These results suggested the presence of an interesting endophytic fungal community and could improve our understanding of fungal evolution and ecology in the Arctic terrestrial ecosystems. PMID:26067836

  16. Endophytic Fungal Communities Associated with Vascular Plants in the High Arctic Zone Are Highly Diverse and Host-Plant Specific.

    PubMed

    Zhang, Tao; Yao, Yi-Feng

    2015-01-01

    This study assessed the diversity and distribution of endophytic fungal communities associated with the leaves and stems of four vascular plant species in the High Arctic using 454 pyrosequencing with fungal-specific primers targeting the ITS region. Endophytic fungal communities showed high diversity. The 76,691 sequences obtained belonged to 250 operational taxonomic units (OTUs). Of these OTUs, 190 belonged to Ascomycota, 50 to Basidiomycota, 1 to Chytridiomycota, and 9 to unknown fungi. The dominant orders were Helotiales, Pleosporales, Capnodiales, and Tremellales, whereas the common known fungal genera were Cryptococcus, Rhizosphaera, Mycopappus, Melampsora, Tetracladium, Phaeosphaeria, Mrakia, Venturia, and Leptosphaeria. Both the climate and host-related factors might shape the fungal communities associated with the four Arctic plant species in this region. These results suggested the presence of an interesting endophytic fungal community and could improve our understanding of fungal evolution and ecology in the Arctic terrestrial ecosystems.

  17. Endophytic Fungal Communities Associated with Vascular Plants in the High Arctic Zone Are Highly Diverse and Host-Plant Specific

    PubMed Central

    Zhang, Tao; Yao, Yi-Feng

    2015-01-01

    This study assessed the diversity and distribution of endophytic fungal communities associated with the leaves and stems of four vascular plant species in the High Arctic using 454 pyrosequencing with fungal-specific primers targeting the ITS region. Endophytic fungal communities showed high diversity. The 76,691 sequences obtained belonged to 250 operational taxonomic units (OTUs). Of these OTUs, 190 belonged to Ascomycota, 50 to Basidiomycota, 1 to Chytridiomycota, and 9 to unknown fungi. The dominant orders were Helotiales, Pleosporales, Capnodiales, and Tremellales, whereas the common known fungal genera were Cryptococcus, Rhizosphaera, Mycopappus, Melampsora, Tetracladium, Phaeosphaeria, Mrakia, Venturia, and Leptosphaeria. Both the climate and host-related factors might shape the fungal communities associated with the four Arctic plant species in this region. These results suggested the presence of an interesting endophytic fungal community and could improve our understanding of fungal evolution and ecology in the Arctic terrestrial ecosystems. PMID:26067836

  18. Understanding Strongly Correlated Materials thru Theory Algorithms and High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kotliar, Gabriel

    A long-standing challenge in condensed matter physics is the prediction of the physical properties of materials starting from first principles. In the past two decades, substantial advances have taken place in this area. The combination of modern implementations of electronic structure methods with Dynamical Mean Field Theory (DMFT), advanced impurity solvers, modern computer codes and massively parallel computers is giving new system-specific insights into the properties of strongly correlated electron systems and enabling the calculation of experimentally measurable correlation functions. The predictions of this "theoretical spectroscopy" can be directly compared with experimental results. In this talk I will briefly outline the state of the art of the methodology and illustrate it with an example: the origin of the solid-state anomalies of elemental plutonium.

  19. Dimeric CRISPR RNA-guided FokI nucleases for highly specific genome editing.

    PubMed

    Tsai, Shengdar Q; Wyvekens, Nicolas; Khayter, Cyd; Foden, Jennifer A; Thapar, Vishal; Reyon, Deepak; Goodwin, Mathew J; Aryee, Martin J; Joung, J Keith

    2014-06-01

    Monomeric CRISPR-Cas9 nucleases are widely used for targeted genome editing but can induce unwanted off-target mutations with high frequencies. Here we describe dimeric RNA-guided FokI nucleases (RFNs) that can recognize extended sequences and edit endogenous genes with high efficiencies in human cells. RFN cleavage activity depends strictly on the binding of two guide RNAs (gRNAs) to DNA with a defined spacing and orientation, substantially reducing the likelihood that a suitable target site will occur more than once in the genome and therefore improving specificities relative to wild-type Cas9 monomers. RFNs guided by a single gRNA generally induce lower levels of unwanted mutations than matched monomeric Cas9 nickases. In addition, we describe a simple method for expressing multiple gRNAs bearing any 5' end nucleotide, which gives dimeric RFNs a broad targeting range. RFNs combine the ease of RNA-based targeting with the specificity enhancement inherent to dimerization and are likely to be useful in applications that require highly precise genome editing.

  20. Dimeric CRISPR RNA-guided FokI nucleases for highly specific genome editing.

    PubMed

    Tsai, Shengdar Q; Wyvekens, Nicolas; Khayter, Cyd; Foden, Jennifer A; Thapar, Vishal; Reyon, Deepak; Goodwin, Mathew J; Aryee, Martin J; Joung, J Keith

    2014-06-01

    Monomeric CRISPR-Cas9 nucleases are widely used for targeted genome editing but can induce unwanted off-target mutations with high frequencies. Here we describe dimeric RNA-guided FokI nucleases (RFNs) that can recognize extended sequences and edit endogenous genes with high efficiencies in human cells. RFN cleavage activity depends strictly on the binding of two guide RNAs (gRNAs) to DNA with a defined spacing and orientation, substantially reducing the likelihood that a suitable target site will occur more than once in the genome and therefore improving specificities relative to wild-type Cas9 monomers. RFNs guided by a single gRNA generally induce lower levels of unwanted mutations than matched monomeric Cas9 nickases. In addition, we describe a simple method for expressing multiple gRNAs bearing any 5' end nucleotide, which gives dimeric RFNs a broad targeting range. RFNs combine the ease of RNA-based targeting with the specificity enhancement inherent to dimerization and are likely to be useful in applications that require highly precise genome editing. PMID:24770325

  1. Selection of DNA aptamers against epidermal growth factor receptor with high affinity and specificity.

    PubMed

    Wang, Deng-Liang; Song, Yan-Ling; Zhu, Zhi; Li, Xi-Lan; Zou, Yuan; Yang, Hai-Tao; Wang, Jiang-Jie; Yao, Pei-Sen; Pan, Ru-Jun; Yang, Chaoyong James; Kang, De-Zhi

    2014-10-31

    Epidermal growth factor receptor (EGFR/HER1/c-ErbB1) is overexpressed in many solid cancers, such as epidermoid carcinomas and malignant gliomas. EGFR plays roles in the proliferation, invasion, angiogenesis and metastasis of malignant cancer cells and is an ideal antigen for clinical applications in cancer detection, imaging and therapy. Aptamers, the output of the systematic evolution of ligands by exponential enrichment (SELEX), are DNA/RNA oligonucleotides that can bind proteins and other substances with specificity. RNA aptamers are undesirable due to their instability and high cost of production. Conversely, DNA aptamers have attracted researchers' attention because they are easily synthesized, stable, selective, have high binding affinity and are cost-effective to produce. In this study, we successfully identified DNA aptamers with high binding affinity and selectivity to EGFR. The aptamer named TuTu22, with a Kd of 56±7.3 nM, was chosen from the identified DNA aptamers for further study. Flow cytometry analysis indicated that the TuTu22 aptamer specifically recognized a variety of cancer cells expressing EGFR but did not bind to EGFR-negative cells. With all of the aforementioned advantages, the DNA aptamers reported here against the cancer biomarker EGFR will facilitate the development of novel targeted cancer detection, imaging and therapy.
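
    To make the reported affinity concrete, the short calculation below converts a dissociation constant of roughly 56 nM into expected binding-site occupancy using the simple single-site relation bound fraction = [free target]/([free target] + Kd); the target concentrations are illustrative values, not measurements from the study.

      # Occupancy implied by a ~56 nM Kd under an assumed 1:1 binding model.
      KD_NM = 56.0

      def fraction_bound(free_target_nm, kd_nm=KD_NM):
          return free_target_nm / (free_target_nm + kd_nm)

      for conc in (5.6, 56.0, 560.0):   # illustrative target concentrations, nM
          print(f"[target] = {conc:6.1f} nM -> {fraction_bound(conc):.0%} of aptamer bound")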

  2. Advanced image reconstruction and visualization algorithms for CERN ALICE high energy physics experiment

    NASA Astrophysics Data System (ADS)

    Myrcha, Julian; Rokita, Przemysław

    2015-09-01

    Stereoscopic 3D reconstruction of visual data is an important challenge in the LHC ALICE detector experiment. Stereoscopic visualization of 3D data is also an important subject in photonics in general. In this paper we propose several solutions enabling effective, perception-based stereoscopic visualization of data provided by detectors in high energy physics experiments.

  3. Tsunami Detection by High-Frequency Radar Beyond the Continental Shelf - I: Algorithms and Validation on Idealized Case Studies

    NASA Astrophysics Data System (ADS)

    Grilli, Stéphan T.; Grosdidier, Samuel; Guérin, Charles-Antoine

    2015-10-01

    Where coastal tsunami hazard is governed by near-field sources, such as submarine mass failures or meteo-tsunamis, tsunami propagation times may be too small for a detection based on deep or shallow water buoys. To offer sufficient warning time, it has been proposed to implement early warning systems relying on high-frequency (HF) radar remote sensing, which can provide dense spatial coverage as far offshore as 200-300 km (e.g., for Diginext Ltd.'s Stradivarius radar). Shore-based HF radars have been used to measure nearshore currents (e.g., the CODAR SeaSonde® system; http://www.codar.com/) by inverting the Doppler spectral shifts these cause on ocean waves at the Bragg frequency. Both modeling work and an analysis of radar data following the Tohoku 2011 tsunami have shown that, given proper detection algorithms, such radars could be used to detect tsunami-induced currents and issue a warning. However, long wave physics is such that tsunami currents will only rise above noise and background currents (i.e., be at least 10-15 cm/s), and become detectable, in fairly shallow water, which would limit the direct detection of tsunami currents by HF radar to nearshore areas, unless there is a very wide shallow shelf. Here, we use numerical simulations of both HF radar remote sensing and tsunami propagation to develop and validate a new type of tsunami detection algorithm that does not have these limitations. To simulate the radar backscattered signal, we develop a numerical model including second-order effects in both wind waves and radar signal, with the wave angular frequency being modulated by a time-varying surface current, combining tsunami and background currents. In each "radar cell", the model represents wind waves with random phases and amplitudes extracted from a specified (wind speed dependent) energy density frequency spectrum, and includes effects
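
    The back-of-the-envelope numbers behind this kind of detection are easy to reproduce: the sketch below computes the first-order Bragg frequency for a deep-water HF radar and the Doppler shift that a radial surface current superimposes on it. The 13.5 MHz operating frequency and the 0.15 m/s current are illustrative choices, not parameters of the Stradivarius system or of the paper's simulations.

      import math

      C = 3.0e8    # speed of light, m/s
      G = 9.81     # gravitational acceleration, m/s^2

      def bragg_frequency(radar_freq_hz):
          lam = C / radar_freq_hz                    # radar wavelength (m)
          return math.sqrt(G / (math.pi * lam))      # f_B = sqrt(g / (pi * lambda)), deep water

      def doppler_shift(radar_freq_hz, radial_current_ms):
          lam = C / radar_freq_hz
          return 2.0 * radial_current_ms / lam       # delta_f = 2 * u_r / lambda

      f_radar = 13.5e6                               # assumed HF operating frequency (Hz)
      print(f"Bragg frequency: {bragg_frequency(f_radar):.3f} Hz")
      print(f"Doppler shift of a 0.15 m/s current: {1e3 * doppler_shift(f_radar, 0.15):.1f} mHz")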

  4. Highly Specific and Broadly Potent Inhibitors of Mammalian Secreted Phospholipases A2

    PubMed Central

    Oslund, Rob C.; Cermak, Nathan; Gelb, Michael H.

    2010-01-01

    We report a series of inhibitors of secreted phospholipases A2 (sPLA2s) based on substituted indoles, 6,7-benzoindoles, and indolizines derived from LY315920, a well-known indole-based sPLA2 inhibitor. Using the human group X sPLA2 crystal structure, we prepared a highly potent and selective indole-based inhibitor of this enzyme. Also, we report human and mouse group IIA and IIE specific inhibitors and a substituted 6,7-benzoindole that inhibits nearly all human and mouse sPLA2s in the low nanomolar range. PMID:18605714

  5. An accurate dynamical electron diffraction algorithm for reflection high-energy electron diffraction

    NASA Astrophysics Data System (ADS)

    Huang, J.; Cai, C. Y.; Lv, C. L.; Zhou, G. W.; Wang, Y. G.

    2015-12-01

    The conventional multislice (CMS) method, one of the most popular dynamical electron diffraction calculation procedures in transmission electron microscopy, was introduced to calculate reflection high-energy electron diffraction (RHEED) because it is well adapted to deal with deviations from periodicity in the direction parallel to the surface. However, in the present work, we show that the CMS method is no longer sufficiently accurate for simulating RHEED at accelerating voltages of 3-100 kV because of the high-energy approximation. An accurate multislice (AMS) method can be an alternative for more accurate RHEED calculations with reasonable computing time. A detailed comparison of the numerical calculations of the AMS and CMS methods is carried out with respect to different accelerating voltages, surface structure models, Debye-Waller factors and glancing angles.
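
    For readers unfamiliar with the multislice recursion mentioned above, the sketch below shows its core loop in one transverse dimension: transmit the wave through a phase grating built from the projected potential of a slice, then propagate it to the next slice with a Fresnel factor. The potentials, interaction constant and sampling are placeholders; this is not the RHEED geometry or the AMS correction studied in the paper.

      import numpy as np

      def multislice(psi, slice_potentials, wavelength, dz, dx, sigma=1.0e-3):
          # psi: complex wave on a 1D transverse grid; sigma: interaction constant (placeholder)
          kx = np.fft.fftfreq(psi.size, d=dx)
          propagator = np.exp(-1j * np.pi * wavelength * dz * kx**2)   # Fresnel propagator
          for v in slice_potentials:                                   # projected potential per slice
              psi = psi * np.exp(1j * sigma * v)                       # phase-grating transmission
              psi = np.fft.ifft(np.fft.fft(psi) * propagator)          # propagate to the next slice
          return psi

      psi0 = np.ones(512, dtype=complex)                               # incident plane wave
      rng = np.random.default_rng(0)
      slices = [rng.normal(size=512) for _ in range(20)]               # made-up slice potentials
      exit_wave = multislice(psi0, slices, wavelength=0.037, dz=2.0, dx=0.1)
      print(np.abs(np.fft.fft(exit_wave))[:5])                         # a few diffracted amplitudes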

  6. Test and evaluation of the HIDEC engine uptrim algorithm. [Highly Integrated Digital Electronic Control for aircraft]

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.
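
    The uptrim idea can be captured in a few lines: when the computed stall margin exceeds the required margin, command a proportionally higher EPR, capped at a maximum trim. The function below is a hypothetical toy with made-up gains and limits, not the HIDEC control law.

      def epr_uptrim(epr_nominal, stall_margin, required_margin, gain=0.5, max_trim=0.10):
          """Return an uptrimmed EPR command when excess stall margin is available."""
          excess = stall_margin - required_margin
          if excess <= 0.0:
              return epr_nominal                        # no excess margin: leave EPR alone
          trim_fraction = min(gain * excess, max_trim)  # proportional uptrim, capped
          return epr_nominal * (1.0 + trim_fraction)

      print(epr_uptrim(epr_nominal=3.0, stall_margin=0.25, required_margin=0.15))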

  7. A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.

    1998-01-01

    Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.

  8. A region-based high spatial resolution remotely sensed imagery classification algorithm based on multiscale fusion and feature weighting

    NASA Astrophysics Data System (ADS)

    Wang, Leiguang; Mei, Tiancan; Qin, Qianqin

    2009-10-01

    With the availability of high resolution multispectral imagery from modern sensors, it is possible to identify small-scale features in urban environments. Because attributes of image structure such as color and texture are highly scale dependent, a hierarchical segment fusion algorithm based on region deviation is proposed to extract more robust features and to support land-cover classification at a single semantic level. The proposed fusion algorithm is divided into two successive sub-tasks: mean shift (MS) filtering based pre-segmentation and hierarchical segment optimization. Pre-segmentation is applied to obtain boundary-preserving and spectrally homogeneous initial regions; then a family of nested image partitions with ascending region areas is constructed by an iterative merging procedure. At every scale, regions of the corresponding critical size are evaluated according to their potential region merge risk, measured by the change in the region standard deviation before and after a virtual merge. If a region's measurement is larger than a specified change threshold, the region is preserved to the next level and labeled as a candidate segment for the subsequent region-based classification; otherwise the segment is merged at the next scale level. After fusing segments across scales, a novel weighted minimum distance classifier is employed to obtain the supervised classification result, in which each feature band's deviation is used to calculate its weight. We show results for classification of an HR image over the Washington DC Mall area taken by the HYDICE sensor. Different features combined with the designed classifier demonstrate that the fused segments provide robust feature extraction and improve classification accuracy.
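
    The weighted minimum-distance rule described in the last sentences can be sketched directly: each feature band is scaled by a per-class standard deviation before computing the distance to each class mean. The exact weighting used in the paper is not spelled out, so the inverse-deviation form below is an assumption, and the data are synthetic.

      import numpy as np

      def train(features, labels):
          classes = np.unique(labels)
          means = {c: features[labels == c].mean(axis=0) for c in classes}
          stds = {c: features[labels == c].std(axis=0) + 1e-9 for c in classes}
          return means, stds

      def classify(x, means, stds):
          # deviation-weighted Euclidean distance: each band b is scaled by 1/sigma_b
          dists = {c: np.sqrt(np.sum(((x - means[c]) / stds[c]) ** 2)) for c in means}
          return min(dists, key=dists.get)

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 2.0, (50, 4))])
      y = np.array([0] * 50 + [1] * 50)
      means, stds = train(X, y)
      print(classify(np.array([2.8, 3.1, 2.5, 3.3]), means, stds))   # expected: class 1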

  9. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
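
    To make the "tessellating identical processing elements" picture concrete, the snippet below emulates, cycle by cycle, a systolic-style convolution pipeline in which the weights stay in the cells and partial sums advance one cell per cycle (the input sample is broadcast each cycle, a common textbook simplification). It is an illustrative toy, not one of the verified architectures from the dissertation.

      import numpy as np

      def systolic_convolve(x, w):
          m = len(w)
          w_cells = list(w)[::-1]            # reversed so the pipeline output is standard convolution
          partial = [0.0] * m                # partial sums currently held by the cells
          out = []
          for sample in list(x) + [0.0] * (m - 1):      # zero-pad to flush the pipeline
              partial = [w_cells[0] * sample] + [
                  partial[j - 1] + w_cells[j] * sample for j in range(1, m)
              ]
              out.append(partial[-1])        # the last cell emits one finished sum per cycle
          return out

      x, w = [1.0, 2.0, 3.0, 4.0], [0.5, 0.25, 0.25]
      print(systolic_convolve(x, w))
      print(list(np.convolve(x, w)))         # reference result for comparison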

  10. High throughput peptide mapping method for analysis of site specific monoclonal antibody oxidation.

    PubMed

    Li, Xiaojuan; Xu, Wei; Wang, Yi; Zhao, Jia; Liu, Yan-Hui; Richardson, Daisy; Li, Huijuan; Shameem, Mohammed; Yang, Xiaoyu

    2016-08-19

    Oxidation of therapeutic monoclonal antibodies (mAbs) often occurs on surface exposed methionine and tryptophan residues during their production in cell culture, purification, and storage, and can potentially impact the binding to their targets. Characterization of site specific oxidation is critical for antibody quality control. Antibody oxidation is commonly determined by peptide mapping/LC-MS methods, which normally require a long (up to 24h) digestion step. The prolonged sample preparation procedure could result in oxidation artifacts of susceptible methionine and tryptophan residues. In this paper, we developed a rapid and simple UV based peptide mapping method that incorporates an 8-min trypsin in-solution digestion protocol for analysis of oxidation. This method is able to determine oxidation levels at specific residues of a mAb based on the peptide UV traces within <1h, from either TBHP treated or UV light stressed samples. This is the simplest and fastest method reported thus far for site specific oxidation analysis, and can be applied for routine or high throughput analysis of mAb oxidation during various stability and degradation studies. By using the UV trace, the method allows more accurate measurement than mass spectrometry and can be potentially implemented as a release assay. It has been successfully used to monitor antibody oxidation in real time stability studies.
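
    The per-site readout itself is simple arithmetic once the UV peaks are integrated: the oxidation level at a residue is the oxidized-peptide peak area divided by the summed areas of the oxidized and native forms of that peptide. The peak areas below are made-up numbers used only to show the calculation.

      def percent_oxidation(area_oxidized, area_native):
          """Site-specific oxidation level from integrated UV peak areas."""
          return 100.0 * area_oxidized / (area_oxidized + area_native)

      print(f"{percent_oxidation(area_oxidized=1.8e4, area_native=4.2e5):.2f} % oxidized")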

  11. Highly specific epigenome editing by CRISPR-Cas9 repressors for silencing of distal regulatory elements.

    PubMed

    Thakore, Pratiksha I; D'Ippolito, Anthony M; Song, Lingyun; Safi, Alexias; Shivakumar, Nishkala K; Kabadi, Ami M; Reddy, Timothy E; Crawford, Gregory E; Gersbach, Charles A

    2015-12-01

    Epigenome editing with the CRISPR (clustered, regularly interspaced, short palindromic repeats)-Cas9 platform is a promising technology for modulating gene expression to direct cell phenotype and to dissect the causal epigenetic mechanisms of gene regulation. Fusions of nuclease-inactive dCas9 to the Krüppel-associated box (KRAB) repressor (dCas9-KRAB) can silence target gene expression, but the genome-wide specificity and the extent of heterochromatin formation catalyzed by dCas9-KRAB are not known. We targeted dCas9-KRAB to the HS2 enhancer, a distal regulatory element that orchestrates the expression of multiple globin genes, and observed highly specific induction of H3K9 trimethylation (H3K9me3) at the enhancer and decreased chromatin accessibility of both the enhancer and its promoter targets. Targeted epigenetic modification of HS2 silenced the expression of multiple globin genes, with minimal off-target changes in global gene expression. These results demonstrate that repression mediated by dCas9-KRAB is sufficiently specific to disrupt the activity of individual enhancers via local modification of the epigenome.

  12. High throughput peptide mapping method for analysis of site specific monoclonal antibody oxidation.

    PubMed

    Li, Xiaojuan; Xu, Wei; Wang, Yi; Zhao, Jia; Liu, Yan-Hui; Richardson, Daisy; Li, Huijuan; Shameem, Mohammed; Yang, Xiaoyu

    2016-08-19

    Oxidation of therapeutic monoclonal antibodies (mAbs) often occurs on surface exposed methionine and tryptophan residues during their production in cell culture, purification, and storage, and can potentially impact the binding to their targets. Characterization of site specific oxidation is critical for antibody quality control. Antibody oxidation is commonly determined by peptide mapping/LC-MS methods, which normally require a long (up to 24h) digestion step. The prolonged sample preparation procedure could result in oxidation artifacts of susceptible methionine and tryptophan residues. In this paper, we developed a rapid and simple UV based peptide mapping method that incorporates an 8-min trypsin in-solution digestion protocol for analysis of oxidation. This method is able to determine oxidation levels at specific residues of a mAb based on the peptide UV traces within <1h, from either TBHP treated or UV light stressed samples. This is the simplest and fastest method reported thus far for site specific oxidation analysis, and can be applied for routine or high throughput analysis of mAb oxidation during various stability and degradation studies. By using the UV trace, the method allows more accurate measurement than mass spectrometry and can be potentially implemented as a release assay. It has been successfully used to monitor antibody oxidation in real time stability studies. PMID:27432793

  13. Development of high-throughput phosphorylation profiling method for identification of Ser/Thr kinase specificity.

    PubMed

    Kim, Eun-Mi; Kim, Jaehi; Kim, Yun-Gon; Lee, Peter; Shin, Dong-Sik; Kim, Mira; Hahn, Ji-Sook; Lee, Yoon-Sik; Kim, Byung-Gee

    2011-05-01

    Identification of the substrate specificity of kinases is crucial to understanding the roles of kinases in cellular signal transduction pathways. Here, we present an approach applicable to the discovery of the substrate specificity of Ser/Thr kinases. The method, named 'high-throughput phosphorylation profiling (HTPP)', was developed on the basis of a fully randomized one-bead one-compound (OBOC) combinatorial ladder-type peptide library and MALDI-TOF MS. The OBOC ladder peptide library was constructed by the 'split and pool' method on a HiCore resin. The peptide library sequence was Ac-Ala-X-X-X-Ser-X-X-Ala-BEBE-PLL resin. The substrate specificity of murine PKA (cAMP-dependent protein kinase A) and yeast Yak1 kinase was identified using this method. On the basis of the results, we identified Ifh1, a co-activator for the transcription of ribosomal protein genes, as a novel substrate of Yak1 kinase. The putative Yak1-dependent phosphorylation site of Ifh1 was verified by an in vitro kinase assay.

  14. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique for obtaining ionospheric measurements, such as estimates of virtual height as a function of the scanned frequency. It is performed with a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While echo detection algorithms have been studied for the behavior of several kinds of target, a survey is still needed to identify an algorithm suitable for the ionospheric sounder. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder; the relevant target characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study were selected according to typical ionospheric and detection conditions.
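
    One widely used family of adaptive threshold detectors is cell-averaging CFAR, in which the threshold for each range cell scales the mean power of surrounding training cells while skipping a few guard cells. The sketch below is that generic scheme with invented parameters and synthetic data; it is not necessarily the adaptive algorithm evaluated on the AIS-INGV sounder.

      import numpy as np

      def ca_cfar(power, n_train=8, n_guard=2, scale=6.0):
          """Return indices whose power exceeds `scale` times the local training-cell mean."""
          detections = []
          for i in range(len(power)):
              lead = power[max(0, i - n_guard - n_train):max(0, i - n_guard)]
              lag = power[i + n_guard + 1:i + n_guard + 1 + n_train]
              train = np.concatenate([lead, lag])
              if train.size and power[i] > scale * train.mean():
                  detections.append(i)
          return detections

      rng = np.random.default_rng(2)
      profile = rng.exponential(1.0, 500)     # synthetic noise-only range profile
      profile[200] += 30.0                    # synthetic ionospheric echo
      print(ca_cfar(profile))                 # should include index 200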

  15. Three Recombinant Engineered Antibodies against Recombinant Tags with High Affinity and Specificity.

    PubMed

    Zhao, Hongyu; Shen, Ao; Xiang, Yang K; Corey, David P

    2016-01-01

    We describe three recombinant engineered antibodies against three recombinant epitope tags, constructed with divalent binding arms to recognize divalent epitopes and so achieve high affinity and specificity. In two versions, an epitope is inserted in tandem into a protein of interest, and a homodimeric antibody is constructed by fusing a high-affinity epitope-binding domain to a human or mouse Fc domain. In a third, a heterodimeric antibody is constructed by fusing two different epitope-binding domains which target two different binding sites in GFP, to polarized Fc fragments. These antibody/epitope pairs have affinities in the low picomolar range and are useful tools for many antibody-based applications.

  16. Establishing Specifications for Low Enriched Uranium Fuel Operations Conducted Outside the High Flux Isotope Reactor Site

    SciTech Connect

    Pinkston, Daniel; Primm, Trent; Renfro, David G; Sease, John D

    2010-10-01

    The National Nuclear Security Administration (NNSA) has funded staff at Oak Ridge National Laboratory (ORNL) to study the conversion of the High Flux Isotope Reactor (HFIR) from the current highly enriched uranium fuel to low enriched uranium (LEU) fuel. The LEU fuel form is a metal alloy that has never been used in HFIR or any HFIR-like reactor. This report documents a process for creating a fuel specification that will meet all applicable regulations and guidelines to which UT-Battelle, LLC (UTB), the operating contractor for ORNL, must adhere. This process will allow UTB to purchase LEU fuel for HFIR and be assured of the quality of the fuel being procured.

  17. Three Recombinant Engineered Antibodies against Recombinant Tags with High Affinity and Specificity

    PubMed Central

    Zhao, Hongyu; Shen, Ao; Xiang, Yang K.; Corey, David P.

    2016-01-01

    We describe three recombinant engineered antibodies against three recombinant epitope tags, constructed with divalent binding arms to recognize divalent epitopes and so achieve high affinity and specificity. In two versions, an epitope is inserted in tandem into a protein of interest, and a homodimeric antibody is constructed by fusing a high-affinity epitope-binding domain to a human or mouse Fc domain. In a third, a heterodimeric antibody is constructed by fusing two different epitope-binding domains which target two different binding sites in GFP, to polarized Fc fragments. These antibody/epitope pairs have affinities in the low picomolar range and are useful tools for many antibody-based applications. PMID:26943906

  18. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    SciTech Connect

    Chen Ting; Kim, Sung; Goyal, Sharad; Jabbour, Salma; Zhou Jinghao; Rajagopal, Gunaretnum; Haffty, Bruce; Yue Ning

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based ''demons'' algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a
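
    As context for the grayscale-based "demons" component named above, the sketch below shows the bare two-step loop of a demons registration in 2D: a force proportional to the intensity difference times the fixed-image gradient, followed by Gaussian smoothing of the displacement field. It deliberately omits the paper's meshless object constraints, seed constraint and multiresolution hierarchy, and all parameters and test images are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter, map_coordinates

      def demons_2d(fixed, moving, n_iter=50, sigma=2.0):
          uy = np.zeros_like(fixed); ux = np.zeros_like(fixed)
          gy, gx = np.gradient(fixed)
          yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
          for _ in range(n_iter):
              warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode='nearest')
              diff = warped - fixed
              denom = gx**2 + gy**2 + diff**2 + 1e-9
              uy -= diff * gy / denom              # demons force, y component
              ux -= diff * gx / denom              # demons force, x component
              uy = gaussian_filter(uy, sigma)      # diffusion-like regularization
              ux = gaussian_filter(ux, sigma)
          return uy, ux

      img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
      fixed = gaussian_filter(img, 1.0)
      moving = gaussian_filter(np.roll(img, 3, axis=1), 1.0)   # known 3-pixel shift along x
      uy, ux = demons_2d(fixed, moving)
      yy, xx = np.mgrid[0:64, 0:64].astype(float)
      aligned = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode='nearest')
      print(np.abs(fixed - moving).mean(), np.abs(fixed - aligned).mean())   # residual should drop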

  19. Standardized methods for the production of high specific-activity zirconium-89.

    PubMed

    Holland, Jason P; Sheh, Yiauchung; Lewis, Jason S

    2009-10-01

    Zirconium-89 is an attractive metallo-radionuclide for use in immuno-PET due to favorable decay characteristics. Standardized methods for the routine production and isolation of high-purity and high-specific-activity ⁸⁹Zr using a small cyclotron are reported. Optimized cyclotron conditions reveal high average yields of 1.52±0.11 mCi/µA·h at a proton beam energy of 15 MeV and a current of 15 µA using a solid, commercially available ⁸⁹Y-foil target (0.1 mm, 100% natural abundance). ⁸⁹Zr was isolated in high radionuclidic and radiochemical purity (>99.99%) as [⁸⁹Zr]Zr-oxalate by using a solid-phase hydroxamate resin with >99.5% recovery of the radioactivity. The effective specific activity of ⁸⁹Zr was found to be in the range 5.28-13.43 mCi/µg (470-1195 Ci/mmol) of zirconium. New methods for the facile production of [⁸⁹Zr]Zr-chloride are reported. Radiolabeling studies using the trihydroxamate ligand desferrioxamine B (DFO) gave 100% radiochemical yields in <15 min at room temperature, and in vitro stability measurements confirmed that [⁸⁹Zr]Zr-DFO is stable with respect to ligand dissociation in human serum for >7 days. Small-animal positron emission tomography (PET) imaging studies have demonstrated that free ⁸⁹Zr(IV) ions administered as [⁸⁹Zr]Zr-chloride accumulate in the liver, whilst [⁸⁹Zr]Zr-DFO is excreted rapidly via the kidneys within <20 min. These results have important implications for the analysis of immuno-PET imaging of ⁸⁹Zr-labeled monoclonal antibodies. The detailed methods described can be easily translated to other radiochemistry facilities and will facilitate the use of ⁸⁹Zr in both basic science and clinical investigations.
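
    The two ways of quoting the effective specific activity above are mutually consistent, as the small conversion below shows: mCi per microgram of zirconium multiplied by the molar mass (about 89 g/mol, the value implied by the quoted figures) equals Ci per mmol.

      MOLAR_MASS_ZR = 89.0   # g/mol, i.e. 89 ug/umol (approximate)

      def mci_per_ug_to_ci_per_mmol(sa_mci_per_ug):
          # mCi/ug * ug/umol = mCi/umol, and 1 mCi/umol equals 1 Ci/mmol
          return sa_mci_per_ug * MOLAR_MASS_ZR

      for sa in (5.28, 13.43):
          print(f"{sa} mCi/ug -> {mci_per_ug_to_ci_per_mmol(sa):.0f} Ci/mmol")   # ~470 and ~1195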

  20. Isolation of a Highly Thermal Stable Lama Single Domain Antibody Specific for Staphylococcus aureus Enterotoxin B

    PubMed Central

    2011-01-01

    Background Camelids and sharks possess a unique subclass of antibodies comprised of only heavy chains. The antigen binding fragments of these unique antibodies can be cloned and expressed as single domain antibodies (sdAbs). The ability of these small antigen-binding molecules to refold after heating to achieve their original structure, as well as their diminutive size, makes them attractive candidates for diagnostic assays. Results Here we describe the isolation of an sdAb against Staphylococcus aureus enterotoxin B (SEB). The clone, A3, was found to have high affinity (Kd = 75 pM) and good specificity for SEB, showing no cross reactivity to related molecules such as Staphylococcal enterotoxin A (SEA), Staphylococcal enterotoxin D (SED), and Shiga toxin. Most remarkably, this anti-SEB sdAb had an extremely high Tm of 85°C and an ability to refold after heating to 95°C. The sharp Tm, determined by circular dichroism, was found to contrast with the gradual decrease observed in intrinsic fluorescence. We demonstrated the utility of this sdAb as a capture and detector molecule in Luminex-based assays, providing limits of detection (LODs) of at least 64 pg/mL. Conclusion The anti-SEB sdAb A3 was found to have a high affinity and an extraordinarily high Tm and could still refold to recover activity after heat denaturation. This combination of heat resilience and strong, specific binding makes this sdAb a good candidate for use in antibody-based toxin detection technologies. PMID:21933444

  1. Domain Specific Changes in Cognition at High Altitude and Its Correlation with Hyperhomocysteinemia

    PubMed Central

    Sharma, Vijay K.; Das, Saroj K.; Dhar, Priyanka; Hota, Kalpana B.; Mahapatra, Bidhu B.; Vashishtha, Vivek; Kumar, Ashish; Hota, Sunil K.; Norboo, Tsering; Srivastava, Ravi B.

    2014-01-01

    Though acute exposure to hypobaric hypoxia is reported to impair cognitive performance, the effects of prolonged exposure on different cognitive domains have been less studied. The present study aimed at investigating the time-dependent changes in cognitive performance during prolonged stay at high altitude and their correlation with electroencephalogram (EEG) findings and plasma homocysteine. The study was conducted on 761 male volunteers aged 25–35 years who had never been to high altitude; baseline data pertaining to domain-specific cognitive performance, EEG and homocysteine were acquired at an altitude of ≤240 m above mean sea level (MSL). The volunteers were inducted to an altitude of 4200–4600 m MSL and longitudinal follow-ups were conducted after 3, 12 and 18 months. Neuropsychological assessment was performed for mild cognitive impairment (MCI), attention, information processing rate, visuo-spatial cognition and executive functioning. Total homocysteine (tHcy), vitamin B12 and folic acid were estimated. The Mini Mental State Examination (MMSE) showed a temporal increase in the percentage prevalence of MCI from 8.17% after 3 months of stay at high altitude to 18.54% after 18 months of stay. Impairment in the visuo-spatial executive, attention, delayed recall and procedural memory cognitive domains was detected following prolonged stay at high altitude. An increase in alpha wave amplitude in the T3, T4 and C3 regions was observed during the follow-ups, which was inversely correlated (r = −0.68) with MMSE scores. The tHcy increased proportionately with duration of stay at high altitude and was correlated with MCI. No change in vitamin B12 and folic acid was observed. Our findings suggest that cognitive impairment is progressively associated with duration of stay at high altitude and is correlated with elevated tHcy in the plasma. Moreover, progressive MCI at high altitude occurs despite acclimatization and is independent of vitamin B12 and folic acid. PMID:24988417

  2. Assessment of algorithms for high throughput detection of genomic copy number variation in oligonucleotide microarray data

    PubMed Central

    Baross, Ágnes; Delaney, Allen D; Li, H Irene; Nayar, Tarun; Flibotte, Stephane; Qian, Hong; Chan, Susanna Y; Asano, Jennifer; Ally, Adrian; Cao, Manqiu; Birch, Patricia; Brown-John, Mabel; Fernandes, Nicole; Go, Anne; Kennedy, Giulia; Langlois, Sylvie; Eydoux, Patrice; Friedman, JM; Marra, Marco A

    2007-01-01

    Background Genomic deletions and duplications are important in the pathogenesis of diseases, such as cancer and mental retardation, and have recently been shown to occur frequently in unaffected individuals as polymorphisms. Affymetrix GeneChip whole genome sampling analysis (WGSA) combined with 100 K single nucleotide polymorphism (SNP) genotyping arrays is one of several microarray-based approaches that are now being used to detect such structural genomic changes. The popularity of this technology and its associated open source data format have resulted in the development of an increasing number of software packages for the analysis of copy number changes using these SNP arrays. Results We evaluated four publicly available software packages for high throughput copy number analysis using synthetic and empirical 100 K SNP array data sets, the latter obtained from 107 mental retardation (MR) patients and their unaffected parents and siblings. We evaluated the software with regards to overall suitability for high-throughput 100 K SNP array data analysis, as well as effectiveness of normalization, scaling with various reference sets and feature extraction, as well as true and false positive rates of genomic copy number variant (CNV) detection. Conclusion We observed considerable variation among the numbers and types of candidate CNVs detected by different analysis approaches, and found that multiple programs were needed to find all real aberrations in our test set. The frequency of false positive deletions was substantial, but could be greatly reduced by using the SNP genotype information to confirm loss of heterozygosity. PMID:17910767

  3. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is
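
    The core idea of translating a tabular, domain-level specification into PLC code can be illustrated with a toy generator: each row names a step, a verb and an end item, and is emitted as a line of IEC 61131-3 structured text. The row schema, tag names and generated statements are invented for illustration; they are not the LCS DSL or its actual output.

      spec_rows = [
          {"step": 10, "verb": "VERIFY",  "end_item": "LOX_FILL_VALVE", "state": "CLOSED"},
          {"step": 20, "verb": "COMMAND", "end_item": "LOX_FILL_VALVE", "state": "OPEN"},
          {"step": 30, "verb": "VERIFY",  "end_item": "LOX_TANK_PRESS", "state": "NOMINAL"},
      ]

      def emit_structured_text(rows):
          lines = []
          for row in rows:
              tag = row["end_item"]
              if row["verb"] == "COMMAND":
                  lines.append(f"(* step {row['step']} *) {tag}_CMD := STATE_{row['state']};")
              elif row["verb"] == "VERIFY":
                  lines.append(f"(* step {row['step']} *) IF {tag}_FB <> STATE_{row['state']} "
                               f"THEN FaultStep := {row['step']}; END_IF;")
          return "\n".join(lines)

      print(emit_structured_text(spec_rows))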

  4. Boechera species exhibit species-specific responses to combined heat and high light stress.

    PubMed

    Gallas, Genna; Waters, Elizabeth R

    2015-01-01

    As sessile organisms, plants must be able to complete their life cycle in place and therefore tolerance to abiotic stress has had a major role in shaping biogeographical patterns. However, much of what we know about plant tolerance to abiotic stresses is based on studies of just a few plant species, most notably the model species Arabidopsis thaliana. In this study we examine natural variation in the stress responses of five diverse Boechera (Brassicaceae) species. Boechera plants were exposed to basal and acquired combined heat and high light stress. Plant response to these stresses was evaluated based on chlorophyll fluorescence measurements, induction of leaf chlorosis, and gene expression. Many of the Boechera species were more tolerant to heat and high light stress than A. thaliana. Gene expression data indicates that two important marker genes for stress responses: APX2 (Ascorbate peroxidase 2) and HsfA2 (Heat shock transcription factor A2) have distinct species-specific expression patterns. The findings of species-specific responses and tolerance to stress indicate that stress pathways are evolutionarily labile even among closely related species.

  5. Highly selective nanocomposite sorbents for the specific recognition of S-ibuprofen from structurally related compounds

    NASA Astrophysics Data System (ADS)

    Sooraj, M. P.; Mathew, Beena

    2016-06-01

    The aim of the present work was to synthesize highly homogeneous synthetic recognition units for the selective and specific separation of S-ibuprofen from its closely related structural analogues using molecular imprinting technology. The molecularly imprinted polymer wrapped on functionalized multiwalled carbon nanotubes (MWCNT-MIP) was synthesized using S-ibuprofen as the template in the imprinting process. Characterization of the products and intermediates was done by FT-IR spectroscopy, PXRD, TGA, SEM and TEM techniques. The high regression coefficient for the Langmuir adsorption isotherm (R² = 0.999) indicated homogeneous imprint sites and the surface-adsorption nature of the prepared polymer sorbent. The nano-MIP followed second-order kinetics (R² = 0.999) with a rapid adsorption rate, which also suggested the formation of recognition sites on the surface of the MWCNT-MIP. The MWCNT-MIP showed 83.6 % higher rebinding capacity than its non-imprinted counterpart. The higher relative selectivity coefficient (k′) of the imprinted sorbent towards S-ibuprofen, compared with its structural analogues, evidenced the capability of the nano-MIP to selectively and specifically rebind the template rather than its analogues.
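
    For readers who want to reproduce this kind of isotherm analysis, the sketch below fits synthetic rebinding data to the Langmuir model q = q_max·K·C/(1 + K·C) and reports R²; the data points, units and starting parameters are placeholders, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c, q_max, k):
          return q_max * k * c / (1.0 + k * c)

      conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])        # mmol/L (synthetic)
      q_obs = np.array([5.1, 9.0, 15.9, 24.2, 34.0, 41.7])    # mg/g   (synthetic)

      (q_max, k), _ = curve_fit(langmuir, conc, q_obs, p0=[50.0, 1.0])
      residuals = q_obs - langmuir(conc, q_max, k)
      r2 = 1.0 - np.sum(residuals**2) / np.sum((q_obs - q_obs.mean())**2)
      print(f"q_max = {q_max:.1f} mg/g, K = {k:.2f} L/mmol, R^2 = {r2:.4f}")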

  6. UV-LED system to obtain high power density in specific working-plane

    NASA Astrophysics Data System (ADS)

    Li, Renyuan; Sun, Xiuhui; Gou, Jian; Cai, Wentao; Du, Chunlei; Yin, Shaoyun

    2014-11-01

    With the advantages of low cost, small volume, low energy consumption, long service life and environmental friendliness, UV-LEDs have attracted widespread attention from academic and industrial researchers, especially in the ink printing industry. However, obtaining high power density at a working plane at a specific distance is a technical problem that urgently needs to be solved. This paper presents a design solution that reduces the etendue of the lighting system and thereby obtains high power density. The design uses a UV-LED array as the light source and a freeform-surface collimating lens array to collimate it. To improve the energy efficiency of the system, a multipoint-fitting-based freeform-surface lens design for UV-LED extended sources is proposed and used to design the collimating lenses. The freeform-surface collimating lens array is placed in front of the UV-LED extended-source array, and an aspherical lens is used in the optical path to focus the light beam. In simulation, a light source module with a size of 9 mm × 26 mm was designed and achieved a power density of up to 8 W/cm² at the specified working plane with a working distance of 3 cm. This design is expected to replace existing mercury-lamp-based UV light sources and to solve problems in the application of UV-LEDs to the ink printing field.

  7. [Hand injuries resulting from high-pressure injection: lesions specific to industrial oil].

    PubMed

    Obert, L; Lepage, D; Jeunet, D; Gérard, F; Garbuio, P; Tropet, Y

    2002-12-01

    Nineteen cases of high-pressure injection injuries of the hand were treated between 1973 and 1998. The same surgical treatment plan was followed in all cases: excision of the penetration point, irrigation, debridement and synovectomy if the flexor sheath had been opened, and skin closure to allow early mobilisation. All cases involved men, were work injuries, and affected the volar aspect of the hand. The elapsed time between injection and initial surgery ranged from 1 hour to 1 month, with a mean of 6.5 days. Eighteen of the 19 patients were reviewed, with a mean follow-up of 12 years. In 11/19 cases (58%), oil was injected. The results of the oil injection cases were analysed: the quantity of oil and a preoperative delay of more than ten hours were associated with poor functional results or complications. Two amputations, two cases of skin necrosis at the injection point, and one case of infection are reported. One case of oleoma of the thumb is described. The specific features of injuries caused by industrial oil under pressure must be known: paint or white spirit is more toxic than oil, which in all cases was injected into the dominant hand (not by a high-pressure injection tool but through a defect in the pipe). Foreign bodies in oil cause an important inflammatory reaction with functional sequelae. Extraction of oil from the injured tissues is difficult because the oil is not visible. Specific information is necessary for farmers and truck drivers, who are a highly exposed population.

  8. Boechera Species Exhibit Species-Specific Responses to Combined Heat and High Light Stress

    PubMed Central

    Gallas, Genna; Waters, Elizabeth R.

    2015-01-01

    As sessile organisms, plants must be able to complete their life cycle in place and therefore tolerance to abiotic stress has had a major role in shaping biogeographical patterns. However, much of what we know about plant tolerance to abiotic stresses is based on studies of just a few plant species, most notably the model species Arabidopsis thaliana. In this study we examine natural variation in the stress responses of five diverse Boechera (Brassicaceae) species. Boechera plants were exposed to basal and acquired combined heat and high light stress. Plant response to these stresses was evaluated based on chlorophyll fluorescence measurements, induction of leaf chlorosis, and gene expression. Many of the Boechera species were more tolerant to heat and high light stress than A. thaliana. Gene expression data indicates that two important marker genes for stress responses: APX2 (Ascorbate peroxidase 2) and HsfA2 (Heat shock transcription factor A2) have distinct species-specific expression patterns. The findings of species-specific responses and tolerance to stress indicate that stress pathways are evolutionarily labile even among closely related species. PMID:26030823

  9. Specific detection of Aspergillus parasiticus in wheat flour using a highly sensitive PCR assay.

    PubMed

    Sardiñas, Noelia; Vázquez, Covadonga; Gil-Serna, Jessica; González-Jaen, M Teresa; Patiño, Belén

    2010-06-01

    Aspergillus parasiticus is one of the most important aflatoxin-producing species that contaminates foodstuffs and beverages for human consumption. In this work, a specific and highly sensitive PCR protocol was developed to detect A. parasiticus using primers designed on the multicopy internal transcribed region of the rDNA unit (ITS1-5.8S-ITS2 rDNA). The assay proved to be highly specific for A. parasiticus when tested on a wide range of related and other fungal species commonly found in commodities, allowing discrimination from the closely related A. flavus. Accuracy of detection and quantification by conventional PCR was tested with genomic DNA obtained from wheat flour artificially contaminated with spore suspensions of known concentrations. Spore concentrations equal to or higher than 10⁶ spores/g could be detected by the assay directly, without prior incubation of the samples. The assay described is suitable for incorporation in routine analyses at critical points of the food chain within HACCP strategies. PMID:20486001

  10. Boechera species exhibit species-specific responses to combined heat and high light stress.

    PubMed

    Gallas, Genna; Waters, Elizabeth R

    2015-01-01

    As sessile organisms, plants must be able to complete their life cycle in place and therefore tolerance to abiotic stress has had a major role in shaping biogeographical patterns. However, much of what we know about plant tolerance to abiotic stresses is based on studies of just a few plant species, most notably the model species Arabidopsis thaliana. In this study we examine natural variation in the stress responses of five diverse Boechera (Brassicaceae) species. Boechera plants were exposed to basal and acquired combined heat and high light stress. Plant response to these stresses was evaluated based on chlorophyll fluorescence measurements, induction of leaf chlorosis, and gene expression. Many of the Boechera species were more tolerant to heat and high light stress than A. thaliana. Gene expression data indicates that two important marker genes for stress responses: APX2 (Ascorbate peroxidase 2) and HsfA2 (Heat shock transcription factor A2) have distinct species-specific expression patterns. The findings of species-specific responses and tolerance to stress indicate that stress pathways are evolutionarily labile even among closely related species. PMID:26030823

  11. Hydraulic conductivity, specific yield, and pumpage--High Plains aquifer system, Nebraska

    USGS Publications Warehouse

    Pettijohn, Robert A.; Chen, Hsiu-Hsiung

    1983-01-01

    Hydrologic data used to evaluate the ground-water potential of the High Plains aquifer system in Nebraska are presented on maps showing the hydraulic conductivity and specific yield of the aquifer system and the volume and distribution of water pumped for irrigation from the aquifer system during 1980. The High Plains aquifer system underlies 177,000 square miles in parts of eight states, including 64,770 square miles in Nebraska. It consists of the Ogallala Formation and Tertiary and Quaternary deposits that are saturated and hydraulically connected to the Ogallala. The hydraulic conductivity of the aquifer system varies from greater than 200 feet per day in parts of the North Platte, Platte, Elkhorn, and Republican River valleys to less than 25 feet per day in the northwestern part of the state. Specific yield of the aquifer system ranges from 10 to 20 percent in most of the state and averages 16 percent. The estimated volume of water recoverable from the aquifer system in Nebraska is 2,237 million acre-feet. Inches of water withdrawn from the aquifer system during 1980 varied from less than 1.5 in the sandhills of north-central Nebraska to more than 12 in the Platte River and Blue River basins. This withdrawal represents about 6,703,000 acre-feet of ground water. (USGS)
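
    The quoted figures hang together arithmetically: recoverable volume is area times average saturated thickness times specific yield, so the 2,237 million acre-feet and the 16 percent average specific yield imply an average saturated thickness of roughly 340 feet over the 64,770 square miles, as the check below shows (a back-of-the-envelope consistency check, not a figure from the report).

      ACRES_PER_SQUARE_MILE = 640.0

      area_acres = 64_770 * ACRES_PER_SQUARE_MILE
      specific_yield = 0.16
      recoverable_acre_ft = 2_237e6

      implied_thickness_ft = recoverable_acre_ft / (area_acres * specific_yield)
      print(f"implied average saturated thickness: about {implied_thickness_ft:.0f} ft")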

  12. A Symmetrical, Planar SOFC Design for NASA's High Specific Power Density Requirements

    NASA Technical Reports Server (NTRS)

    Cable, Thomas L.; Sofie, Stephen W.

    2007-01-01

    Solid oxide fuel cell (SOFC) systems for aircraft applications require an order of magnitude increase in specific power density (1.0 kW/kg) and long life. While significant research is underway to develop anode supported cells which operate at temperatures in the range of 650-800 °C, concerns about Cr-contamination from the metal interconnect may drive the operating temperature down further, to 750 °C and lower. Higher temperatures, 900-1000 °C, are more favorable for SOFC stacks to achieve specific power densities of 1.0 kW/kg. Since metal interconnects are not practical at these high temperatures and can account for up to 75% of the weight of the stack, NASA is pursuing a design that uses a thin, LaCrO3-based ceramic interconnect that incorporates gas channels into the electrodes. The bi-electrode supported cell (BSC) uses porous YSZ scaffolds, on either side of a 10-20 µm electrolyte. The porous support regions are fabricated with graded porosity using the freeze-tape casting process which can be tailored for fuel and air flow. Removing gas channels from the interconnect simplifies the stack design and allows the ceramic interconnect to be kept thin, on the order of 50-100 µm. The YSZ electrode scaffolds are infiltrated with active electrode materials following the high temperature sintering step. The NASA-BSC is symmetrical and CTE matched, providing balanced stresses and favorable mechanical properties for vibration and thermal cycling.

  13. Genetic and Sex-Specific Transgenerational Effects of a High Fat Diet in Drosophila melanogaster

    PubMed Central

    Dew-Budd, Kelly; Jarnigan, Julie

    2016-01-01

    An organism's phenotype is the product of its environment and genotype, but an ancestor’s environment can also be a contributing factor. The recent increase in caloric intake and decrease in physical activity of developed nations' populations is contributing to deteriorating health and making the study of the longer term impacts of a changing lifestyle a priority. The dietary habits of ancestors have been shown to affect phenotype in several organisms, including humans, mice, and the fruit fly. Whether the ancestral dietary effect is purely environmental or if there is a genetic interaction with the environment passed down for multiple generations has not been determined previously. Here we used the fruit fly, Drosophila melanogaster, to investigate the genetic, sex-specific, and environmental effects of a high fat diet for three generations on pupal body weights across ten genotypes. We also tested for genotype-specific transgenerational effects on metabolic pools and egg size across three genotypes. We showed that there were substantial differences in transgenerational responses to ancestral diet between genotypes and sexes through both first and second descendant generations. Additionally, there were differences in phenotypes between maternally and paternally inherited dietary effects. We also found that a treated organism’s reaction to a high fat diet was not a consistent predictor of its untreated descendants’ phenotype. The implication of these results is that, given our interest in understanding and preventing metabolic diseases like obesity, we need to consider the contribution of ancestral environmental experiences. However, we need to be cautious when drawing population-level generalizations from small studies because transgenerational effects are likely to exhibit substantial sex and genotype specificity. PMID:27518304

  14. Genetic and Sex-Specific Transgenerational Effects of a High Fat Diet in Drosophila melanogaster.

    PubMed

    Dew-Budd, Kelly; Jarnigan, Julie; Reed, Laura K

    2016-01-01

    An organism's phenotype is the product of its environment and genotype, but an ancestor's environment can also be a contributing factor. The recent increase in caloric intake and decrease in physical activity of developed nations' populations is contributing to deteriorating health and making the study of the longer term impacts of a changing lifestyle a priority. The dietary habits of ancestors have been shown to affect phenotype in several organisms, including humans, mice, and the fruit fly. Whether the ancestral dietary effect is purely environmental or if there is a genetic interaction with the environment passed down for multiple generations has not been determined previously. Here we used the fruit fly, Drosophila melanogaster, to investigate the genetic, sex-specific, and environmental effects of a high fat diet for three generations on pupal body weights across ten genotypes. We also tested for genotype-specific transgenerational effects on metabolic pools and egg size across three genotypes. We showed that there were substantial differences in transgenerational responses to ancestral diet between genotypes and sexes through both first and second descendant generations. Additionally, there were differences in phenotypes between maternally and paternally inherited dietary effects. We also found that a treated organism's reaction to a high fat diet was not a consistent predictor of its untreated descendants' phenotype. The implication of these results is that, given our interest in understanding and preventing metabolic diseases like obesity, we need to consider the contribution of ancestral environmental experiences. However, we need to be cautious when drawing population-level generalizations from small studies because transgenerational effects are likely to exhibit substantial sex and genotype specificity. PMID:27518304

  15. Image Registration of High-Resolution Uav Data: the New Hypare Algorithm

    NASA Astrophysics Data System (ADS)

    Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.

    2013-08-01

    Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes, E/O- and IR sensor technology, they are, due to their agility, suitable for many applications. Hence, the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential. It serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. Therefore we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by the registration of 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.
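    The HyPARE engine itself is proprietary and only summarized above; as a hedged illustration of the final step it automates, the sketch below fits a 2-D affine transform to already-matched tie points by least squares (all data and names are invented for illustration).

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src tie points to dst tie points.

    src_pts, dst_pts: (N, 2) arrays of matched tie-point coordinates.
    Returns a 2x3 matrix M such that dst ≈ M @ [x, y, 1]^T.
    """
    n = len(src_pts)
    # Design matrix: each tie point contributes a row [x, y, 1]
    X = np.hstack([src_pts, np.ones((n, 1))])
    # Two independent least-squares problems, one per output coordinate
    M, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)
    return M.T  # shape (2, 3)

# Hypothetical tie points, e.g. produced upstream by a feature matcher
rng = np.random.default_rng(0)
src = rng.uniform(0, 1000, size=(50, 2))
true_M = np.array([[0.98, 0.05, 12.0],
                   [-0.04, 1.01, -7.5]])
dst = src @ true_M[:, :2].T + true_M[:, 2] + rng.normal(0, 0.3, size=(50, 2))

M_est = fit_affine(src, dst)
print(np.round(M_est, 3))  # should be close to true_M
```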

  16. Applications of an adaptive unstructured solution algorithm to the analysis of high speed flows

    NASA Technical Reports Server (NTRS)

    Thareja, R. R.; Prabhu, R. K.; Morgan, K.; Peraire, J.; Peiro, J.

    1990-01-01

    An upwind cell-centered scheme for the solution of steady laminar viscous high-speed flows is implemented on unstructured two-dimensional meshes. The first-order implementation employs Roe's (1981) approximate Riemann solver, and a higher-order extension is produced by using linear reconstruction with limiting. The procedure is applied to the solution of inviscid subsonic flow over an airfoil, inviscid supersonic flow past a cylinder, and viscous hypersonic flow past a double ellipse. A detailed study is then made of a hypersonic laminar viscous flow on a 24-deg compression corner. It is shown that good agreement is achieved with previous predictions using finite-difference and finite-volume schemes. However, these predictions do not agree with experimental observations. With refinement of the structured grid at the leading edge, good agreement with experimental observations for the distributions of wall pressure, heating rate and skin friction is obtained.
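    As a hedged, much-simplified analogue of the reconstruction-with-limiting step described above (not the paper's 2-D Roe solver), the sketch below applies minmod-limited piecewise-linear reconstruction to 1-D scalar advection with an upwind flux and forward-Euler time stepping.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter applied elementwise."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, c, dx, dt):
    """One limited, higher-order upwind step for linear advection (c > 0), periodic BCs."""
    dl = u - np.roll(u, 1)            # left one-sided slope
    dr = np.roll(u, -1) - u           # right one-sided slope
    slope = minmod(dl, dr)            # limited cell slope
    u_face = u + 0.5 * slope          # reconstructed value at each cell's right face
    flux = c * u_face                 # upwind flux: left state is used for c > 0
    return u - dt / dx * (flux - np.roll(flux, 1))

# Hypothetical setup: advect a Gaussian bump around a periodic domain
nx, c = 200, 1.0
x = np.linspace(0, 1, nx, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200 * (x - 0.3) ** 2)
for _ in range(100):
    u = advect_step(u, c, dx, dt=0.4 * dx / c)
```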

  17. High Resolution Doppler Imager FY 2001,2002,2003 Operations and Algorithm Maintenance

    NASA Technical Reports Server (NTRS)

    Skinner, Wilbert

    2004-01-01

    During the performance period of this grant, HRDI (High Resolution Doppler Imager) operations remained nominal. The instrument has suffered no loss of scientific capability and operates whenever sufficient power is available. Generally, there are approximately 5-7 days per month when the power level is too low to permit observations. The daily latitude coverage for HRDI measurements in the mesosphere/lower thermosphere (MLT) region is shown; it indicates that, during the time of this grant, HRDI collected data at a rate comparable to that achieved during the UARS (Upper Atmosphere Research Satellite) prime mission (1991-1995). Data collection emphasized MLT winds to support the validation efforts of the TIDI instrument on TIMED, thereby fulfilling one of the primary objectives of this phase of the UARS mission. Skinner et al. (2003) present a summary of the instrument performance during this period.

  18. Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.

    2009-01-01

    For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain because standard monitors are inherently Low-Dynamic-Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show how our enhancement scheme relates to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
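    The published scheme is not reproduced here; the sketch below only illustrates the general idea of per-band gain as a non-linear function of local detail energy, with all parameter choices invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_enhance(img, sigmas=(1.0, 2.0, 4.0), strength=0.5, eps=1e-3):
    """Boost per-band detail with a gain that decreases as local detail energy grows.

    Large-amplitude details (strong edges) get gain close to 1 (little boost,
    which limits halo artifacts); small-amplitude details get gain up to
    1 + strength. This is one simple way to make gain a non-linear function of
    detail energy; it is not the authors' scheme.
    """
    base = img.astype(float)
    out = np.zeros_like(base)
    prev = base
    for s in sigmas:
        low = gaussian_filter(prev, s)
        detail = prev - low                       # band-pass detail layer
        energy = gaussian_filter(detail ** 2, s)  # local detail energy
        gain = 1.0 + strength / (1.0 + energy / (eps + energy.mean()))
        out += gain * detail
        prev = low
    return out + prev                             # enhanced details + residual base

# Hypothetical use on a synthetic HDR-like image (log compression first)
img = np.random.default_rng(1).gamma(2.0, size=(128, 128)) * 1000.0
enhanced = multiscale_enhance(np.log1p(img))
```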

  19. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped onto an efficient high performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964

  20. Dual super-systolic core for real-time reconstructive algorithms of high-resolution radar/SAR imaging systems.

    PubMed

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped onto an efficient high performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode.
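    As a hedged, word-level illustration of the systolic-array principle underlying the core described above (not its bit-level, super-systolic FPGA implementation), the sketch below simulates an output-stationary systolic array computing a matrix product cycle by cycle.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-accurate simulation of an n x n output-stationary systolic array.

    Each processing element (i, j) accumulates one element C[i, j]; operands of
    A stream in from the left edge and operands of B from the top edge, skewed
    by one cycle per row/column, and are forwarded right/downward each cycle.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))   # value held in each PE's horizontal register
    b_reg = np.zeros((n, n))   # value held in each PE's vertical register
    for t in range(3 * n - 2):                 # enough cycles to drain the array
        a_reg = np.roll(a_reg, 1, axis=1)      # data moves one PE to the right
        b_reg = np.roll(b_reg, 1, axis=0)      # data moves one PE downward
        for i in range(n):
            k = t - i                          # operand index arriving at edge PE i now
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < n else 0.0
        C += a_reg * b_reg                     # every PE multiplies and accumulates
    return C

A = np.arange(9, dtype=float).reshape(3, 3)
B = np.ones((3, 3))
assert np.allclose(systolic_matmul(A, B), A @ B)
```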

  1. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash

    PubMed Central

    Pelletier, Mathew G.

    2008-01-01

    One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPU) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed for the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems, is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned which in turn causes lint fiber-damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a tightly coupled trash sensor to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit “GPU”, for processing of the cotton trash images, a speed up of over 6.5 times, over optimized code running on the PC's central processing unit “CPU”, was gained. The new

  2. An out-of-core high-resolution FFT algorithm for determining large-scale imperfections of surface potentials in crystals

    NASA Astrophysics Data System (ADS)

    Bakhos, M.; Vincent, A. P.; Yuen, D. A.

    2005-06-01

    We present a simple out-of-core algorithm for computing the Fast-Fourier Transform (FFT) needed to determine the two-dimensional potential of surface crystals with large-scale features, like faults, at ultra-high resolution, with around 10⁹ grid points. This algorithm represents a proof of concept that a simple and easy-to-code, out-of-core algorithm can be easily implemented and used to solve large-scale problems on low-cost hardware. The main novelties of our algorithm are: (1) elapsed and I/O times decrease with the number of single records (lines) being read; (2) only basic reading and writing routines are necessary for the out-of-core access. Our method can be easily extended to 3D and be applied to many grand-challenge problems in science and engineering, such as fluid dynamics.
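    The record does not give the authors' exact I/O layout; the sketch below is a minimal out-of-core 2-D FFT in the same spirit, processing blocks of rows (single records) through a memory-mapped file. The file name, block size, and demo dimensions are invented for illustration.

```python
import numpy as np

def out_of_core_fft2(path, shape, block=64):
    """Two-pass out-of-core 2-D FFT over a row-major complex128 array on disk.

    Pass 1 FFTs blocks of whole rows (single records) in place; pass 2 FFTs
    blocks of columns. Only `block` lines are held in memory at once.
    """
    data = np.memmap(path, dtype=np.complex128, mode="r+", shape=shape)
    nrows, ncols = shape
    # Pass 1: row-wise FFTs, reading/writing `block` records at a time
    for r in range(0, nrows, block):
        data[r:r + block] = np.fft.fft(data[r:r + block], axis=1)
    # Pass 2: column-wise FFTs (strided reads; an explicit out-of-core
    # transpose would be faster but is omitted to keep the sketch short)
    for c in range(0, ncols, block):
        data[:, c:c + block] = np.fft.fft(data[:, c:c + block], axis=0)
    data.flush()

# Hypothetical small demo file (a real use case would hold ~1e9 points)
shape = (256, 256)
arr = np.random.default_rng(2).normal(size=shape).astype(np.complex128)
arr.tofile("field.bin")
out_of_core_fft2("field.bin", shape)
ooc = np.memmap("field.bin", dtype=np.complex128, mode="r", shape=shape)
assert np.allclose(ooc, np.fft.fft2(arr))
```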

  3. Identification of proteins that form specific complexes with the highly conserved protein Translin in Schizosaccharomyces pombe.

    PubMed

    Eliahoo, Elad; Litovco, Phyana; Ben Yosef, Ron; Bendalak, Keren; Ziv, Tamar; Manor, Haim

    2014-04-01

    Translin is a single-stranded DNA and RNA binding protein that has a high affinity for G-rich sequences. TRAX is a Translin paralog that associates with Translin. Both Translin and TRAX are highly conserved in eukaryotes. The nucleic acid binding form of Translin is a barrel-shaped homo-octamer. A Translin-TRAX hetero-octamer having a similar structure also binds nucleic acids. Previous reports suggested that Translin may be involved in chromosomal translocations, telomere metabolism and the control of mRNA transport and translation. More recent studies have indicated that Translin-TRAX hetero-octamers are involved in RNA silencing. To gain a further insight into the functions of Translin, we have undertaken to systematically search for proteins with which it forms specific complexes in living cells. Here we report the results of such a search conducted in the fission yeast Schizosaccharomyces pombe, a suitable model system. This search was carried out by affinity purification and immuno-precipitation techniques, combined with differential labeling of the intracellular proteins with the stable isotopes ¹⁵N and ¹⁴N. We identified for the first time two proteins containing an RNA Recognition Motif (RRM), which are specifically associated with the yeast Translin: (1) the pre-mRNA-splicing factor srp1 that belongs to the highly conserved SR family of proteins and (2) vip1, a protein conserved in fungi. Our data also support the presence of RNA in these intracellular complexes. Our experimental approach should be generally applicable to studies of weak intracellular protein-protein interactions and provides a clear distinction between false-positive vs. truly interacting proteins.

  4. Devices and approaches for generating specific high-affinity nucleic acid aptamers

    NASA Astrophysics Data System (ADS)

    Szeto, Kylan; Craighead, Harold G.

    2014-09-01

    High-affinity and highly specific antibody proteins have played a critical role in biological imaging, medical diagnostics, and therapeutics. Recently, a new class of molecules called aptamers has emerged as an alternative to antibodies. Aptamers are short nucleic acid molecules that can be generated and synthesized in vitro to bind to virtually any target in a wide range of environments. They are, in principle, less expensive and more reproducible than antibodies, and their versatility creates possibilities for new technologies. Aptamers are generated using libraries of nucleic acid molecules with random sequences that are subjected to affinity selections for binding to specific target molecules. This is commonly done through a process called Systematic Evolution of Ligands by EXponential enrichment, in which target-bound nucleic acids are isolated from the pool, amplified to high copy numbers, and then reselected against the desired target. This iterative process is continued until the highest affinity nucleic acid sequences dominate the enriched pool. Traditional selections require a dozen or more laborious cycles to isolate strongly binding aptamers, which can take months to complete and consume large quantities of reagents. However, new devices and insights from engineering and the physical sciences have contributed to a reduction in the time and effort needed to generate aptamers. As the demand for these new molecules increases, more efficient and sensitive selection technologies will be needed. These new technologies will need to use smaller samples, exploit a wider range of chemistries and techniques for manipulating binding, and integrate and automate the selection steps. Here, we review new methods and technologies that are being developed towards this goal, and we discuss their roles in accelerating the availability of novel aptamers.
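    As a hedged toy model of the iterative selection/amplification loop described above (not any specific SELEX protocol), the sketch below shows how repeated rounds enrich a pool for its highest-affinity members; all pool sizes, affinities, and retention probabilities are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pool: 10,000 distinct sequences with log-normal binding affinities
n_seqs = 10_000
affinity = rng.lognormal(mean=0.0, sigma=1.0, size=n_seqs)
counts = np.ones(n_seqs)                      # start with one copy of each sequence

for rnd in range(8):                          # iterative selection/amplification rounds
    # Selection: probability of being retained rises with affinity (saturating)
    p_bind = affinity / (affinity + np.median(affinity))
    retained = rng.binomial(counts.astype(int), p_bind)
    # Amplification: PCR-like resampling back up to the original pool size
    frac = retained / retained.sum()
    counts = rng.multinomial(n_seqs, frac).astype(float)
    top_share = counts[np.argsort(affinity)[-10:]].sum() / counts.sum()
    print(f"round {rnd + 1}: top-10 affinity sequences hold {top_share:.1%} of the pool")
```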

  5. Differential membrane-based nanocalorimeter for high-resolution measurements of low-temperature specific heat.

    PubMed

    Tagliati, S; Krasnov, V M; Rydh, A

    2012-05-01

    A differential, membrane-based nanocalorimeter for general specific heat studies of very small samples, ranging from 0.5 mg to sub-μg in mass, is described. The calorimeter operates over the temperature range from above room temperature down to 0.5 K. It consists of a pair of cells, each of which is a stack of heaters and thermometer in the center of a silicon nitride membrane, in total giving a background heat capacity less than 100 nJ/K at 300 K, decreasing to 10 pJ/K at 1 K. The device has several distinctive features: (i) The resistive thermometer, made of a Ge₁₋ₓAuₓ alloy, displays a high dimensionless sensitivity |d ln R/d ln T| ≳ 1 over the entire temperature range. (ii) The sample is placed in direct contact with the thermometer, which is allowed to self-heat. The thermometer can thus be operated at high dc current to increase the resolution. (iii) Data are acquired with a set of eight synchronized lock-in amplifiers measuring dc, 1st and 2nd harmonic signals of heaters and thermometer. This gives high resolution and allows continuous output adjustments without additional noise. (iv) Absolute accuracy is achieved via a variable-frequency-fixed-phase technique in which the measurement frequency is automatically adjusted during the measurements to account for the temperature variation of the sample heat capacity and the device thermal conductance. The performance of the calorimeter is illustrated by studying the heat capacity of a small Au sample and the specific heat of a 2.6 μg piece of superconducting Pb in various magnetic fields.
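    A hedged worked example of the quoted dimensionless sensitivity and the temperature resolution it implies (the resistance-resolution figure is assumed purely for illustration):

```latex
% Dimensionless thermometer sensitivity and the temperature resolution it implies.
% With a relative resistance resolution of \delta R/R = 10^{-6} (assumed figure)
% and S = 1, the relative temperature resolution is also 10^{-6}, i.e. ~1 uK at 1 K.
S = \left|\frac{\mathrm{d}\ln R}{\mathrm{d}\ln T}\right|
  = \left|\frac{T}{R}\,\frac{\mathrm{d}R}{\mathrm{d}T}\right|,
\qquad
\frac{\delta T}{T} = \frac{1}{S}\,\frac{\delta R}{R}.
```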

  6. Tetrathiatriarylmethyl radical with a single aromatic hydrogen as a highly sensitive and specific superoxide probe.

    PubMed

    Liu, Yangping; Song, Yuguang; De Pascali, Francesco; Liu, Xiaoping; Villamena, Frederick A; Zweier, Jay L

    2012-12-01

    Superoxide (O₂•⁻) plays crucial roles in normal physiology and disease; however, its measurement remains challenging because of the limited sensitivity and/or specificity of prior detection methods. We demonstrate that a tetrathiatriarylmethyl (TAM) radical with a single aromatic hydrogen (CT02-H) can serve as a highly sensitive and specific O₂•⁻ probe. CT02-H is an analogue of the fully substituted TAM radical CT-03 (Finland trityl) with an electron paramagnetic resonance (EPR) doublet signal due to its aromatic hydrogen. Owing to the neutral nature and negligible steric hindrance of the hydrogen, O₂•⁻ preferentially reacts with CT02-H at this site with production of the diamagnetic quinone methide via oxidative dehydrogenation. Upon reaction with O₂•⁻, CT02-H loses its EPR signal and this EPR signal decay can be used to quantitatively measure O₂•⁻. This is accompanied by a change in color from green to purple, with the quinone methide product exhibiting a unique UV-Vis absorbance (ε = 15,900 M⁻¹ cm⁻¹) at 540 nm, providing an additional O₂•⁻ detection method. More than five-fold higher reactivity of CT02-H for O₂•⁻ relative to CT-03 was demonstrated, with a second-order rate constant of 1.7×10⁴ M⁻¹ s⁻¹ compared to 3.1×10³ M⁻¹ s⁻¹ for CT-03. CT02-H exhibited high specificity for O₂•⁻ as evidenced by its inertness to other oxidoreductants. The O₂•⁻ generation rates detected by CT02-H from xanthine/xanthine oxidase were consistent with those measured by cytochrome c reduction but detection sensitivity was 10- to 100-fold higher. EPR detection of CT02-H enabled measurement of very low O₂•⁻ flux with a detection limit of 0.34 nM/min over 120 min. HPLC in tandem with electrochemical detection was used to quantitatively detect the stable quinone methide product and is a highly sensitive and specific method for measurement of O₂•⁻, with a sensitivity limit of ~2×10⁻¹³ mol (10 nM with 20
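    A hedged Beer-Lambert estimate using the quoted molar absorptivity at 540 nm (the absorbance reading and the 1 cm path length are assumed for illustration):

```latex
% Beer-Lambert estimate of the quinone methide concentration from its 540 nm band.
% epsilon is quoted in the record; the absorbance A = 0.159 and the 1 cm path
% length are assumed purely for illustration.
c = \frac{A}{\varepsilon\,\ell}
  = \frac{0.159}{\left(15{,}900\ \mathrm{M^{-1}\,cm^{-1}}\right)\left(1\ \mathrm{cm}\right)}
  = 1.0\times10^{-5}\ \mathrm{M} = 10\ \mu\mathrm{M}.
```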

  7. A high-order compact difference algorithm for half-staggered grids for laminar and turbulent incompressible flows

    NASA Astrophysics Data System (ADS)

    Tyliszczak, Artur

    2014-11-01

    The paper presents a novel, efficient and accurate algorithm for laminar and turbulent flow simulations. The spatial discretisation is performed with help of the compact difference schemes (up to 10th order) for collocated and half-staggered grid arrangements. The time integration is performed by a predictor-corrector approach combined with the projection method for pressure-velocity coupling. At this stage a low order discretisation is introduced which considerably decreases the computational costs. It is demonstrated that such approach does not deteriorate the solution accuracy significantly. Following Boersma B.J. [13] the interpolation formulas developed for staggered uniform meshes are used also in the computations with a non-uniform strongly varying nodes distribution. In the proposed formulation of the projection method such interpolation is performed twice. It is shown that it acts implicitly as a high-order low pass filter and therefore the resulting algorithm is very robust. Its accuracy is first demonstrated based on simple 2D and 3D problems: an inviscid vortex advection, a decay of Taylor-Green vortices, a modified lid-driven cavity flow and a dipole-wall interaction. In periodic flow problems (the first two cases) the solution accuracy exhibits the 10th order behaviour, in the latter cases the 3rd and the 4th order is obtained. Robustness of the proposed method in the computations of turbulent flows is demonstrated for two classical cases: a periodic channel with Reτ=395 and Reτ=590 and a round jet with Re=21 000. The solutions are obtained without any turbulence model and also without any explicit techniques aiming to stabilise the solution. The results are in a very good agreement with literature DNS and LES data, both the mean and r.m.s. values are predicted correctly.
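    As a hedged, single-ingredient illustration of the compact (Padé-type) differencing used above (not the full 10th-order, half-staggered algorithm), the sketch below evaluates the classical 4th-order compact first derivative on a periodic grid.

```python
import numpy as np

def compact_derivative(f, h):
    """4th-order Padé (compact) first derivative on a periodic grid.

    Solves  (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/4)(f_{i+1} - f_{i-1}) / h
    with a dense cyclic system (fine for a sketch; production codes use
    dedicated cyclic tridiagonal solvers).
    """
    n = f.size
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                       # periodic wrap-around
    rhs = 0.75 * (np.roll(f, -1) - np.roll(f, 1)) / h
    return np.linalg.solve(A, rhs)

# Hypothetical accuracy check on a smooth periodic function
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(compact_derivative(np.sin(x), h) - np.cos(x)))
print(f"max error: {err:.2e}")   # roughly 1e-6 here, far below a 2nd-order scheme
```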

  8. AN ACTIVE-PASSIVE COMBINED ALGORITHM FOR HIGH SPATIAL RESOLUTION RETRIEVAL OF SOIL MOISTURE FROM SATELLITE SENSORS (Invited)

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Mladenova, I. E.; Narayan, U.

    2009-12-01

    Soil moisture is known to be an essential factor in controlling the partitioning of rainfall into surface runoff and infiltration and of solar energy into latent and sensible heat fluxes. Remote sensing has long proven its capability to obtain soil moisture in near real-time. However, at the present time the Advanced Scanning Microwave Radiometer (AMSR-E) on board NASA's AQUA platform is the only satellite sensor that supplies a soil moisture product. AMSR-E's coarse spatial resolution (~50 km at 6.9 GHz) strongly limits its applicability for small-scale studies. A very promising technique for spatial disaggregation by combining radar and radiometer observations has been demonstrated by the authors using a methodology based on the assumption that any change in measured brightness temperature and backscatter from one time step to the next is due primarily to change in soil wetness. The approach uses radiometric estimates of soil moisture at a lower resolution to compute the sensitivity of radar to soil moisture at the lower resolution. This estimate of sensitivity is then disaggregated using vegetation water content, vegetation type and soil texture information, which are the variables that determine the radar sensitivity to soil moisture and are generally available at the scale of the radar observation. This change detection algorithm is applied to several locations. We have used aircraft-observed active and passive data over the Walnut Creek watershed in Central Iowa in 2002; the Little Washita Watershed in Oklahoma in 2003 and the Murrumbidgee Catchment in southeastern Australia for 2006. All of these locations have different soils and land cover conditions, which leads to a rigorous test of the disaggregation algorithm. Furthermore, we compare the derived high spatial resolution soil moisture to in-situ sampling and ground observation networks.
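    A hedged, single-footprint sketch of this kind of change-detection disaggregation (all numbers, field names, and the form of the fine-scale scaling factor are invented for illustration; the operational algorithm is not reproduced here):

```python
import numpy as np

def disaggregate_change(sm_coarse_t0, sm_coarse_t1, sigma0_fine_t0, sigma0_fine_t1,
                        sensitivity_scale):
    """Change-detection style downscaling sketch for one radiometer footprint.

    sm_coarse_*      : radiometer soil moisture at two times (scalars, one footprint)
    sigma0_fine_*    : radar backscatter [dB] at fine resolution for the same times
    sensitivity_scale: fine-resolution multiplier (e.g. from vegetation/soil data)
                       that redistributes the coarse radar sensitivity
    Returns the fine-resolution soil moisture change field.
    """
    # Coarse sensitivity: dB of backscatter change per unit soil moisture change
    d_sigma_coarse = np.mean(sigma0_fine_t1 - sigma0_fine_t0)
    sens_coarse = d_sigma_coarse / (sm_coarse_t1 - sm_coarse_t0)
    # Disaggregate: scale the coarse sensitivity pixel by pixel, then invert it
    sens_fine = sens_coarse * sensitivity_scale
    return (sigma0_fine_t1 - sigma0_fine_t0) / sens_fine

# Hypothetical footprint: ~1 km radar pixels inside a ~50 km radiometer footprint
rng = np.random.default_rng(4)
shape = (50, 50)
sigma_t0 = -12.0 + rng.normal(0, 0.5, shape)
sigma_t1 = sigma_t0 + 2.0 + rng.normal(0, 0.2, shape)        # wetting event
veg_scale = np.clip(rng.normal(1.0, 0.1, shape), 0.7, 1.3)
dsm_fine = disaggregate_change(0.15, 0.25, sigma_t0, sigma_t1, veg_scale)
```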

  9. High divergence in primate-specific duplicated regions: Human and chimpanzee Chorionic Gonadotropin Beta genes

    PubMed Central

    2008-01-01

    Background: Low nucleotide divergence between human and chimpanzee does not sufficiently explain the species-specific morphological, physiological and behavioral traits. As gene duplication is a major prerequisite for the emergence of new genes and novel biological processes, comparative studies of human and chimpanzee duplicated genes may assist in understanding the mechanisms behind primate evolution. We addressed the divergence between human and chimpanzee duplicated genomic regions by using the Luteinizing Hormone Beta (LHB)/Chorionic Gonadotropin Beta (CGB) gene cluster as a model. The placental CGB genes that are essential for implantation have evolved from an ancestral pituitary LHB gene by duplications in the primate lineage. Results: We shotgun sequenced and compared the human (45,165 bp) and chimpanzee (39,876 bp) LHB/CGB regions and hereby present evidence for structural variation resulting in a discordant number of CGB genes (6 in human, 5 in chimpanzee). The scenario of species-specific parallel duplications was supported (i) as the most parsimonious solution requiring the least rearrangement events to explain the interspecies structural differences; (ii) by the phylogenetic trees constructed with fragments of intergenic regions; (iii) by the sequence similarity calculations. Across the orthologous regions of the LHB/CGB cluster, substitutions and indels contributed approximately equally to the interspecies divergence and the distribution of nucleotide identity was correlated with the regional repeat content. Intraspecies gene conversion may have shaped the LHB/CGB gene cluster. The substitution divergence (1.8-2.59%) exceeded the estimates for single-copy loci by two- to three-fold, and the fraction of transversional mutations was increased compared to the unique sequences (43% versus ~30%). Despite the high sequence identity among LHB/CGB genes, there are signs of functional differentiation among the gene copies. Estimates of the dn/ds rate ratio suggested a purifying
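    As a hedged illustration of the two divergence statistics quoted above (substitution divergence and transversion fraction), the sketch below computes them for a pair of toy aligned sequences; it ignores indels and is not the authors' pipeline.

```python
# Toy computation of percent substitution divergence and transversion fraction
# for two aligned, gap-free sequences (sequences invented for illustration).
purines = {"A", "G"}

def divergence_stats(seq1, seq2):
    """Return (percent divergence, percent of substitutions that are transversions)."""
    subs = [(a, b) for a, b in zip(seq1, seq2) if a != b]
    divergence = 100.0 * len(subs) / len(seq1)
    transversions = sum(1 for a, b in subs if (a in purines) != (b in purines))
    tv_fraction = 100.0 * transversions / len(subs) if subs else 0.0
    return divergence, tv_fraction

d, tv = divergence_stats("ACGTTGCAACGTACGTTGCA", "ACGTTGCTACGAACGTTGCA")
print(f"divergence = {d:.1f}%, transversions = {tv:.0f}% of substitutions")
```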

  10. The efficient production of high specific activity copper-64 using a biomedical cyclotron

    SciTech Connect

    McCarthy, D.W.; Shefer, R.E.; Klinkowstein, R.E.; Bass, L.A.

    1996-05-01

    We have developed a method for the efficient and cost-effective production of high specific activity Cu-64, via the Ni-64(p,n)Cu-64 reaction, using a small biomedical cyclotron. Nickel-64 (95% enriched) has been successfully electroplated on gold disks at thicknesses of ~20-300 μm and bombarded with protons at beam currents up to ~45 microamps. An automated target has been designed to facilitate the irradiations on a biomedical cyclotron. Techniques have been developed for the rapid and efficient separation of Cu-64 from Ni-64 and other reaction byproducts using ion exchange chromatography. An initial production run using 55 mg of 95% enriched Ni-64 yielded 20 GBq of Cu-64 with a specific activity of 4.5 GBq/μg (determined by serial dilution titrations with TETA). In a series of experiments, bombardment of 18.7-23.7 mg of 85% enriched Ni-64 produced 8.9-18.5 GBq of Cu-64 (133 ± 10 MBq/μAhr) with specific activities of 3.5-11.5 GBq/μg. The amount and specific activity of the Cu-64 produced is more than adequate for both PET and therapy experiments. The Cu-64 has been used to radiolabel PTSM (pyruvaldehyde bis(N4-methylthiosemicarbazone), used to quantify blood flow), a monoclonal antibody (1A3), and octreotide. An efficient technique for recycling the costly enriched nickel-64 target material has been developed. Nickel eluted off the separation column is collected, boiled to dryness and redissolved in the electroplating bath. Using this method, 94.2 ± 3.2% of the Ni-64 has been recovered. The technique described provides a simple, cost-effective method for the cyclotron production of Cu-64.
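    A hedged order-of-magnitude check of the quoted production rate (the beam current and irradiation time are assumed for illustration, and the rate and total-yield figures come from different runs in the record):

```latex
% Order-of-magnitude check of the quoted yield rate of 133 MBq per microampere-hour.
% The 30 uA beam current and 5 h irradiation below are assumed for illustration;
% the rate and the 20 GBq initial-run yield come from different runs in the record.
A_{\mathrm{Cu\text{-}64}} \approx 133\ \mathrm{MBq/(\mu A\,h)}
  \times 30\ \mu\mathrm{A} \times 5\ \mathrm{h}
  \approx 2.0\times10^{4}\ \mathrm{MBq} = 20\ \mathrm{GBq}.
```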

  11. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  12. The Mathematics of High School Physics - Models, Symbols, Algorithmic Operations and Meaning

    NASA Astrophysics Data System (ADS)

    Kanderakis, Nikos

    2016-09-01

    In the seventeenth and eighteenth centuries, mathematicians and physical philosophers managed to study, via mathematics, various physical systems of the sublunar world through idealized and simplified models of these systems, constructed with the help of geometry. By analyzing these models, they were able to formulate new concepts, laws and theories of physics and then through models again, to apply these concepts and theories to new physical phenomena and check the results by means of experiment. Students' difficulties with the mathematics of high school physics are well known. Science education research attributes them to inadequately deep understanding of mathematics and mainly to inadequate understanding of the meaning of symbolic mathematical expressions. There seem to be, however, more causes of these difficulties. One of them, not independent from the previous ones, is the complex meaning of the algebraic concepts used in school physics (e.g. variables, parameters, functions), as well as the complexities added by physics itself (e.g. that equations' symbols represent magnitudes with empirical meaning and units instead of pure numbers). Another source of difficulties is that the theories and laws of physics are often applied, via mathematics, to simplified, and idealized physical models of the world and not to the world itself. This concerns not only the applications of basic theories but also all authentic end-of-the-chapter problems. Hence, students have to understand and participate in a complex interplay between physics concepts and theories, physical and mathematical models, and the real world, often without being aware that they are working with models and not directly with the real world.

  13. Pile-up reconstruction algorithm for high count rate gamma-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Petrovič, T.; Vencelj, M.; Lipoglavšek, M.; Novak, R.; Savran, D.

    2013-04-01

    In high count rate γ-ray spectrometry, the pile-up phenomenon turns out to be an important problem with respect to energy resolution and detection efficiency. Pile-up effects occur when two events are detected so close in time that instrumentation cannot properly extract information from both of them. Because such data are incorrect and only marginally useful, they had to be rejected by traditional pulse processors. In the era of digital pulse processing, however, one can reconstruct piled-up pulse amplitudes by special algebraic approaches. In fully digital signal acquisition, the moving window deconvolution (MWD) method is commonly used. This method requires two parameters to be carefully set, namely the flattop time (dictated by the maximum rise time of the signal) and the shaping time, to accomplish the best possible energy resolution. In this way, the maximum energy resolution is accomplished, but a lot of piled-up events are rejected, reducing detection efficiency. We propose a method that restores some of the pile-up events, using a parallel block MWD implementation where the shaping time parameter differs for every MWD block. Careful detection of as many true events as possible, as well as determining their exact occurrence in time (their respective timestamps), is the key to getting the most out of the measured signal. With proper analysis logic we get more experimental information through reduced dead time, at the cost of controlled and selectively worsened energy resolution, on an event-by-event basis, achieving better overall detection efficiency. This method was tested on real experimental data where the detection efficiency of our method is higher, by a factor of 4.4(9), than the efficiency of a standard method with pile-up rejection at a 500 kcps count rate.
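    As a hedged, single-branch sketch of moving window deconvolution (not the parallel multi-shaping-time implementation proposed above), the code below deconvolves an exponential preamplifier decay and applies a moving-average shaping stage; all pulse parameters are invented.

```python
import numpy as np

def mwd(x, window, tau, shaping):
    """Single-branch moving window deconvolution (MWD) of an exponential pulse.

    window  : deconvolution window M (samples), must exceed the pulse rise time
    tau     : exponential decay constant of the preamplifier signal (samples)
    shaping : length L of the moving-average (shaping) stage, L <= M
    The flat top of the resulting trapezoid is roughly M - L samples wide.
    """
    csum = np.concatenate(([0.0], np.cumsum(x)))
    n = np.arange(window, x.size)
    # MWD_M(n) = x(n) - x(n-M) + (1/tau) * sum_{k=n-M}^{n-1} x(k)
    d = x[n] - x[n - window] + (csum[n] - csum[n - window]) / tau
    # Moving-average shaping stage
    return np.convolve(d, np.ones(shaping) / shaping, mode="valid")

# Hypothetical exponential pulse: amplitude 1.0, decay tau = 500 samples
tau, amp = 500.0, 1.0
t = np.arange(4000)
pulse = np.where(t >= 1000, amp * np.exp(-(t - 1000) / tau), 0.0)
out = mwd(pulse, window=200, tau=tau, shaping=50)
print(f"reconstructed amplitude ≈ {out.max():.3f}")   # close to 1.0 for this step-like pulse
```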

  14. Automata learning algorithms and processes for providing more complete systems requirements specification by scenario generation, CSP-based syntax-oriented model construction, and R2D2C system requirements transformation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Margaria, Tiziana (Inventor); Rash, James L. (Inventor); Rouff, Christopher A. (Inventor); Steffen, Bernard (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation, via automata learning. This may support the systematic completion of the requirements, the nature of the requirement being partial, which provides focus on the most prominent scenarios. This may generalize requirement skeletons by extrapolation and may indicate by way of automatically generated traces where the requirement specification is too loose and additional information is required.

  15. Specific localization of thallium 201 in human high-grade astrocytoma by microautoradiography.

    PubMed

    Mountz, J M; Raymond, P A; McKeever, P E; Modell, J G; Hood, T W; Barthel, L K; Stafford-Schuck, K A

    1989-07-15

    The ability to accurately distinguish remaining or recurrent high-grade astrocytoma from necrosis or edema following treatment is essential to optimal patient management. Thallium 201 planar gamma-camera imaging has been shown to be helpful in detecting recurrent high-grade astrocytoma; however, due to tissue heterogeneity adjacent to and within tumor, the cellular specificity and quantification of 201Tl uptake are largely unknown. In order to determine which tissues are responsible for the radioisotope uptake, microautoradiographic techniques were used to examine multiple tissue sections from five patients with high-grade astrocytoma. Each patient received 5 mCi of 201Tl i.v. 1 h prior to tumor removal. Additionally, all patients received computerized tomographic and 201Tl planar gamma-camera scans prior to surgery. Following surgery, the excised tissue specimens were tentatively classified by gross pathological examination and then immediately processed for dry mount autoradiography; grain density was determined over regions containing tumor, adjacent and uninvolved brain tissue, necrotic tissue, and background. Highly significant differences were found in grain densities (201Tl uptake) between tumor and uninvolved brain tissue, as well as between uninvolved brain tissue and necrotic tissue; there was no significant difference between background grain density and that in necrotic tissue. Mean grain densities (grains/cm² ± 1 SD) across patients were: tumor, 102 ± 23; adjacent, uninvolved brain tissue, 29 ± 11; necrotic tissue, 6.2 ± 1.1; and background, 7.0 ± 4.1. We conclude that the ability of 201Tl to selectively image high-grade astrocytoma is due to its preferential uptake into tumor cells.

  16. High throughput screen identifies small molecule inhibitors specific for Mycobacterium tuberculosis phosphoserine phosphatase.

    PubMed

    Arora, Garima; Tiwari, Prabhakar; Mandal, Rahul Shubhra; Gupta, Arpit; Sharma, Deepak; Saha, Sudipto; Singh, Ramandeep

    2014-09-01

    The emergence of drug-resistant strains of Mycobacterium tuberculosis makes identification and validation of newer drug targets a global priority. Phosphoserine phosphatase (PSP), a key essential metabolic enzyme involved in conversion of O-phospho-l-serine to l-serine, was characterized in this study. The M. tuberculosis genome harbors all enzymes involved in l-serine biosynthesis including two PSP homologs: Rv0505c (SerB1) and Rv3042c (SerB2). In the present study, we have biochemically characterized the SerB2 enzyme and developed a malachite green-based high-throughput assay system to identify SerB2 inhibitors. We have identified 10 compounds that were structurally different from known PSP inhibitors, and a few of these scaffolds were highly specific in their ability to inhibit the SerB2 enzyme, were noncytotoxic against mammalian cell lines, and inhibited M. tuberculosis growth in vitro. Surface plasmon resonance experiments demonstrated the relative binding for these inhibitors. The two best hits identified in our screen, clorobiocin and rosaniline, were bactericidal in activity and killed intracellular bacteria in a dose-dependent manner. We have also identified amino acid residues critical for these SerB2-small molecule interactions. This is the first study where we validate that M. tuberculosis SerB2 is a druggable and suitable target to pursue for further high-throughput screening. PMID:25037224

  17. Evolution of the specific surface area of snow during high-temperature gradient metamorphism

    NASA Astrophysics Data System (ADS)

    Wang, Xuan; Baker, Ian

    2014-