Science.gov

Sample records for highly specific algorithm

  1. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640

  2. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  3. High-Resolution Snow Projections for Alaska: Regionally and seasonally specific algorithms

    NASA Astrophysics Data System (ADS)

    McAfee, S. A.; Walsh, J. E.; Rupp, S. T.

    2012-12-01

    The fate of Alaska's snow in a warmer world is of both scientific and practical concern. Snow projections are critical for understanding glacier mass balance, forest demographic changes, and for natural resource planning and decision making - such as hydropower facilities in southern and southeastern portions of the state and winter road construction and use in the northern portions. To meet this need, we have developed a set of regionally and seasonally specific statistical models relating long-term average snow-day fraction to average monthly temperature in Alaska. The algorithms were based on temperature data and on daily precipitation and snowfall occurrence for 104 stations from the Global Historical Climatology Network. Although numerous models exist for estimating snow fraction from temperature, the algorithms we present here provide substantial improvements for Alaska. There are fundamental differences in the synoptic conditions across the state, and specific algorithms can accommodate this variability in the relationship between average monthly temperature and typical conditions during snowfall, rainfall, and dry spells. In addition, this set of simple algorithms, unlike more complex physically based models, can be easily and efficiently applied to a large number of future temperature trajectories, facilitating scenario-based planning approaches. Model fits are quite good, with mean errors of the snow-day fractions at most stations within 0.1 of the observed values, which range from 0 to 1, although larger average errors do occur at some sites during the transition seasons. Errors at specific stations are often stable in terms of sign and magnitude across the snowy season, suggesting that site-specific conditions can drive consistent deviations from mean regional conditions. Applying these algorithms to the gridded temperature projections downscaled by the Scenarios Network for Alaska and Arctic Planning allows us to provide decadal estimates of changes
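
    For illustration only, the sketch below fits a simple logistic curve relating mean monthly temperature to snow-day fraction with SciPy; the functional form, the function name snow_day_fraction, and the toy station data are assumptions for this example, not the regionally and seasonally specific algorithms of the study.

```python
# Minimal sketch: fit a logistic curve relating mean monthly temperature (deg C)
# to snow-day fraction. The logistic form and the toy data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def snow_day_fraction(t_mean, t0, k):
    """Logistic model: fraction of precipitation days falling as snow."""
    return 1.0 / (1.0 + np.exp(k * (t_mean - t0)))

# Toy station data: (mean monthly temperature, observed snow-day fraction).
t_obs = np.array([-20.0, -10.0, -5.0, 0.0, 2.0, 5.0, 10.0])
f_obs = np.array([1.00, 0.98, 0.90, 0.55, 0.30, 0.08, 0.01])

params, _ = curve_fit(snow_day_fraction, t_obs, f_obs, p0=(0.0, 0.5))
print("fitted t0=%.2f degC, k=%.2f" % tuple(params))
print("projected snow-day fraction at +1 degC:", snow_day_fraction(1.0, *params))
```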

  4. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  5. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on the ordering procedure based on the specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than other alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors. PMID:25347881
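
    To illustrate how a strict pixel ordering is used, the sketch below imposes a target histogram exactly by rank assignment; the auxiliary scores stand in for the filtered values produced by the paper's fixed-point ordering algorithm and are simply assumed to be given here.

```python
import numpy as np

def exact_histogram_specification(image, scores, target_hist):
    """Assign gray values so the output matches target_hist exactly.

    `scores` is any strict total ordering of the pixels (here assumed given,
    e.g. from a few iterations of the nonlinear ordering filter described
    above). `target_hist[g]` is the desired number of pixels with gray value
    g; it must sum to image.size.
    """
    order = np.argsort(scores.ravel(), kind="stable")  # pixel indices, darkest to brightest
    out = np.empty(image.size, dtype=np.uint8)
    # Gray value g is given to the next target_hist[g] pixels in rank order.
    out[order] = np.repeat(np.arange(len(target_hist), dtype=np.uint8), target_hist)
    return out.reshape(image.shape)

# Toy example: force a flat histogram on a 4x4 image with 4 gray levels.
rng = np.random.default_rng(0)
img = rng.integers(0, 4, size=(4, 4))
scores = img + rng.uniform(0, 0.5, size=img.shape)     # strict order that breaks ties
print(exact_histogram_specification(img, scores, target_hist=[4, 4, 4, 4]))
```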

  6. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  7. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  8. Specific optimization of genetic algorithm on special algebras

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Novak, Vilem; Dyba, Martin; Schenk, Jiri

    2016-06-01

    Searching for complex finite algebras can be successfully done by means of a genetic algorithm, as we showed in former works. This genetic algorithm needs specific optimization of crossover and mutation. We present details about these optimizations which are already implemented in the software application for this task - EQCreator.

  9. High-speed CORDIC algorithm

    NASA Astrophysics Data System (ADS)

    El-Guibaly, Fayez; Sabaa, A.

    1996-10-01

    In this paper, we introduce modifications on the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
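
    For reference, a plain circular CORDIC in rotation mode is sketched below; the iteration-reducing modifications introduced in the paper are not reproduced.

```python
import math

def cordic_rotate(x, y, angle, n_iter=32):
    """Classic circular CORDIC, rotation mode: rotate (x, y) by `angle` radians.

    Each iteration rotates by +/- atan(2**-i), which in hardware needs only
    shifts and adds; the constant gain is compensated at the end. This is the
    baseline algorithm, not the modified reduced-iteration scheme above.
    """
    atan_table = [math.atan(2.0 ** -i) for i in range(n_iter)]
    gain = 1.0
    for i in range(n_iter):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]
    return x / gain, y / gain

print(cordic_rotate(1.0, 0.0, math.pi / 6))   # approximately (cos 30deg, sin 30deg)
```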

  10. Sequence-Specific Copolymer Compatibilizers designed via a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Meenakshisundaram, Venkatesh; Patra, Tarak; Hung, Jui-Hsiang; Simmons, David

    For several decades, block copolymers have been employed as surfactants to reduce interfacial energy for applications from emulsification to surface adhesion. While the simplest approach employs symmetric diblocks, studies have examined asymmetric diblocks, multiblock copolymers, gradient copolymers, and copolymer-grafted nanoparticles. However, there exists no established approach to determining the optimal copolymer compatibilizer sequence for a given application. Here we employ molecular dynamics simulations within a genetic algorithm to identify copolymer surfactant sequences yielding maximum reductions in the interfacial energy of model immiscible polymers. The optimal copolymer sequence depends significantly on surfactant concentration. Most surprisingly, at high surface concentrations, where the surfactant achieves the greatest interfacial energy reduction, specific non-periodic sequences are found to significantly outperform any regularly blocky sequence. This emergence of polymer sequence-specificity within a non-sequenced environment adds to a recent body of work suggesting that specific sequence may have the potential to play a greater role in polymer properties than previously understood. We acknowledge the W. M. Keck Foundation for financial support of this research.
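
    The overall search loop can be sketched as follows; the fitness function here is a toy placeholder, whereas in the study each candidate sequence is scored by the interfacial-energy reduction measured in a molecular dynamics simulation.

```python
import random

def genetic_search(fitness, seq_len=20, pop_size=30, generations=50):
    """Generic genetic algorithm over two-monomer (A/B) copolymer sequences.

    `fitness` is a user-supplied callable; in the work above it would be an
    MD-derived interfacial-energy score, here it is a placeholder so the
    search loop itself stays runnable.
    """
    def mutate(seq):
        i = random.randrange(seq_len)
        return seq[:i] + ("A" if seq[i] == "B" else "B") + seq[i + 1:]

    def crossover(a, b):
        cut = random.randrange(1, seq_len)      # single-point crossover
        return a[:cut] + b[cut:]

    pop = ["".join(random.choice("AB") for _ in range(seq_len)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Placeholder fitness: reward alternating sequences (a stand-in for an MD evaluation).
toy_fitness = lambda s: sum(s[i] != s[i + 1] for i in range(len(s) - 1))
print(genetic_search(toy_fitness))
```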

  11. GPU-specific reformulations of image compression algorithms

    NASA Astrophysics Data System (ADS)

    Matela, Jiří; Holub, Petr; Jirman, Martin; Šrom, Martin

    2012-10-01

    Image compression has a number of applications in various fields, where processing throughput and/or latency is a crucial attribute and the main limitation of state-of-the-art implementations of compression algorithms. At the same time contemporary GPU platforms provide tremendous processing power but they call for specific algorithm design. We discuss key components of successful design of compression algorithms for GPUs and demonstrate this on JPEG and JPEG2000 implementations, each of which contains several types of algorithms requiring different approaches to efficient parallelization for GPUs. Performance evaluation of the optimized JPEG and JPEG2000 chain is used to demonstrate the importance of various aspects of GPU programming, especially with respect to real-time applications.

  12. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  13. High specific heat superconducting composite

    DOEpatents

    Steyert, Jr., William A.

    1979-01-01

    A composite superconductor formed from a high specific heat ceramic such as gadolinium oxide or gadolinium-aluminum oxide and a conventional metal conductor such as copper or aluminum which are insolubly mixed together to provide adiabatic stability in a superconducting mode of operation. The addition of a few percent of insoluble gadolinium-aluminum oxide powder or gadolinium oxide powder to copper increases the measured specific heat of the composite by one to two orders of magnitude below the 5 K level while maintaining the high thermal and electrical conductivity of the conventional metal conductor.

  14. High Rate Pulse Processing Algorithms for Microcalorimeters

    NASA Astrophysics Data System (ADS)

    Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.

    2009-12-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge-sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being: a) simple enough to be implemented in the readout electronics; and, b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the predominantly used pulse processing algorithm in the cryogenic-detector community.
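
    As context for the comparison mentioned above, a minimal frequency-domain "optimal filter" amplitude estimate is sketched below; it assumes a single pulse of known shape in stationary noise and is not the authors' real-time, pile-up-capable algorithm.

```python
import numpy as np

def optimal_filter_amplitude(record, template, noise_psd):
    """Frequency-domain optimal-filter pulse-height estimate.

    Assumes one pulse of known shape `template` plus stationary noise with
    power spectrum `noise_psd`; `record` and `template` have equal length.
    """
    D = np.fft.rfft(record)
    S = np.fft.rfft(template)
    J = noise_psd[: len(S)]
    # Least-squares amplitude: sum(conj(S) D / J) / sum(|S|^2 / J)
    return np.real(np.sum(np.conj(S) * D / J) / np.sum(np.abs(S) ** 2 / J))

# Toy example: a scaled two-exponential pulse in white noise.
n = 1024
t = np.arange(n)
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)
rng = np.random.default_rng(1)
record = 3.0 * template + rng.normal(0, 0.05, n)
print(optimal_filter_amplitude(record, template, noise_psd=np.ones(n)))  # about 3.0
```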

  15. High rate pulse processing algorithms for microcalorimeters

    SciTech Connect

    Rabin, Michael; Hoover, Andrew S; Bacrania, Minesh K; Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Doriese, Bertrand; Ullom, Joel N

    2009-01-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge-sensor can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally in the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, they present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics that they are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses and thus achieving much higher output count rates than the rates that existing algorithms are currently achieving. Details of these algorithms are presented, and their performance was compared to that of the 'optimal filter' that is the dominant pulse processing algorithm in the cryogenic-detector community.

  16. Orientation estimation algorithm applied to high-spin projectiles

    NASA Astrophysics Data System (ADS)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.

  17. THE HIGH ENERGY TRANSIENT EXPLORER TRIGGERING ALGORITHM

    SciTech Connect

    E. FENIMORE; M. GALASSI

    2001-05-01

    The High Energy Transient Explorer uses a triggering algorithm for gamma-ray bursts that can achieve near the statistical limit by fitting to several background regions to remove trends. Dozens of trigger criteria run simultaneously covering time scales from 80 msec to 10.5 sec or longer. Each criterion is controlled by about 25 constants, which gives the flexibility to search wide parameter spaces. On orbit, we have been able to operate at 6σ, a factor of two more sensitive than previous experiments.
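
    The core idea, fitting the background trend and expressing the foreground excess in sigmas, can be sketched as follows; the linear background fit and the helper trigger_significance are illustrative assumptions, not the flight code with its roughly 25 constants per criterion.

```python
import numpy as np

def trigger_significance(counts, fg_slice, bg_slices):
    """Significance of a count-rate excess in one foreground window.

    A linear trend is fitted to the background regions, extrapolated into the
    foreground window, and the excess is expressed in Poisson sigmas. This is
    a schematic of the idea only.
    """
    bg_idx = np.concatenate([np.arange(s.start, s.stop) for s in bg_slices])
    coef = np.polyfit(bg_idx, counts[bg_idx], deg=1)      # background trend
    fg_idx = np.arange(fg_slice.start, fg_slice.stop)
    expected = np.polyval(coef, fg_idx).sum()
    observed = counts[fg_slice].sum()
    return (observed - expected) / np.sqrt(expected)

rng = np.random.default_rng(2)
counts = rng.poisson(100.0 + 0.1 * np.arange(200))        # slowly rising background
counts[120:125] += 40                                     # injected burst
sigma = trigger_significance(counts, slice(120, 125), [slice(60, 110), slice(135, 185)])
print("significance: %.1f sigma" % sigma)
```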

  18. Specification of Selected Performance Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas

    2006-10-06

    Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.

  19. High specific activity silicon-32

    DOEpatents

    Phillips, D.R.; Brzezinski, M.A.

    1996-06-11

    A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidation state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  20. High specific activity silicon-32

    DOEpatents

    Phillips, Dennis R.; Brzezinski, Mark A.

    1996-01-01

    A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution to from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidization state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  1. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system

  3. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

  4. Design specification for the whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.

    1974-01-01

    The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodin's model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating response to stresses from CO2 inhalation, hypoxia, thermal environmental exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).

  5. On constructing optimistic simulation algorithms for the discrete event system specification

    SciTech Connect

    Nutaro, James J

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.

  6. High contrast laminography using iterative algorithms

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Jakubek, J.

    2011-01-01

    3D X-ray imaging of the internal structure of large flat objects is often complicated by limited access to all viewing angles or extremely high absorption in certain directions, and therefore the standard method of computed tomography (CT) fails. This problem can be solved by the method of laminography. During a laminographic measurement the imaging detector is placed close to the sample while the X-ray source irradiates both sample and detector at different angles. The application of the state-of-the-art pixel detector Medipix in laminography together with adapted tomographic iterative algorithms for 3D reconstruction of sample structure has been investigated. Iterative algorithms such as EM (Expectation Maximization) and OSEM (Ordered Subset Expectation Maximization) improve the quality of the reconstruction and allow the inclusion of more complex physical models. In this contribution, results and proposed future approaches that could be used for resolution enhancement are presented.
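
    For orientation, the textbook MLEM update that underlies EM/OSEM reconstruction is sketched below; it is a generic formulation, not the Medipix/laminography-specific implementation investigated here.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Textbook MLEM (EM) update for emission-type reconstruction.

    `A` is the (measurements x voxels) system matrix and `y` the measured
    counts. OSEM applies the same multiplicative update to ordered subsets of
    the measurements for faster convergence.
    """
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)                            # back-projection of ones
    for _ in range(n_iter):
        forward = np.maximum(A @ x, 1e-12)
        x *= (A.T @ (y / forward)) / np.maximum(sensitivity, 1e-12)
    return x

# Toy problem: recover a nonnegative 1D object from noiseless projections.
rng = np.random.default_rng(6)
A = rng.uniform(0.0, 1.0, size=(200, 50))
x_true = rng.uniform(0.0, 2.0, size=50)
x_rec = mlem(A, A @ x_true, n_iter=500)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```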

  7. Concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    In order to overcome the slow convergence rate and large steady-state mean square error of the constant modulus algorithm (CMA), a concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals is proposed, which makes full use of the fact that high-order QAM constellation points lie on different moduli. This algorithm uses the CMA as the basic mode and the multi-modulus algorithm as the second mode, and the two modes operate concurrently. The efficiency of the method is demonstrated by computer simulations in underwater acoustic channels.
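
    A minimal sketch of the underlying CMA tap update is given below; the concurrent multi-modulus branch of the proposed scheme is not shown, and the function name cma_equalize and the toy channel are assumptions for this example.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Baseline constant modulus algorithm (CMA) blind equalizer.

    `x` is the received complex baseband sequence and `R2` the constellation's
    dispersion constant. Only the plain CMA tap update is sketched; the
    concurrent multi-modulus update of the proposed scheme is omitted.
    """
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                        # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]               # regressor, most recent sample first
        y[n] = np.dot(w, u)
        e = y[n] * (np.abs(y[n]) ** 2 - R2)     # CMA error term
        w -= mu * e * np.conj(u)                # stochastic-gradient tap update
    return y, w

# Toy example: QPSK through a short complex channel (R2 = 1 for unit-modulus QPSK).
rng = np.random.default_rng(3)
symbols = (rng.choice([-1, 1], 5000) + 1j * rng.choice([-1, 1], 5000)) / np.sqrt(2)
received = np.convolve(symbols, [1.0, 0.3 + 0.2j], mode="same")
equalized, taps = cma_equalize(received)
```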

  8. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) Provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) Provide early warning of tornadic activity, and (3) Accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. The GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.

  9. A Proposed India-Specific Algorithm for Management of Type 2 Diabetes.

    PubMed

    2016-06-01

    Several algorithms and guidelines have been proposed by countries and international professional bodies; however, no recent updated management algorithm is available for Asian Indians. Specifically, algorithms developed and validated in developed nations may not be relevant or applicable to patients in India because of several factors: early age of onset of diabetes, occurrence of diabetes in nonobese and sometimes lean people, differences in the relative contributions of insulin resistance and β-cell dysfunction, marked postprandial glycemia, frequent infections including tuberculosis, low access to healthcare and medications in people of low socioeconomic stratum, ethnic dietary practices (e.g., ingestion of high-carbohydrate diets), and inadequate education regarding hypoglycemia. All these factors should be considered to choose an appropriate therapeutic option in this population. The proposed algorithm is simple, suggests less expensive drugs, and tries to provide an effective and comprehensive framework for delivery of diabetes therapy in primary care in India. The proposed guidelines agree with international recommendations in favoring individualization of therapeutic targets as well as modalities of treatment in a flexible manner suitable to the Indian population. PMID:26909751

  10. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis of accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
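
    The classical MPD loop that MPD++ builds on can be sketched in a few lines; the correlation thresholding, coarse-fine grid, and multi-atom extensions described above are not reproduced here.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10, tol=1e-6):
    """Classical matching pursuit decomposition.

    `dictionary` has unit-norm atoms as rows. Each iteration picks the atom
    with the largest correlation with the residual, records its coefficient,
    and subtracts its contribution from the residual.
    """
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        if abs(corr[k]) < tol:
            break
        atoms.append(k)
        coeffs.append(corr[k])
        residual -= corr[k] * dictionary[k]
    return atoms, coeffs, residual

# Toy example: a dictionary of unit-norm sinusoids.
n = 256
freqs = np.arange(1, 33)
D = np.array([np.sin(2 * np.pi * f * np.arange(n) / n) for f in freqs])
D /= np.linalg.norm(D, axis=1, keepdims=True)
sig = 2.0 * D[4] + 0.5 * D[17]
print(matching_pursuit(sig, D, n_atoms=3)[:2])
```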

  11. C-element: a new clustering algorithm to find high quality functional modules in PPI networks.

    PubMed

    Ghasemi, Mahdieh; Rahgozar, Maseud; Bidkhori, Gholamreza; Masoudi-Nejad, Ali

    2013-01-01

    Graph clustering algorithms are widely used in the analysis of biological networks. Extracting functional modules in protein-protein interaction (PPI) networks is one such use. Most clustering algorithms that focus on finding functional modules try either to find clique-like subnetworks or to grow clusters starting from vertices with high degrees as seeds. These algorithms do not distinguish between a biological network and any other network. In the current research, we present a new procedure to find functional modules in PPI networks. Our main idea is to model a biological concept and to use this concept for finding good functional modules in PPI networks. In order to evaluate the quality of the obtained clusters, we compared the results of our algorithm with those of some other widely used clustering algorithms on three high throughput PPI networks from Saccharomyces cerevisiae, Homo sapiens and Caenorhabditis elegans as well as on some tissue specific networks. Gene Ontology (GO) analyses were used to compare the results of different algorithms. Each algorithm's result was then compared with GO-term derived functional modules. We also analyzed the effect of using tissue specific networks on the quality of the obtained clusters. The experimental results indicate that the new algorithm outperforms most of the others, and this improvement is more significant when tissue specific networks are used. PMID:24039752

  12. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key-principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that pays respect to the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly less artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  13. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
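
    The matching step itself can be illustrated with a small stand-in example; SciPy's exact k-d tree is used below purely for illustration, since FLANN's approximate randomized k-d forest and priority search k-means tree are what the paper evaluates.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal stand-in for the nearest-neighbor matching step. FLANN's approximate
# index structures are much faster at scale; an exact k-d tree is used here
# only to show the query-and-ratio-test pattern.
rng = np.random.default_rng(4)
train = rng.normal(size=(10000, 64))         # e.g. 64-dimensional feature descriptors
queries = rng.normal(size=(5, 64))

tree = cKDTree(train)
dist, idx = tree.query(queries, k=2)         # two nearest neighbors per query vector
ratio_ok = dist[:, 0] < 0.8 * dist[:, 1]     # Lowe-style ratio test for distinctiveness
print("nearest-neighbor indices:", idx[:, 0])
print("queries passing the ratio test:", int(ratio_ok.sum()))
```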

  14. High-speed scanning: an improved algorithm

    NASA Astrophysics Data System (ADS)

    Nachimuthu, A.; Hoang, Khoi

    1995-10-01

    In using machine vision for assessing an object's surface quality, many images are required to be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process; in the inspection of garments/canvas on the production line; in the nesting of irregular shapes into a given surface... . The most common method of subtracting the sum of defective areas from the total area does not give an acceptable indication of how much of the `good' area can be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of useable areas within an inspected surface in terms of the user's definition, not the supplier's claims. That is, how much useable area the user can use, not the total good area as the supplier estimated. An important application of the developed technique is in the leather industry where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument due to disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading, re-sorting... . The developed basic algorithm for area scanning of a digital image will be presented. The implementation of an improved scanning algorithm will be discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and the accurate estimation of useable areas.

  15. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crudely estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude transmission map into different areas and applies different guided filtering to these areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the one based on the dark channel prior and guided filter, while its average computation time is around 40% of that algorithm's, and the detection ability for UAV images in fog and haze is improved effectively.
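
    A sketch of the baseline dark-channel-prior transmission estimate that the improved algorithm starts from is given below; the edge extraction and area-dependent guided filtering of the proposed method are not shown, and the helper dark_channel_transmission is an assumption for this example.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_transmission(img, patch=15, omega=0.95):
    """Crude transmission estimate from the dark channel prior.

    `img` is an RGB image scaled to [0, 1]. This is only the baseline estimate
    that the improved algorithm refines with edge-dependent guided filtering.
    """
    dark = minimum_filter(img.min(axis=2), size=patch)        # dark channel
    # Atmospheric light: mean color of the brightest 0.1% of dark-channel pixels.
    n_top = max(1, dark.size // 1000)
    bright_idx = np.argsort(dark.ravel())[-n_top:]
    A = img.reshape(-1, 3)[bright_idx].mean(axis=0)
    dark_norm = minimum_filter((img / A).min(axis=2), size=patch)
    t = 1.0 - omega * dark_norm                               # crude transmission map
    return np.clip(t, 0.1, 1.0), A

rng = np.random.default_rng(7)
hazy = rng.uniform(0.4, 0.9, size=(64, 64, 3))                # stand-in hazy image
t, A = dark_channel_transmission(hazy)
dehazed = np.clip((hazy - A) / t[..., None] + A, 0.0, 1.0)    # scene radiance recovery
```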

  16. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol's quasi-sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested. Obtained results prove the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to the scalability and accuracy.
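
    A serial version of the estimator can be sketched with SciPy's Sobol' generator; the distribution of point evaluations over MPI ranks (and the suggested MPI/OpenMP hybrid) is omitted here.

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, dim, m=14, seed=0):
    """Serial quasi-Monte Carlo estimate of the integral of f over [0, 1]^dim.

    Uses SciPy's scrambled Sobol' generator with 2**m points; a parallel
    implementation would split these function evaluations across workers.
    """
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    points = sampler.random_base2(m=m)       # 2**m low-discrepancy sample points
    return f(points).mean()

# Test integrand with a known answer: the integral of sum(x) over [0, 1]^6 is 3.
print(qmc_integrate(lambda x: x.sum(axis=1), dim=6))
```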

  17. A fast directional algorithm for high-frequency electromagnetic scattering

    SciTech Connect

    Tsuji, Paul; Ying Lexing

    2011-06-20

    This paper is concerned with the fast solution of high-frequency electromagnetic scattering problems using the boundary integral formulation. We extend the O(N log N) directional multilevel algorithm previously proposed for the acoustic scattering case to the vector electromagnetic case. We also detail how to incorporate the curl operator of the magnetic field integral equation into the algorithm. When combined with a standard iterative method, this results in an almost linear complexity solver for the combined field integral equations. In addition, the butterfly algorithm is utilized to compute the far field pattern and radar cross section with O(N log N) complexity.

  18. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  19. A High Precision Terahertz Wave Image Reconstruction Algorithm.

    PubMed

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  20. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models. PMID:19147891

  1. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second refers to the huge amount of operations involved in these time-consuming processes. This work proposes an algorithm to estimate the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. This algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Given the algorithm's nature, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
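
    A generic sketch of such a joint gradient-descent refinement, under the linear mixing model with simple nonnegativity and sum-to-one projections, is given below; the update rules, step size, and helper name unmix are assumptions for illustration, not the authors' exact algorithm or its parallel implementation.

```python
import numpy as np

def unmix(X, n_end, n_iter=2000, lr=1e-3, seed=0):
    """Jointly refine endmembers E and abundances A by projected gradient descent.

    Minimizes ||X - A @ E||_F^2 for X of shape (pixels, bands), clipping the
    abundances to be nonnegative and renormalizing them to sum to one after
    each step. Assumes reflectance values scaled to [0, 1].
    """
    rng = np.random.default_rng(seed)
    E = X[rng.choice(X.shape[0], n_end, replace=False)].copy()   # endmembers as virtual pixels
    A = np.full((X.shape[0], n_end), 1.0 / n_end)
    for _ in range(n_iter):
        R = A @ E - X                                  # residual
        A -= lr * R @ E.T                              # gradient step on abundances
        E -= lr * A.T @ R                              # gradient step on endmembers
        A = np.clip(A, 0.0, None)                      # nonnegativity constraint
        A /= A.sum(axis=1, keepdims=True) + 1e-12      # sum-to-one constraint
    return E, A

# Toy example with synthetic mixtures of three endmembers.
rng = np.random.default_rng(8)
E_true = rng.uniform(0.0, 1.0, size=(3, 50))
A_true = rng.dirichlet(np.ones(3), size=500)
X = A_true @ E_true + rng.normal(0.0, 0.005, size=(500, 50))
E_est, A_est = unmix(X, n_end=3)
print("reconstruction RMSE:", np.sqrt(((A_est @ E_est - X) ** 2).mean()))
```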

  2. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique very much used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and the classical lossless algorithms give poor efficiencies in their compression. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results obtained show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm, preserving a near lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible within this algorithm (75%) can be achieved with good precision. PMID:10850759

  3. Production of high specific activity silicon-32

    SciTech Connect

    Phillips, D.R.; Brzezinski, M.A.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development Project (LDRD) at Los Alamos National Laboratory (LANL). There were two primary objectives for the work performed under this project. The first was to take advantage of capabilities and facilities at Los Alamos to produce the radionuclide {sup 32}Si in unusually high specific activity. The second was to combine the radioanalytical expertise at Los Alamos with the expertise at the University of California to develop methods for the application of {sup 32}Si in biological oceanographic research related to global climate modeling. The first objective was met by developing targetry for proton spallation production of {sup 32}Si in KCl targets and chemistry for its recovery in very high specific activity. The second objective was met by developing a validated field-useable, radioanalytical technique, based upon gas-flow proportional counting, to measure the dynamics of silicon uptake by naturally occurring diatoms.

  4. Development of High Specific Strength Envelope Materials

    NASA Astrophysics Data System (ADS)

    Komatsu, Keiji; Sano, Masa-Aki; Kakuta, Yoshiaki

    Progress in materials technology has produced a much more durable synthetic fabric envelope for the non-rigid airship. Flexible materials are required to form airship envelopes, ballonets, load curtains, gas bags and covering rigid structures. Polybenzoxazole fiber (Zylon) and polyarylate fiber (Vectran) show high specific tensile strength, so we developed membranes using these high-specific-tensile-strength fibers as load carriers. The main material developed is a Zylon or Vectran load carrier sealed internally with a polyurethane bonded inner gas retention film (EVOH). The external surface provides weather protection with, for instance, a titanium oxide integrated polyurethane or Tedlar film. The mechanical test results show that a tensile strength of 1,000 N/cm is attained at a weight of less than 230 g/m2. In addition to the mechanical properties, temperature dependence of the joint strength and solar absorptivity and emissivity of the surface are measured.

  5. Benefits Assessment of Algorithmically Combining Generic High Altitude Airspace Sectors

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod; Lai, Chok Fung; Kopardekar, Parimal

    2009-01-01

    In today's air traffic control operations, sectors that have traffic demand below capacity are combined so that fewer controller teams are required to manage air traffic. Controllers in current operations are certified to control a group of six to eight sectors, known as an area of specialization. Sector combinations are restricted to occur within areas of specialization. Since there are few sector combination possibilities in each area of specialization, human supervisors can effectively make sector combination decisions. In the future, automation and procedures will allow any appropriately trained controller to control any of a large set of generic sectors. The primary benefit of this will be increased controller staffing flexibility. Generic sectors will also allow more options for combining sectors, making sector combination decisions difficult for human supervisors. A sector-combining algorithm can assist supervisors as they make generic sector combination decisions. A heuristic algorithm for combining under-utilized airspace sectors to conserve air traffic control resources has been described and analyzed. Analysis of the algorithm and comparisons with operational sector combinations indicate that this algorithm could more efficiently utilize air traffic control resources than current sector combinations. This paper investigates the benefits of using the sector-combining algorithm proposed in previous research to combine high altitude generic airspace sectors. Simulations are conducted in which all the high altitude sectors in a center are allowed to combine, as will be possible in generic high altitude airspace. Furthermore, the algorithm is adjusted to use a version of the simplified dynamic density (SDD) workload metric that has been modified to account for workload reductions due to automatic handoffs and Automatic Dependent Surveillance Broadcast (ADS-B). This modified metric is referred to here as future simplified dynamic density (FSDD). Finally
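
    A hedged sketch of the kind of greedy combining heuristic such a tool could use is given below; it is not the published algorithm, and the capacity threshold, adjacency test, and workloads are illustrative stand-ins for an SDD/FSDD-style metric.

```python
# Hypothetical greedy sketch (not the published algorithm): repeatedly merge the
# pair of combinable sector groups whose combined workload is smallest, as long
# as it stays below the capacity threshold.
from itertools import combinations

def combine_sectors(workload, can_combine, capacity):
    """workload: {sector: value}; can_combine(group1, group2) -> bool."""
    groups = {s: frozenset([s]) for s in workload}           # each sector starts alone
    load = {frozenset([s]): w for s, w in workload.items()}
    while True:
        candidates = [(load[g1] + load[g2], g1, g2)
                      for g1, g2 in combinations(set(groups.values()), 2)
                      if can_combine(g1, g2) and load[g1] + load[g2] <= capacity]
        if not candidates:
            return set(groups.values())
        total, g1, g2 = min(candidates, key=lambda c: c[0])
        merged = g1 | g2
        load[merged] = total
        for s in merged:
            groups[s] = merged

# Toy example: a chain of four sectors where only neighbouring groups may combine.
wl = {"S1": 3, "S2": 2, "S3": 6, "S4": 1}
neighbours = lambda g1, g2: any(abs(int(a[1]) - int(b[1])) == 1 for a in g1 for b in g2)
print(combine_sectors(wl, neighbours, capacity=8))
```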

  6. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction methods, and to design algorithms for evaluation of IUE High Dispersion spectra. It was concluded that use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
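
    Since the abstract hinges on the Voigt profile (the convolution of a Gaussian core with Lorentzian wings), a minimal sketch of evaluating such a cross-dispersion profile and using it as extraction weights is shown below. It is not the NEWSIPS code; the sigma/gamma values, the 11-pixel slit, and the Horne-style weighted sum (unit variance assumed) are illustrative assumptions.

```python
# Minimal illustration (not the NEWSIPS extraction code): a Voigt profile, the
# convolution of a Gaussian core with a Lorentzian wing, supplies
# cross-dispersion extraction weights.
import numpy as np
from scipy.special import voigt_profile  # SciPy >= 1.4

def extraction_weights(x, sigma, gamma):
    """Normalized Voigt weights over cross-dispersion pixel offsets x."""
    w = voigt_profile(x, sigma, gamma)
    return w / w.sum()

x = np.arange(-5, 6, dtype=float)                          # 11-pixel extraction slit
weights = extraction_weights(x, sigma=1.2, gamma=0.4)      # illustrative local values
counts = np.random.default_rng(0).poisson(100 * weights)   # stand-in detector counts
# Horne-style profile-weighted estimate, assuming uniform pixel variance.
flux = np.sum(weights * counts) / np.sum(weights ** 2)
print(flux)
```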

  7. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
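
    A hedged sketch of the leave-one-out bookkeeping described above is given below: each case is scored against every other case used as the atlas, and the mean organ dose from the automatic mask is compared with the expert mask on the same Monte Carlo dose map. The array shapes and synthetic inputs are illustrative stand-ins, not the study data.

```python
# Hedged sketch of the leave-one-out comparison described above, on synthetic
# stand-ins. dose[c] is a 3-D dose map for case c; expert[c] is the expert mask
# of one organ region; auto[c][a] is the automatic mask obtained for case c when
# case a is used as the atlas.
import numpy as np

def mean_dose(dose_map, mask):
    return float(dose_map[mask].mean())

def loo_dose_errors(dose, expert, auto):
    """Percent error of the mean organ dose, per (case, atlas) pair."""
    errors = []
    for c in range(len(dose)):
        ref = mean_dose(dose[c], expert[c])
        for mask in auto[c].values():
            errors.append(100.0 * (mean_dose(dose[c], mask) - ref) / ref)
    return np.asarray(errors)

rng = np.random.default_rng(0)                       # synthetic stand-in data
dose = [rng.random((8, 8, 8)) for _ in range(3)]
expert = [rng.random((8, 8, 8)) > 0.5 for _ in range(3)]
auto = [{a: rng.random((8, 8, 8)) > 0.5 for a in range(3) if a != c} for c in range(3)]
errs = loo_dose_errors(dose, expert, auto)
print(np.median(np.abs(errs)), np.max(np.abs(errs)))
```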

  8. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    PubMed Central

    Gan, Wensheng; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the items purchased by a customer may have other attributes, such as profit or quantity. High-utility mining was designed to address the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038

  9. Electromagnetic properties of high specific surface minerals

    NASA Astrophysics Data System (ADS)

    Klein, Katherine Anne

    Interparticle electrical forces play a dominant role in the behaviour of high specific surface minerals, such as clays. This fact encourages the use of small electromagnetic perturbations to assess the microscale properties of these materials. Thus, this research focuses on using electromagnetic waves to understand fundamental particle-particle and particle-fluid interactions, and fabric formation in high specific surface mineral-fluid mixtures (particle size <~1 μm). Topics addressed in this study include: the role of specific surface and double layer phenomena in the engineering behaviour of clay-water-electrolyte mixtures; the interplay between surface conduction, double layer polarization, and interfacial polarization; the relationship between fabric, permittivity, shear wave velocity, and engineering properties in soft slurries; and the effect of ferromagnetic impurities on electromagnetic measurements. The critical role of specific surface on the engineering properties of fine-grained soils is demonstrated through fundamental principles and empirical correlations. Afterwards, the effect of specific surface on the electromagnetic properties of particulate materials is studied using simple microscale analyses of conduction and polarization phenomena in particle-fluid mixtures, and corroborated by experimentation. These results clarify the relative importance of specific surface, water content, electrolyte type, and ionic concentration on the electrical properties of particulate materials. The sensitivity of electromagnetic parameters to particle orientation is addressed in light of the potential assessment of anisotropy in engineering properties. It is shown that effective conductivity measurements provide a robust method to determine electrical anisotropy in particle-fluid mixtures. However, real relative dielectric measurements at frequencies below 1 MHz are unreliable due to electrode effects (especially in highly conductive mixtures). The relationship

  10. Wp specific methylation of highly proliferated LCLs

    SciTech Connect

    Park, Jung-Hoon; Jeon, Jae-Pil; Shim, Sung-Mi; Nam, Hye-Young; Kim, Joon-Woo; Han, Bok-Ghee; Lee, Suman . E-mail: suman@cha.ac.kr

    2007-06-29

    The epigenetic regulation of viral genes may be important for the life cycle of EBV. We determined the methylation status of three viral promoters (Wp, Cp, Qp) from EBV B-lymphoblastoid cell lines (LCLs) by pyrosequencing. Our pyrosequencing data showed that the CpG region of Wp was methylated, but the others were not. Interestingly, Wp methylation increased with proliferation of the LCLs: it was as high as 74.9% in late-passage LCLs, but 25.6% in early-passage LCLs. Wp-specific hypermethylation (>80%) was also found in two Burkitt's lymphoma cell lines. Interestingly, the expression of the EBNA2 gene, which is located directly next to Wp, was associated with its methylation. Our data suggest that Wp-specific methylation may be an important indicator of the proliferation status of LCLs, and that the epigenetic regulation of the EBNA2 gene by Wp should be further defined, possibly in connection with other biological processes.

  11. Subsemble: an ensemble method for combining subset-specific algorithm fits

    PubMed Central

    Sapp, Stephanie; van der Laan, Mark J.; Canny, John

    2013-01-01

    Ensemble methods using the same underlying algorithm trained on different subsets of observations have recently received increased attention as practical prediction tools for massive datasets. We propose Subsemble: a general subset ensemble prediction method, which can be used for small, moderate, or large datasets. Subsemble partitions the full dataset into subsets of observations, fits a specified underlying algorithm on each subset, and uses a clever form of V-fold cross-validation to output a prediction function that combines the subset-specific fits. We give an oracle result that provides a theoretical performance guarantee for Subsemble. Through simulations, we demonstrate that Subsemble can be a beneficial tool for small to moderate sized datasets, and often has better prediction performance than the underlying algorithm fit just once on the full dataset. We also describe how to include Subsemble as a candidate in a SuperLearner library, providing a practical way to evaluate the performance of Subsemble relative to the underlying algorithm fit just once on the full dataset. PMID:24778462
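
    A minimal sketch of the Subsemble recipe as described (partition into subsets, fit the base learner on each subset, combine the subset-specific fits with a meta-learner trained on V-fold cross-validated predictions) is shown below, using scikit-learn estimators as stand-ins. It is not the authors' implementation; the choice of Ridge and LinearRegression, and the values of J and V, are illustrative.

```python
# Minimal sketch of the Subsemble idea described above (not the authors' code).
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold

def subsemble_fit(X, y, base=Ridge(), J=3, V=5, seed=0):
    rng = np.random.default_rng(seed)
    subset = rng.integers(0, J, size=len(y))             # random partition into J subsets
    Z = np.zeros((len(y), J))                             # cross-validated subset predictions
    for train, test in KFold(V, shuffle=True, random_state=seed).split(X):
        for j in range(J):
            idx = train[subset[train] == j]
            Z[test, j] = clone(base).fit(X[idx], y[idx]).predict(X[test])
    fits = [clone(base).fit(X[subset == j], y[subset == j]) for j in range(J)]
    meta = LinearRegression().fit(Z, y)                   # combiner learned on CV predictions
    return fits, meta

def subsemble_predict(fits, meta, Xnew):
    Z = np.column_stack([m.predict(Xnew) for m in fits])
    return meta.predict(Z)

rng = np.random.default_rng(1)                            # toy regression data
X = rng.normal(size=(300, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=300)
fits, meta = subsemble_fit(X, y)
print(subsemble_predict(fits, meta, X[:3]))
```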

  12. Stride Search: a general algorithm for storm detection in high-resolution climate data

    NASA Astrophysics Data System (ADS)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-01

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
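
    A hedged sketch of the core geometric idea is shown below: search-sector centers are spaced a fixed great-circle distance apart, so longitude spacing widens toward the poles and each sector covers roughly the same physical area, unlike a grid-point search. The stride value and the simple spherical-Earth handling are illustrative; this is not the authors' implementation.

```python
# Hedged sketch of the Stride Search placement of search-sector centers.
import math

EARTH_RADIUS_KM = 6371.0

def stride_search_centers(stride_km, lat_min=-90.0, lat_max=90.0):
    centers = []
    dlat = math.degrees(stride_km / EARTH_RADIUS_KM)      # latitudinal stride in degrees
    lat = lat_min
    while lat <= lat_max:
        circ = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat))
        nlon = max(1, int(circ // stride_km))             # fewer centers near the poles
        centers += [(lat, 360.0 * k / nlon) for k in range(nlon)]
        lat += dlat
    return centers

# Each center would then be paired with a storm criterion (e.g., a vorticity or
# wind threshold) evaluated over all data points within stride_km of the center.
print(len(stride_search_centers(500.0)))
```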

  13. A high performance hardware implementation image encryption with AES algorithm

    NASA Astrophysics Data System (ADS)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed, high-throughput encryption algorithm for encrypting images. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline, a control unit based on logic gates, an optimal design of the multiplier blocks in the MixColumns phase, and simultaneous production of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES was implemented on an Altera FPGA with the following results: a throughput of 6 Gbps at 471 MHz, and an encryption time of 1.15 ms for a 32x32 test image.

  14. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASIC's to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  15. Production Of High Specific Activity Copper-67

    DOEpatents

    Jamriska, Sr., David J.; Taylor, Wayne A.; Ott, Martin A.; Fowler, Malcolm; Heaton, Richard C.

    2003-10-28

    A process for the selective production and isolation of high specific activity Cu.sup.67 from a proton-irradiated enriched Zn.sup.70 target is disclosed. It comprises target fabrication, target irradiation with low-energy (<25 MeV) protons, chemical separation of the Cu.sup.67 product from the target material and from the radioactive impurities of gallium, cobalt, iron, and stable aluminum via electrochemical methods or ion exchange using both anion and cation organic ion exchangers, chemical recovery of the enriched Zn.sup.70 target material, and fabrication of new targets for re-irradiation.

  16. Production Of High Specific Activity Copper-67

    DOEpatents

    Jamriska, Sr., David J.; Taylor, Wayne A.; Ott, Martin A.; Fowler, Malcolm; Heaton, Richard C.

    2002-12-03

    A process for the selective production and isolation of high specific activity Cu.sup.67 from a proton-irradiated enriched Zn.sup.70 target is disclosed. It comprises target fabrication, target irradiation with low-energy (<25 MeV) protons, chemical separation of the Cu.sup.67 product from the target material and from the radioactive impurities of gallium, cobalt, iron, and stable aluminum via electrochemical methods or ion exchange using both anion and cation organic ion exchangers, chemical recovery of the enriched Zn.sup.70 target material, and fabrication of new targets for re-irradiation.

  17. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.

  18. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 AH the highest capacity yet made at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.

  19. International multidimensional authenticity specification (IMAS) algorithm for detection of commercial pomegranate juice adulteration.

    PubMed

    Zhang, Yanjun; Krueger, Dana; Durst, Robert; Lee, Rupo; Wang, David; Seeram, Navindra; Heber, David

    2009-03-25

    The pomegranate fruit (Punica granatum) has become an international high-value crop for the production of commercial pomegranate juice (PJ). The perceived consumer value of PJ is due in large part to its potential health benefits based on a significant body of medical research conducted with authentic PJ. To establish criteria for authenticating PJ, a new International Multidimensional Authenticity Specifications (IMAS) algorithm was developed through consideration of existing databases and comprehensive chemical characterization of 45 commercial juice samples from 23 different manufacturers in the United States. In addition to analysis of commercial juice samples obtained in the United States, data from other analyses of pomegranate juice and fruits including samples from Iran, Turkey, Azerbaijan, Syria, India, and China were considered in developing this protocol. There is universal agreement that the presence of a highly constant group of six anthocyanins together with punicalagins characterizes polyphenols in PJ. At a total sugar concentration of 16 degrees Brix, PJ contains characteristic sugars including mannitol at >0.3 g/100 mL. Ratios of glucose to mannitol of 4-15 and of glucose to fructose of 0.8-1.0 are also characteristic of PJ. In addition, no sucrose should be present because of isomerase activity during commercial processing. A stable isotope ratio, measured by mass spectrometry, of > -25 per thousand assures that no corn or cane sugar has been added to PJ. Sorbitol was present at <0.025 g/100 mL; maltose and tartaric acid were not detected. The presence of the amino acid proline at >25 mg/L is indicative of added grape products. Malic acid at >0.1 g/100 mL indicates adulteration with apple, pear, grape, cherry, plum, or aronia juice. Other adulteration methods include the addition of highly concentrated aronia, blueberry, or blackberry juices or natural grape pigments to poor-quality juices to imitate the color of pomegranate juice, which results in
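
    The thresholds quoted above can be read as a rule set; a simplified sketch is given below. It uses only the criteria stated in this abstract and is not the full multidimensional IMAS specification; the field names and example sample are illustrative.

```python
# Simplified rule set built only from the thresholds quoted above; the published
# IMAS algorithm is multidimensional and more involved than this sketch.
def check_pj_authenticity(s):
    """s: dict of measured values for a juice sample at about 16 degrees Brix."""
    flags = []
    if s["mannitol_g_per_100mL"] <= 0.3:
        flags.append("mannitol too low for authentic PJ")
    elif not 4 <= s["glucose_g_per_100mL"] / s["mannitol_g_per_100mL"] <= 15:
        flags.append("glucose/mannitol ratio outside 4-15")
    if not 0.8 <= s["glucose_g_per_100mL"] / s["fructose_g_per_100mL"] <= 1.0:
        flags.append("glucose/fructose ratio outside 0.8-1.0")
    if s["sucrose_g_per_100mL"] > 0:
        flags.append("sucrose present (should be absent after commercial processing)")
    if s["sorbitol_g_per_100mL"] >= 0.025:
        flags.append("sorbitol too high")
    if s["proline_mg_per_L"] > 25:
        flags.append("proline suggests added grape products")
    if s["malic_acid_g_per_100mL"] > 0.1:
        flags.append("malic acid suggests apple/pear/grape/cherry/plum/aronia juice")
    return flags or ["consistent with authentic PJ on these criteria"]

sample = {"mannitol_g_per_100mL": 0.45, "glucose_g_per_100mL": 6.5,
          "fructose_g_per_100mL": 7.0, "sucrose_g_per_100mL": 0.0,
          "sorbitol_g_per_100mL": 0.01, "proline_mg_per_L": 10.0,
          "malic_acid_g_per_100mL": 0.05}
print(check_pj_authenticity(sample))
```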

  20. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  1. High pressure humidification columns: Design equations, algorithm, and computer code

    SciTech Connect

    Enick, R.M.; Klara, S.M.; Marano, J.J.

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.

  2. A moving frame algorithm for high Mach number hydrodynamics

    NASA Astrophysics Data System (ADS)

    Trac, Hy; Pen, Ue-Li

    2004-07-01

    We present a new approach to Eulerian computational fluid dynamics that is designed to work at high Mach numbers encountered in astrophysical hydrodynamic simulations. Standard Eulerian schemes that strictly conserve total energy suffer from the high Mach number problem and proposed solutions to additionally solve the entropy or thermal energy still have their limitations. In our approach, the Eulerian conservation equations are solved in an adaptive frame moving with the fluid where Mach numbers are minimized. The moving frame approach uses a velocity decomposition technique to define local kinetic variables while storing the bulk kinetic components in a smoothed background velocity field that is associated with the grid velocity. Gravitationally induced accelerations are added to the grid, thereby minimizing the spurious heating problem encountered in cold gas flows. Separately tracking local and bulk flow components allows thermodynamic variables to be accurately calculated in both subsonic and supersonic regions. A main feature of the algorithm, that is not possible in previous Eulerian implementations, is the ability to resolve shocks and prevent spurious heating where both the pre-shock and post-shock fluid are supersonic. The hybrid algorithm combines the high-resolution shock capturing ability of the second-order accurate Eulerian TVD scheme with a low-diffusion Lagrangian advection scheme. We have implemented a cosmological code where the hydrodynamic evolution of the baryons is captured using the moving frame algorithm while the gravitational evolution of the collisionless dark matter is tracked using a particle-mesh N-body algorithm. Hydrodynamic and cosmological tests are described and results presented. The current code is fast, memory-friendly, and parallelized for shared-memory machines.

  3. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the most promising solutions in a large design space; these solutions have low communication costs and in some cases reach the optimum communication cost. Another evaluated parameter, the vulnerability index, is then used to estimate the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
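
    A hedged sketch of the kind of objective described above is given below: the communication cost of a core-to-router mapping on a 2-D mesh (hop distance times traffic volume) combined linearly with a vulnerability index. The weights, mesh, and traffic values are illustrative, and the fuzzy prioritization rules are not modeled.

```python
# Hedged sketch of a linear cost/vulnerability objective for a NoC mapping.
def hops(r1, r2):
    """Manhattan hop distance between two mesh routers given as (x, y)."""
    return abs(r1[0] - r2[0]) + abs(r1[1] - r2[1])

def comm_cost(mapping, traffic):
    """mapping: core -> (x, y) router; traffic: {(src_core, dst_core): volume}."""
    return sum(vol * hops(mapping[s], mapping[d]) for (s, d), vol in traffic.items())

def combined_objective(mapping, traffic, vulnerability_index, w_cost=0.7, w_vuln=0.3):
    # The paper combines the two criteria with a linear function (plus fuzzy
    # if-then rules for prioritization); only the linear part is sketched here.
    return w_cost * comm_cost(mapping, traffic) + w_vuln * vulnerability_index

mapping = {"c0": (0, 0), "c1": (1, 0), "c2": (1, 1)}
traffic = {("c0", "c1"): 10, ("c1", "c2"): 4, ("c0", "c2"): 2}
print(combined_objective(mapping, traffic, vulnerability_index=5.0))
```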

  4. Algorithmic tools for mining high-dimensional cytometry data

    PubMed Central

    Chester, Cariad; Maecker, Holden T.

    2015-01-01

    The advent of mass cytometry has led to an unprecedented increase in the number of analytes measured in individual cells, thereby increasing the complexity and information content of cytometric data. While this technology is ideally suited to detailed examination of the immune system, the applicability of the different methods for analyzing such complex data is less clear. Conventional data analysis by ‘manual’ gating of cells in biaxial dotplots is often subjective, time consuming, and neglectful of much of the information contained in a highly dimensional cytometric dataset. Algorithmic data mining has the promise to eliminate these concerns and several such tools have been recently applied to mass cytometry data. Herein, we review computational data mining tools that have been used to analyze mass cytometry data, outline their differences, and comment on their strengths and limitations. This review will help immunologists identify suitable algorithmic tools for their particular projects. PMID:26188071

  5. Production of high specific activity silicon-32

    DOEpatents

    Phillips, Dennis R.; Brzezinski, Mark A.

    1994-01-01

    A process for the preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution to from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidization state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  6. Production of high specific activity silicon-32

    SciTech Connect

    Phillips, D.R.; Brzezinski, M.A.

    1994-09-13

    A process for the preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution to from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidization state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  7. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy.

    PubMed

    Schuemann, J; Dowdell, S; Grassberger, C; Min, C H; Paganetti, H

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculations methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2
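
    A worked example of applying the quoted site-specific recipes (a percentage of the beam range plus a fixed millimetre term) is shown below; the generic breast/lung/head-and-neck figure is completed in the duplicate records of this paper later in the list. The 150 mm beam range is an illustrative input.

```python
# Worked example of the quoted recipes: distal margin = (fraction of beam range)
# + (fixed term). The 150 mm beam range is an illustrative input.
SITE_MARGINS_MM = {                       # (fractional term, absolute term in mm)
    "liver": (0.028, 1.2),
    "prostate": (0.028, 1.2),
    "whole_brain": (0.031, 1.2),
    "breast_lung_head_neck_generic": (0.063, 1.2),
}

def range_margin_mm(site, beam_range_mm):
    frac, fixed = SITE_MARGINS_MM[site]
    return frac * beam_range_mm + fixed

for site in SITE_MARGINS_MM:
    print(site, round(range_margin_mm(site, 150.0), 1), "mm margin for a 150 mm range")
```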

  8. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculations methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be

  9. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  10. Finite element solution for energy conservation using a highly stable explicit integration algorithm

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1972-01-01

    Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.

  11. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    PubMed Central

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-01-01

    The purpose of this study was to investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for 7 disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head & neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and Monte Carlo algorithms to obtain the average range differences (ARD) and root mean square deviation (RMSD) for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation (ADD) of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing Monte Carlo dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head & neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head & neck treatments. We conclude that currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific

  12. An effective algorithm for the generation of patient-specific Purkinje networks in computational electrocardiology

    NASA Astrophysics Data System (ADS)

    Palamara, Simone; Vergara, Christian; Faggiano, Elena; Nobile, Fabio

    2015-02-01

    The Purkinje network is responsible for the fast and coordinated distribution of the electrical impulse in the ventricle that triggers its contraction. Therefore, it is necessary to model its presence to obtain an accurate patient-specific model of the ventricular electrical activation. In this paper, we present an efficient algorithm for the generation of a patient-specific Purkinje network, driven by measures of the electrical activation acquired on the endocardium. The proposed method provides a correction of an initial network, generated by means of a fractal law, and it is based on the solution of Eikonal problems both in the muscle and in the Purkinje network. We present several numerical results both in an ideal geometry with synthetic data and in a real geometry with patient-specific clinical measures. These results highlight an improvement of the accuracy provided by the patient-specific Purkinje network with respect to the initial one. In particular, a cross-validation test shows an accuracy increase of 19% when only the 3% of the total points are used to generate the network, whereas an increment of 44% is observed when a random noise equal to 20% of the maximum value of the clinical data is added to the measures.
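
    A hedged sketch of the network half of the problem is given below: activation times along a Purkinje-like tree are shortest travel times from a root at a given conduction velocity, i.e., a network Eikonal computation. The toy geometry and velocity are illustrative; the paper couples this with an Eikonal solver in the muscle and corrects the tree against clinical measurements.

```python
# Hedged sketch: activation times on a tree network as shortest travel times
# from the root (a discrete network Eikonal / Dijkstra computation).
import heapq

def activation_times(edges, root, velocity_m_per_s):
    """edges: {(a, b): length_m} undirected; returns node -> activation time (s)."""
    graph = {}
    for (a, b), length in edges.items():
        graph.setdefault(a, []).append((b, length / velocity_m_per_s))
        graph.setdefault(b, []).append((a, length / velocity_m_per_s))
    times, heap = {root: 0.0}, [(0.0, root)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue
        for v, dt in graph.get(u, []):
            if t + dt < times.get(v, float("inf")):
                times[v] = t + dt
                heapq.heappush(heap, (t + dt, v))
    return times

tree = {("AV", "b1"): 0.02, ("b1", "b2"): 0.015, ("b1", "b3"): 0.018}   # lengths in metres
print(activation_times(tree, root="AV", velocity_m_per_s=3.0))
```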

  13. High-Speed General Purpose Genetic Algorithm Processor.

    PubMed

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GAs procedure. Hence, we designed a digital CMOS implementation of GA in [Formula: see text] process. The proposed processor is not bounded to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through the bit string length extension of individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GAs procedure can be run on several connected processors simultaneously. PMID:26241984
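
    A software sketch of the steady-state loop that such a processor accelerates is given below, including a discard operator for constrained problems as mentioned above. The fitness function, constraint, and parameters are illustrative stand-ins, not the hardware design.

```python
# Software sketch of a steady-state GA loop with a discard operator for
# constrained problems; the hardware pipelines and parallelizes these steps.
import random

BITS = 32

def fitness(x):        # illustrative objective on a 32-bit string (maximize)
    return bin(x).count("1")

def feasible(x):       # illustrative constraint; infeasible children are discarded
    return x % 3 != 0

def steady_state_ga(pop_size=64, generations=2000, pmut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [rng.getrandbits(BITS) for _ in range(pop_size)]
    for _ in range(generations):
        a, b = rng.sample(pop, 2)                       # parent selection
        cut = rng.randrange(1, BITS)                    # one-point crossover
        mask = (1 << cut) - 1
        child = (a & mask) | (b & ~mask)
        for i in range(BITS):                           # bit-flip mutation
            if rng.random() < pmut:
                child ^= 1 << i
        if not feasible(child):                         # built-in discard operator
            continue
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):        # steady-state replacement
            pop[worst] = child
    return max(pop, key=fitness)

print(bin(steady_state_ga()))
```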

  14. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design- optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  15. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 AH the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.

  16. GPQuest: A Spectral Library Matching Algorithm for Site-Specific Assignment of Tandem Mass Spectra to Intact N-glycopeptides.

    PubMed

    Toghi Eshghi, Shadi; Shah, Punit; Yang, Weiming; Li, Xingde; Zhang, Hui

    2015-01-01

    Glycoprotein changes occur in not only protein abundance but also the occupancy of each glycosylation site by different glycoforms during biological or pathological processes. Recent advances in mass spectrometry instrumentation and techniques have facilitated analysis of intact glycopeptides in complex biological samples by allowing the users to generate spectra of intact glycopeptides with glycans attached to each specific glycosylation site. However, assigning these spectra, leading to identification of the glycopeptides, is challenging. Here, we report an algorithm, named GPQuest, for site-specific identification of intact glycopeptides using higher-energy collisional dissociation (HCD) fragmentation of complex samples. In this algorithm, a spectral library of glycosite-containing peptides in the sample was built by analyzing the isolated glycosite-containing peptides using HCD LC-MS/MS. Spectra of intact glycopeptides were selected by using glycan oxonium ions as signature ions for glycopeptide spectra. These oxonium-ion-containing spectra were then compared with the spectral library generated from glycosite-containing peptides, resulting in assignment of each intact glycopeptide MS/MS spectrum to a specific glycosite-containing peptide. The glycan occupying each glycosite was determined by matching the mass difference between the precursor ion of intact glycopeptide and the glycosite-containing peptide to a glycan database. Using GPQuest, we analyzed LC-MS/MS spectra of protein extracts from prostate tumor LNCaP cells. Without enrichment of glycopeptides from global tryptic peptides and at a false discovery rate of 1%, 1008 glycan-containing MS/MS spectra were assigned to 769 unique intact N-linked glycopeptides, representing 344 N-linked glycosites with 57 different N-glycans. Spectral library matching using GPQuest assigns the HCD LC-MS/MS generated spectra of intact glycopeptides in an automated and high-throughput manner. Additionally, spectral library
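
    A hedged sketch of the two matching steps described above is given below: keep only HCD spectra containing glycan oxonium ions, then assign the glycan by matching the precursor-minus-peptide mass difference against a glycan mass list. The m/z values, masses, tolerance, and example spectrum are illustrative assumptions, not GPQuest defaults.

```python
# Hedged sketch: (1) keep MS/MS spectra containing glycan oxonium ions,
# (2) assign the glycan from the mass difference between the intact-glycopeptide
# precursor and the matched glycosite-containing peptide. Values are illustrative.
OXONIUM_MZ = [204.0867, 366.1396]      # e.g., HexNAc and HexHexNAc fragment ions
GLYCAN_MASSES = {"HexNAc2Hex5": 1216.4229, "HexNAc2Hex3": 892.3172}

def has_oxonium(peaks, tol=0.02):
    """peaks: list of fragment m/z values from one HCD spectrum."""
    return any(abs(p - o) <= tol for p in peaks for o in OXONIUM_MZ)

def assign_glycan(precursor_mass, peptide_mass, tol=0.02):
    delta = precursor_mass - peptide_mass
    for name, mass in GLYCAN_MASSES.items():
        if abs(delta - mass) <= tol:
            return name
    return None

spectrum = {"precursor_mass": 2360.95, "peaks": [204.087, 366.14, 1144.53]}
if has_oxonium(spectrum["peaks"]):
    print(assign_glycan(spectrum["precursor_mass"], peptide_mass=1144.53))
```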

  17. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea
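
    A hedged sketch of the row-slice idea described above is given below: on each image row, the dark run belonging to the pupil is found by intensity thresholding, and the pupil centre is estimated from the midpoints of those runs. The threshold, minimum run length, and toy image are illustrative assumptions, not the system's actual processing.

```python
# Hedged sketch of a row-slice pupil-centre estimate on a toy infrared image.
import numpy as np

def pupil_center_from_rows(image, dark_threshold=40, min_run=5):
    """image: 2-D uint8 array; returns (row, col) estimate or None."""
    centers, rows = [], []
    for r, line in enumerate(image):
        dark = line < dark_threshold
        if not dark.any():
            continue
        idx = np.flatnonzero(dark)                            # dark pixels on this row
        runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        run = max(runs, key=len)                              # longest contiguous dark run
        if len(run) >= min_run:
            centers.append(0.5 * (run[0] + run[-1]))          # run midpoint (symmetry)
            rows.append(r)
    if not rows:
        return None
    return float(np.mean(rows)), float(np.mean(centers))

# Toy image: bright background with a dark disc standing in for the pupil.
img = np.full((120, 160), 200, dtype=np.uint8)
yy, xx = np.ogrid[:120, :160]
img[(yy - 60) ** 2 + (xx - 90) ** 2 < 20 ** 2] = 10
print(pupil_center_from_rows(img))                            # roughly (60.0, 90.0)
```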

  18. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
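
    A sketch of the "consensus scoring" idea mentioned above is given below: a peptide-spectrum match is accepted only when at least two search engines agree on the peptide assigned to that spectrum. The input format and example identifications are illustrative.

```python
# Sketch of consensus scoring across multiple search engines (illustrative input).
from collections import Counter

def consensus_ids(engine_results, min_agree=2):
    """engine_results: {engine_name: {spectrum_id: peptide_sequence}}."""
    accepted = {}
    spectra = set().union(*(r.keys() for r in engine_results.values()))
    for sid in spectra:
        votes = Counter(r[sid] for r in engine_results.values() if sid in r)
        peptide, n = votes.most_common(1)[0]
        if n >= min_agree:
            accepted[sid] = peptide
    return accepted

results = {
    "MASCOT":   {"s1": "LVNEVTEFAK", "s2": "AEFAEVSK"},
    "X!Tandem": {"s1": "LVNEVTEFAK", "s2": "AEFVEVTK"},
    "SEQUEST":  {"s1": "LVNEVTEFAK"},
}
print(consensus_ids(results))   # only s1 passes, with all three engines agreeing
```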

  19. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Preliminary results are presented for two methods to approximate the mission performance of high specific impulse high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well known trajectory optimization code, VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.

  20. Sensitivity of snow density and specific surface area measured by microtomography to different image processing algorithms

    NASA Astrophysics Data System (ADS)

    Hagenmuller, Pascal; Matzl, Margret; Chambon, Guillaume; Schneebeli, Martin

    2016-05-01

    Microtomography can measure the X-ray attenuation coefficient in a 3-D volume of snow with a spatial resolution of a few microns. In order to extract quantitative characteristics of the microstructure, such as the specific surface area (SSA), from these data, the greyscale image first needs to be segmented into a binary image of ice and air. Different numerical algorithms can then be used to compute the surface area of the binary image. In this paper, we report on the effect of commonly used segmentation and surface area computation techniques on the evaluation of density and specific surface area. The evaluation is based on a set of 38 X-ray tomographies of different snow samples without impregnation, scanned with an effective voxel size of 10 and 18 μm. We found that different surface area computation methods can induce relative variations up to 5 % in the density and SSA values. Regarding segmentation, similar results were obtained by sequential and energy-based approaches, provided the associated parameters were correctly chosen. The voxel size also appears to affect the values of density and SSA, but because images with the higher resolution also show the higher noise level, it was not possible to draw a definitive conclusion on this effect of resolution.
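
    A hedged sketch of one simple surface-area computation on a segmented binary volume is given below: density from the ice-voxel fraction and surface area from counting exposed ice/air voxel faces. This voxel-face estimate is only one of the method families the paper compares (and is known to be biased); the voxel size and toy volume are illustrative.

```python
# Hedged sketch: density from the ice-voxel fraction and a voxel-face estimate of
# the surface area (interior ice/air transitions only; faces on the sample
# boundary are ignored). SSA is reported per unit ice mass.
import numpy as np

RHO_ICE = 917.0                      # kg m^-3

def density_and_ssa(ice, voxel_size_m):
    """ice: 3-D boolean array (True = ice voxel); returns (kg m^-3, m^2 kg^-1)."""
    density = ice.mean() * RHO_ICE
    faces = 0
    for axis in range(3):            # count interior ice/air face transitions per axis
        a = np.swapaxes(ice, 0, axis)
        faces += np.count_nonzero(a[1:] != a[:-1])
    surface_area = faces * voxel_size_m ** 2
    ice_mass = np.count_nonzero(ice) * voxel_size_m ** 3 * RHO_ICE
    return float(density), float(surface_area / ice_mass)

rng = np.random.default_rng(0)
vol = rng.random((40, 40, 40)) < 0.3          # toy binary "snow" sample, 10 um voxels
print(density_and_ssa(vol, voxel_size_m=10e-6))
```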

  1. Parallel algorithms for high-speed SAR processing

    NASA Astrophysics Data System (ADS)

    Mallorqui, Jordi J.; Bara, Marc; Broquetas, Antoni; Wis, Mariano; Martinez, Antonio; Nogueira, Leonardo; Moreno, Victoriano

    1998-11-01

    The mass production of SAR products and their use in monitoring emergency situations (oil spill detection, floods, etc.) require high-speed SAR processors. Two different parallel strategies for near real-time SAR processing, based on a multiblock version of the Chirp Scaling Algorithm (CSA), have been studied. The first is useful for small companies that would like to reduce computation times with no extra investment: it uses a cluster of heterogeneous UNIX workstations as a parallel computer. The second is oriented toward institutions that have to process large amounts of data in short times and can afford the cost of large parallel computers. In both cases, the parallel implementations reduced computation times compared with the sequential versions.

  2. Algorithms for a very high speed universal noiseless coding module

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Yeh, Pen-Shu

    1991-01-01

    The algorithmic definitions and performance characterizations are presented for a high-performance adaptive coding module. At least one of these (single-chip) implementations is expected to exceed 500 Mbits/s under laboratory conditions. A companion decoding module should operate at up to half the coder's rate. The module incorporates a powerful noiseless coder for Standard Form Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a dynamic range of 1.5 to 12-14 bits/sample (depending on the implementation).

  3. High specific activity platinum-195m

    DOEpatents

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-10-12

    A new composition of matter includes 195mPt characterized by a specific activity of at least 30 mCi/mg Pt, generally made by a method that includes the steps of: exposing 193Ir to a flux of neutrons sufficient to convert a portion of the 193Ir to 195mPt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce 195mPt.

  4. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using the Bézier curve and a B-spline. Numerical simulation based on a Reynolds averaged Navier-Stokes (RANS) turbulence model is performed in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while initial samples are selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that unexpected flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.

  5. Algorithms for high-speed universal noiseless coding

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Yeh, Pen-Shu; Miller, Warner

    1993-01-01

    This paper provides the basic algorithmic definitions and performance characterizations for a high-performance adaptive noiseless (lossless) 'coding module' which is currently under separate development as single-chip microelectronic circuits at two NASA centers. Laboratory tests of one of these implementations recently demonstrated coding rates of up to 900 Mbits/s. A companion 'decoding module' can operate at up to half the coder's rate. The functionality provided by these modules should be applicable to most of NASA's science data. The hardware modules incorporate a powerful adaptive noiseless coder for 'standard form' data sources (i.e., sources whose symbols can be represented by uncorrelated nonnegative integers where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a 'dynamic range' of 1.5 to 12-15 bits/sample (depending on the implementation). This is accomplished by adaptively choosing the best of many Huffman-equivalent codes to use on each block of 1-16 samples. Because of the extreme simplicity of these codes, no table lookups are actually required in an implementation, thus leading to the expected very high data rate capabilities already noted.
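
    The following is a minimal sketch of the adaptive idea described above: for each small block of non-negative integers, every candidate code parameter is tried and the cheapest encoding is kept. It uses generic Golomb-Rice codes as an illustration and is not the exact code family specified for the NASA modules; the sample block and parameter range are invented.

```python
# Minimal sketch of adaptive Rice/Golomb coding for blocks of non-negative integers:
# for each block, try every candidate parameter k and keep the cheapest encoding.
# Generic illustration only; not the coder specified for the NASA flight modules.

def rice_encode_value(n, k):
    """Encode one non-negative integer: unary quotient, a 0 terminator, k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    remainder = format(r, "b").zfill(k) if k > 0 else ""
    return "1" * q + "0" + remainder

def encode_block(block, k_candidates=range(0, 14)):
    """Pick the parameter k that minimises the encoded length of the block."""
    best = None
    for k in k_candidates:
        bits = "".join(rice_encode_value(n, k) for n in block)
        if best is None or len(bits) < len(best[1]):
            best = (k, bits)
    return best  # (chosen k, encoded bitstring)

samples = [3, 0, 1, 7, 2, 0, 4, 1, 0, 2, 5, 1, 0, 3, 2, 1]   # one block of 16 samples
k, bits = encode_block(samples)
print("chosen k:", k, "encoded length:", len(bits))
```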

  6. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  7. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The scheduling time of this algorithm is almost independent of the length of operator mobilities, as can be seen from the benchmark example presented (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. The algorithm is implemented in C on a SUN3/60 workstation.
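
    For illustration, here is a minimal sketch of generic resource-constrained list scheduling of a dataflow graph; the priority rule, the toy graph and the resource limit are invented and do not reproduce the paper's heuristic or its mobility-based resource estimate.

```python
# Minimal sketch of resource-constrained list scheduling for high-level synthesis.
# The DAG, priority rule and resource limit are illustrative only; the paper's
# heuristic additionally estimates hardware-unit counts from operator mobilities.

def list_schedule(ops, deps, n_units):
    """ops: iterable of op ids; deps: {op: set of predecessors}; n_units: ops per cycle."""
    done, schedule, cycle = set(), {}, 0
    while len(done) < len(ops):
        ready = [o for o in ops if o not in done and deps.get(o, set()) <= done]
        if not ready:
            raise ValueError("dependency cycle detected")
        # Simple priority: schedule ops with the most direct successors first.
        ready.sort(key=lambda o: -sum(o in deps.get(s, set()) for s in ops))
        for o in ready[:n_units]:
            schedule[o] = cycle
        done.update(ready[:n_units])
        cycle += 1
    return schedule

ops = ["a", "b", "c", "d", "e"]
deps = {"c": {"a", "b"}, "d": {"c"}, "e": {"c"}}
print(list_schedule(ops, deps, n_units=2))
# e.g. {'a': 0, 'b': 0, 'c': 1, 'd': 2, 'e': 2}
```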

  8. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    PubMed Central

    Ramyachitra, D.; Sofia, M.; Manikandan, P.

    2015-01-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples, so the difficulty is that the data are high dimensional while the sample size is small. This work addresses the problem by classifying the resulting dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), as well as the improved Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  9. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    PubMed

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples, so the difficulty is that the data are high dimensional while the sample size is small. This work addresses the problem by classifying the resulting dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), as well as the improved Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  10. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    NASA Astrophysics Data System (ADS)

    Ling, J.; Templeton, J.

    2015-08-01

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. Feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
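
    The following is a minimal sketch of the general approach (train a classifier on flow features to label points as high- or low-uncertainty with respect to a RANS assumption) using a random forest; the features and labels are synthetic stand-ins, not the paper's DNS/LES-validated database.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Minimal sketch of the approach: train a classifier on pointwise flow features to
# label points as high- or low-uncertainty with respect to a RANS modelling
# assumption. Features and labels here are synthetic stand-ins for illustration.

rng = np.random.default_rng(0)
n = 5000
features = rng.normal(size=(n, 4))   # e.g. normalized strain, rotation, wall distance, TKE
# Toy ground truth: flag a point as "high uncertainty" when a nonlinear combination is large.
labels = (features[:, 0] * features[:, 1] + 0.5 * features[:, 2] ** 2 > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("feature importances:", clf.feature_importances_)
```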

  11. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    SciTech Connect

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.

  12. Evaluation of machine learning algorithms for prediction of regions of high RANS uncertainty

    DOE PAGESBeta

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.

  13. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    DOE PAGESBeta

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.

  14. Evaluation of machine learning algorithms for prediction of regions of high RANS uncertainty

    SciTech Connect

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.

  15. An Algorithm for the Segmentation of Highly Abnormal Hearts Using a Generic Statistical Shape Model.

    PubMed

    Alba, Xenia; Pereanez, Marco; Hoogendoorn, Corne; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F; Lekadir, Karim

    2016-03-01

    Statistical shape models (SSMs) have been widely employed in cardiac image segmentation. However, in conditions that induce severe shape abnormality and remodeling, such as in the case of pulmonary hypertension (PH) or hypertrophic cardiomyopathy (HCM), a single SSM is rarely capable of capturing the anatomical variability in the extremes of the distribution. This work presents a new algorithm for the segmentation of severely abnormal hearts. The algorithm is highly flexible, as it does not require a priori knowledge of the involved pathology or any specific parameter tuning to be applied to the cardiac image under analysis. The fundamental idea is to approximate the gross effect of the abnormality with a virtual remodeling transformation between the patient-specific geometry and the average shape of the reference model (e.g., average normal morphology). To define this mapping, a set of landmark points are automatically identified during boundary point search, by estimating the reliability of the candidate points. With the obtained transformation, the feature points extracted from the patient image volume are then projected onto the space of the reference SSM, where the model is used to effectively constrain and guide the segmentation process. The extracted shape in the reference space is finally propagated back to the original image of the abnormal heart to obtain the final segmentation. Detailed validation with patients diagnosed with PH and HCM shows the robustness and flexibility of the technique for the segmentation of highly abnormal hearts of different pathologies. PMID:26552082

  16. [Fast segmentation algorithm of high resolution remote sensing image based on multiscale mean shift].

    PubMed

    Wang, Lei-Guang; Zheng, Chen; Lin, Li-Yu; Chen, Rong-Yuan; Mei, Tian-Can

    2011-01-01

    The Mean Shift algorithm is a robust approach toward feature space analysis and has been used widely for natural scene image and medical image segmentation. However, the high computational complexity of the algorithm has constrained its application to remote sensing images with massive information. A fast image segmentation algorithm is presented by extending the traditional mean shift method to the wavelet domain. In order to evaluate the effectiveness of the proposed algorithm, multispectral remote sensing images and a synthetic image are utilized. The results show that the proposed algorithm can improve the speed 5-7 times compared to the traditional MS method while preserving segmentation quality. PMID:21428083
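
    As a rough illustration of the speed-up idea (running mean shift on a coarse wavelet approximation rather than the full-resolution image), here is a minimal sketch using PyWavelets and scikit-learn; the toy image, scaling and bandwidth are invented and this is not the paper's multiscale algorithm.

```python
import numpy as np
import pywt
from sklearn.cluster import MeanShift

# Minimal sketch of the speed-up idea: run mean shift on the coarse wavelet
# approximation (a quarter-size image) instead of the full-resolution image.
# The toy image, intensity scaling and bandwidth are illustrative values only.

rng = np.random.default_rng(1)
image = np.zeros((128, 128))
image[:, 64:] = 1.0
image += 0.1 * rng.normal(size=image.shape)      # noisy two-region toy image

approx, _ = pywt.dwt2(image, "haar")             # coarse approximation, 64 x 64
rows, cols = np.indices(approx.shape)
features = np.column_stack([rows.ravel(), cols.ravel(), 20.0 * approx.ravel()])

segments = MeanShift(bandwidth=25.0, bin_seeding=True).fit_predict(features)
print("number of segments found on the coarse level:", len(np.unique(segments)))
```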

  17. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  18. High School Educational Specifications: Facilities Planning Standards. Edition I.

    ERIC Educational Resources Information Center

    Jefferson County School District R-1, Denver, CO.

    The Jefferson County School District (Colorado) has developed a manual of high school specifications for Design Advisory Groups and consultants to use for planning and designing the district's high school facilities. The specifications are provided to help build facilities that best meet the educational needs of the students to be served.…

  19. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  20. High-Performance Algorithm for Solving the Diagnosis Problem

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Vatan, Farrokh

    2009-01-01

    An improved method of model-based diagnosis of a complex engineering system is embodied in an algorithm that involves considerably less computation than do prior such algorithms. This method and algorithm are based largely on developments reported in several NASA Tech Briefs articles: The Complexity of the Diagnosis Problem (NPO-30315), Vol. 26, No. 4 (April 2002), page 20; Fast Algorithms for Model-Based Diagnosis (NPO-30582), Vol. 29, No. 3 (March 2005), page 69; Two Methods of Efficient Solution of the Hitting-Set Problem (NPO-30584), Vol. 29, No. 3 (March 2005), page 73; and Efficient Model-Based Diagnosis Engine (NPO-40544), on the following page. Some background information from the cited articles is prerequisite to a meaningful summary of the innovative aspects of the present method and algorithm. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD. Diagnosis, the task of finding faulty components, is reduced to finding those components whose abnormalities could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. The calculation of a minimal diagnosis is inherently a hard problem, the solution of which requires amounts of computation time and memory that increase exponentially with the number of components of the engineering system. Among the developments to reduce the computational burden, as reported in the cited articles, is the mapping of the diagnosis problem onto the integer-programming (IP) problem. This mapping makes it possible to utilize a variety of algorithms developed previously
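
    The sketch below shows the underlying combinatorial problem in its simplest form: a minimal diagnosis is a minimum-cardinality hitting set of the conflict sets. The brute-force search shown here is exponential and is only an illustration; the components and conflicts are invented, and the reported innovation is precisely to avoid this cost by mapping the problem to integer programming.

```python
from itertools import combinations

# Minimal sketch: a minimal diagnosis is a smallest set of components that
# intersects every conflict set (a set of components that cannot all be healthy).
# Brute force is exponential; the components and conflicts below are illustrative.

def minimal_diagnosis(components, conflicts):
    """Return a smallest set of components that intersects every conflict set."""
    for size in range(len(components) + 1):
        for candidate in combinations(components, size):
            if all(set(candidate) & conflict for conflict in conflicts):
                return set(candidate)
    return set(components)

components = {"valve", "pump", "sensor", "controller"}
conflicts = [{"valve", "pump"}, {"pump", "sensor"}, {"valve", "controller"}]
print(minimal_diagnosis(components, conflicts))   # e.g. {'valve', 'pump'} (size 2)
```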

  1. SU-E-T-305: Study of the Eclipse Electron Monte Carlo Algorithm for Patient Specific MU Calculations

    SciTech Connect

    Wang, X; Qi, S; Agazaryan, N; DeMarco, J

    2014-06-01

    Purpose: To evaluate the Eclipse electron Monte Carlo (eMC) algorithm based on patient specific monitor unit (MU) calculations, and to propose a new factor which quantitatively predicts the discrepancy of MUs between the eMC algorithm and hand calculations. Methods: Electron treatments were planned for 61 patients on Eclipse (Version 10.0) using the eMC algorithm for Varian TrueBeam linear accelerators. For each patient, the same treatment beam angle was kept for a point dose calculation at dmax performed with the reference condition, which used an open beam with a 15×15 cm2 cone and 100 SSD. A patient specific correction factor (PCF) was obtained as the ratio of this point dose to the calibration dose, which is 1 cGy per MU delivered at dmax. The hand calculation results were corrected by the PCFs and compared with MUs from the treatment plans. Results: The MUs from the treatment plans were on average (7.1±6.1)% higher than the hand calculations. The average MU difference between the corrected hand calculations and the eMC treatment plans was (0.07±3.48)%. A correlation coefficient of 0.8 was found between (1-PCF) and the percentage difference between the treatment plan and hand calculations. Most outliers were treatment plans with small beam openings (<4 cm) and low-energy beams (6 and 9 MeV). Conclusion: For CT-based patient treatment plans, the eMC algorithm tends to generate a larger MU than hand calculations. Caution should be taken for eMC patient plans with small field sizes and low energy beams. We hypothesize that the PCF ratio reflects the influence of patient surface curvature and tissue inhomogeneity on the patient specific percent depth dose (PDD) curve and MU calculations in the eMC algorithm.
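
    A small numeric sketch of the correction-factor arithmetic described above follows. The numbers are invented, and the direction of the correction (dividing the hand-calculated MU by the PCF) is an assumption made for illustration, not stated in the abstract.

```python
# Minimal numeric sketch of the patient-specific correction factor (PCF).
# Invented numbers; the correction direction (hand MU divided by PCF) is assumed.

calibration_dose = 1.00   # cGy per MU at dmax under reference conditions
emc_point_dose = 0.94     # cGy per MU computed by eMC for the same beam on the patient CT

pcf = emc_point_dose / calibration_dose          # patient-specific correction factor
hand_mu = 215.0                                  # MU from a standard hand calculation
corrected_mu = hand_mu / pcf                     # assumed correction direction

print(f"PCF = {pcf:.3f}, corrected hand-calculated MU = {corrected_mu:.1f}")
# A PCF below 1 raises the hand-calculated MU, consistent with the eMC plans
# averaging ~7% more MU than uncorrected hand calculations.
```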

  2. Stride search: A general algorithm for storm detection in high resolution climate data

    SciTech Connect

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
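
    As an illustration of the core idea (spacing search-sector centres by a fixed physical distance so the longitudinal stride widens toward the poles, instead of a fixed number of grid points), here is a minimal sketch; the sector radius and the centre-placement rule are invented, not the published implementation.

```python
import math

# Minimal sketch of the Stride Search idea: choose search-sector centres separated
# by a fixed physical distance, so the longitudinal stride grows toward the poles
# instead of over-sampling them. The sector radius is an illustrative value.

EARTH_RADIUS_KM = 6371.0

def stride_search_centres(sector_radius_km):
    """Yield (lat, lon) search-sector centres separated by roughly sector_radius_km."""
    lat_stride_deg = math.degrees(sector_radius_km / EARTH_RADIUS_KM)
    lat = -90.0
    while lat <= 90.0:
        # One degree of longitude shrinks as cos(latitude); widen the stride to compensate.
        coslat = max(math.cos(math.radians(lat)), 1e-6)
        lon_stride_deg = min(360.0, lat_stride_deg / coslat)
        lon = 0.0
        while lon < 360.0:
            yield (lat, lon)
            lon += lon_stride_deg
        lat += lat_stride_deg

centres = list(stride_search_centres(sector_radius_km=500.0))
print(len(centres), "sector centres; first three:", centres[:3])
```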

  3. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGESBeta

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  4. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and one of the most active research fields today. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed, based on images of planar or three-dimensional targets. Using this algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, the general Tsai algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provides a reference for precise measurement of relative position and attitude.

  5. A new adaptive GMRES algorithm for achieving high accuracy

    SciTech Connect

    Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.

    1996-12-31

    GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
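
    Below is a minimal sketch of an increase-only adaptive restart strategy built around SciPy's GMRES: run one restart cycle at a time and enlarge k whenever the residual reduction over the cycle is poor. The doubling heuristic, thresholds and test matrix are invented and do not reproduce the paper's criteria.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Minimal sketch of increase-only adaptive GMRES(k): one restart cycle at a time,
# growing k when a cycle gives a poor residual reduction. Heuristics are illustrative.

def adaptive_gmres(A, b, k=10, k_max=100, tol=1e-5, max_cycles=200):
    x = np.zeros_like(b)
    res = np.linalg.norm(b - A @ x)
    b_norm = np.linalg.norm(b)
    for _ in range(max_cycles):
        x, _ = gmres(A, b, x0=x, restart=k, maxiter=1)   # one restart cycle of k inner steps
        new_res = np.linalg.norm(b - A @ x)
        if new_res <= tol * b_norm:
            return x, k
        if new_res > 0.5 * res and k < k_max:            # poor reduction: enlarge the restart value
            k = min(2 * k, k_max)
        res = new_res
    return x, k

n = 400
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, k_final = adaptive_gmres(A, b)
print("final restart value:", k_final,
      "relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```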

  6. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms have some disadvantages, such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of the proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.

  7. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  8. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

    Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid multi-atlas registration, active contours and knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta and aortic root are located by multi-atlas registration followed by active contours refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications from these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001, volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30, volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate and automated quantification of CAC scores from non-contrast CT is feasible.
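
    For reference, the sketch below shows the classical per-slice Agatston calculation (thresholding at 130 HU, connected-component labeling, density weighting) that the automated pipeline reproduces per coronary territory; it is not the multi-atlas algorithm itself, and the toy slice, pixel size and minimum-area cutoff are invented.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of the classical per-slice Agatston calculation: threshold at
# 130 HU, label connected components, weight each lesion's area by its peak density.
# Toy slice and parameters are illustrative; lesion-to-artery assignment is omitted.

def density_weight(max_hu):
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    return 1  # 130-199 HU

def agatston_slice_score(hu_slice, pixel_area_mm2, min_area_mm2=1.0):
    mask = hu_slice >= 130
    labels, n = ndimage.label(mask)
    score = 0.0
    for lesion in range(1, n + 1):
        lesion_mask = labels == lesion
        area = lesion_mask.sum() * pixel_area_mm2
        if area < min_area_mm2:          # ignore tiny (likely noise) components
            continue
        score += area * density_weight(hu_slice[lesion_mask].max())
    return score

slice_hu = np.full((64, 64), -50.0)      # soft-tissue background
slice_hu[10:13, 10:14] = 450.0           # dense calcification
slice_hu[40:42, 20:22] = 180.0           # tiny faint calcification (filtered out)
print(agatston_slice_score(slice_hu, pixel_area_mm2=0.4 * 0.4))
```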

  9. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  10. An Analytic Approximation to Very High Specific Impulse and Specific Power Interplanetary Space Mission Analysis

    NASA Technical Reports Server (NTRS)

    Williams, Craig Hamilton

    1995-01-01

    A simple, analytic approximation is derived to calculate trip time and performance for propulsion systems of very high specific impulse (50,000 to 200,000 seconds) and very high specific power (10 to 1000 kW/kg) for human interplanetary space missions. The approach assumed field-free space, constant thrust/constant specific power, and near straight line (radial) trajectories between the planets. Closed form, one dimensional equations of motion for two-burn rendezvous and four-burn round trip missions are derived as a function of specific impulse, specific power, and propellant mass ratio. The equations are coupled to an optimizing parameter that maximizes performance and minimizes trip time. Data generated for hypothetical one-way and round trip human missions to Jupiter were found to be within 1% and 6% accuracy of integrated solutions respectively, verifying that for these systems, credible analysis does not require computationally intensive numerical techniques.

  11. Viewer preferences for classes of noise removal algorithms for high definition content

    NASA Astrophysics Data System (ADS)

    Deshpande, Sachin

    2012-03-01

    Perceived video quality studies were performed on a number of key classes of noise removal algorithms to determine viewer preference. The noise removal algorithm classes represent increasing complexity, from linear to nonlinear to adaptive to spatio-temporal filters. The subjective results quantify the perceived quality improvements that can be obtained with increasing complexity. The specific algorithm classes tested include: a linear spatial one-channel filter, a nonlinear spatial two-channel filter, an adaptive nonlinear spatial filter, and a multi-frame spatio-temporal adaptive filter. All algorithms were applied to full HD (1080p) content. Our subjective results show that the spatio-temporal (multi-frame) noise removal algorithm performs best among the algorithm classes, and its improvement over the original video sequences is statistically significant. On average, noise-removed video sequences are preferred over the original (noisy) video sequences. The adaptive bilateral and non-adaptive bilateral two-channel noise removal algorithms perform similarly on average, suggesting that a non-adaptive, parameter-tuned algorithm may be adequate.

  12. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Flight times and deliverable masses for electric and fusion propulsion systems are difficult to approximate. Numerical integration is required for these continuous thrust systems. Many scientists are not equipped with the tools and expertise to conduct interplanetary and interstellar trajectory analysis for their concepts. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. An analytical method derived in the companion paper was also evaluated. The accuracy of this method is discussed in the paper.

  13. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    PubMed Central

    Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of the genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate redundant computation in the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis of the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
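
    The sketch below illustrates the core observation for the TSP case: detecting the edges shared by every tour in the population, which can then be saved away and excluded from later recomputation. Only the detection step is shown; the caching inside the GA loop and the toy population are illustrative, not the paper's implementation.

```python
# Minimal sketch of the core observation for TSP: edges present in every tour of
# the population are likely to survive to the final solution, so they can be
# detected once and skipped in later generations. Toy population for illustration.

def tour_edges(tour):
    """Undirected edge set of a closed tour given as a list of city indices."""
    return {frozenset((tour[i], tour[(i + 1) % len(tour)])) for i in range(len(tour))}

def common_edges(population):
    """Edges shared by every individual in the population."""
    edge_sets = [tour_edges(t) for t in population]
    return set.intersection(*edge_sets)

population = [
    [0, 1, 2, 3, 4, 5],
    [0, 1, 2, 5, 4, 3],
    [1, 2, 0, 3, 4, 5],
]
print(common_edges(population))   # edges such as {1, 2} appear in every individual
```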

  14. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are (1) detector fabrication inaccuracies, (2) non-linearity and variations in the read-out electronics, and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Moreover, due to their complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on Field Programmable Gate Array (FPGA) platforms. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The FPGA-only hardware implementation of the algorithm has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be ported to a variety of infrared detectors equipped with an FPGA image processing module and reduces both stripe and ripple non-uniformity.
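
    For orientation, here is a minimal sketch of only the temporal high-pass core of scene-based NUC (a per-pixel recursive low-pass estimate of the slowly drifting offset, subtracted from each frame); the grayscale-mapping refinement of the THP and GM algorithm is not reproduced, and the time constant and toy scene are invented.

```python
import numpy as np

# Minimal sketch of the temporal high-pass core of scene-based NUC: estimate each
# pixel's slowly varying offset with a recursive per-pixel low-pass filter and
# subtract it from the frame. The grayscale-mapping refinement is not shown; the
# constant M is illustrative.

class TemporalHighPassNUC:
    def __init__(self, shape, time_constant_frames=32):
        self.lowpass = np.zeros(shape, dtype=np.float32)   # per-pixel offset estimate
        self.m = float(time_constant_frames)
        self.initialised = False

    def correct(self, frame):
        frame = frame.astype(np.float32)
        if not self.initialised:
            self.lowpass[...] = frame
            self.initialised = True
        else:
            # Recursive per-pixel low-pass: f_n = (x_n + (M - 1) * f_{n-1}) / M
            self.lowpass = (frame + (self.m - 1.0) * self.lowpass) / self.m
        # High-pass output; add back the mean so the image keeps its overall level.
        return frame - self.lowpass + self.lowpass.mean()

rng = np.random.default_rng(0)
fixed_pattern = rng.normal(0.0, 5.0, size=(4, 4))          # simulated fixed-pattern offsets
nuc = TemporalHighPassNUC(shape=(4, 4))
for t in range(200):
    scene = 100.0 + 10.0 * np.sin(0.1 * t)                 # spatially uniform, varying scene
    out = nuc.correct(scene + fixed_pattern)
print("residual fixed-pattern std after 200 frames:", float(out.std()))
```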

  15. Performing target specific band reduction using artificial neural networks and assessment of its efficacy using various target detection algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.

    2016-04-01

    Hyperspectral imaging (HSI) is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, landmine detection and target detection. Major issues in target detection using HSI are spectral variability, noise, small target size, huge data dimensions, high computational cost and complex backgrounds. Many popular detection algorithms do not work for difficult targets (e.g., small or camouflaged ones) and may result in high false-alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is crucial for the accurate interpretation of hyperspectral imagery. Using standard spectral libraries to study a target's spectral behaviour has the limitation that the library targets are measured under environmental conditions different from those of the application. This study uses spectral data of the same targets acquired during collection of the HSI image. The spectra are analyzed so that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) is used to identify the spectral range for reducing the data, and its efficacy for improving target detection is then verified. The ANN results propose discriminating band ranges for the targets; these ranges were then used to perform target detection with four popular spectral-matching detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ranges suggested by the ANN over the full spectrum for detection of the desired targets, and a comparative assessment of the algorithms was also performed using ROC.

  16. Development of a high-specific-speed centrifugal compressor

    SciTech Connect

    Rodgers, C.

    1997-07-01

    This paper describes the development of a subscale single-stage centrifugal compressor with a dimensionless specific speed (Ns) of 1.8, originally designed for full-size application as a high volume flow, low pressure ratio, gas booster compressor. The specific stage is noteworthy in that it provides a benchmark representing the performance potential of very high-specific-speed compressors, on which limited information is found in the open literature. Stage and component test performance characteristics are presented together with traverse results at the impeller exit. Traverse test results were compared with recent CFD computational predictions for an exploratory analytical calibration of a very high-specific-speed impeller geometry. The tested subscale (0.583) compressor essentially satisfied design performance expectations with an overall stage efficiency of 74%, including excessive exit casing losses. It was estimated that stage efficiency could be increased to 81% if the exit casing losses were halved.

  17. A novel algorithm for simultaneous SNP selection in high-dimensional genome-wide association studies

    PubMed Central

    2012-01-01

    Background Identification of causal SNPs in most genome-wide association studies relies on approaches that consider each SNP individually. However, there is a strong correlation structure among SNPs that needs to be taken into account. Hence, modern, computationally expensive regression methods that consider all markers simultaneously, and thus incorporate dependencies among SNPs, are increasingly employed for SNP selection. Results We develop a novel multivariate algorithm for large-scale SNP selection using CAR score regression, a promising new approach for prioritizing biomarkers. Specifically, we propose a computationally efficient procedure for shrinkage estimation of CAR scores from high-dimensional data. Subsequently, we conduct a comprehensive comparison study including five advanced regression approaches (boosting, lasso, NEG, MCP, and CAR score) and a univariate approach (marginal correlation) to determine the effectiveness in finding true causal SNPs. Conclusions Simultaneous SNP selection is a challenging task. We demonstrate that our CAR score-based algorithm consistently outperforms all competing approaches, both uni- and multivariate, in terms of correctly recovered causal SNPs and SNP ranking. An R package implementing the approach as well as R code to reproduce the complete study presented here is available from http://strimmerlab.org/software/care/. PMID:23113980
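
    The sketch below illustrates CAR-score-based ranking under the standard definition of CAR scores (correlations between the response and the Mahalanobis-decorrelated predictors); the simple linear shrinkage of the correlation matrix toward the identity stands in for the paper's shrinkage estimator, and the simulated genotypes, shrinkage level and causal SNPs are invented.

```python
import numpy as np

# Minimal sketch of CAR-score ranking: omega = R_XX^(-1/2) r_Xy, with markers ranked
# by squared CAR score. A plain linear shrinkage toward the identity stands in for
# the paper's shrinkage estimator; simulated genotypes are illustrative only.

def car_scores(X, y, shrinkage=0.2):
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    n = len(y)
    R = (Xc.T @ Xc) / n
    R = (1.0 - shrinkage) * R + shrinkage * np.eye(R.shape[0])   # shrink toward identity
    r_xy = (Xc.T @ yc) / n
    vals, vecs = np.linalg.eigh(R)
    R_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return R_inv_sqrt @ r_xy

rng = np.random.default_rng(0)
n, p = 500, 50
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # toy SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[[3, 17]] = 1.0                                    # two causal SNPs
y = X @ beta + rng.normal(size=n)
scores = car_scores(X, y)
print("top-ranked SNPs:", np.argsort(-scores ** 2)[:5])
```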

  18. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms rely largely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
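
    The following is a minimal sketch of the shared idea (distributing the expensive k-means assignment step across the cores of one machine) using Python multiprocessing; the paper's implementation is Java-based and built on transactional-memory design principles, and the data, chunking and worker count here are invented.

```python
import numpy as np
from multiprocessing import Pool

# Minimal sketch of multi-core k-means: the assignment step is split into chunks
# handled by a pool of worker processes; the centre update runs serially. This only
# illustrates distributing work across cores, not the paper's transactional-memory
# Java implementation.

def assign_chunk(args):
    chunk, centres = args
    d = ((chunk[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(data, k, n_iter=20, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), size=k, replace=False)]
    chunks = np.array_split(data, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(n_iter):
            labels = np.concatenate(
                pool.map(assign_chunk, [(c, centres) for c in chunks]))
            for j in range(k):                       # serial centre update
                if np.any(labels == j):
                    centres[j] = data[labels == j].mean(axis=0)
    return centres, labels

if __name__ == "__main__":
    data = np.vstack([np.random.default_rng(i).normal(i * 5, 1.0, size=(1000, 2))
                      for i in range(3)])
    centres, labels = parallel_kmeans(data, k=3)
    print(np.round(centres, 2))
```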

  19. High performance cone-beam spiral backprojection with voxel-specific weighting

    NASA Astrophysics Data System (ADS)

    Steckmann, Sven; Knaup, Michael; Kachelrieß, Marc

    2009-06-01

    Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans such as it is performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles the voxel is seen by the x-ray cone, is a complex function of the voxel position. In general, one needs to multiply a voxel-specific weight w(x, y, z, α) prior to adding a projection from angle α to a voxel at position x, y, z. Often, the weight function has no analytically closed form and must be numerically determined. Storage of the weights is prohibitive since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions and therefore is in the order of up to 10^12 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit operating systems to store large amounts of precomputed weights, even above the 4 GB limit. Our trick is to backproject into slices that are rotated in the same manner as the spiral trajectory rotates. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 17 giga voxel updates per second on our systems that are equipped with four standard Intel X7460 hexa core CPUs (Intel Xeon 7300 platform, 2.66 GHz, Intel Corporation). This equals the reconstruction of 344 images per second assuming that each slice consists of 512 × 512 pixels and receives contributions from 512 projections. Thereby, it is an order of magnitude faster than a highly optimized code that does not make use of the spiral symmetry. In its present version, the

  20. High performance cone-beam spiral backprojection with voxel-specific weighting.

    PubMed

    Steckmann, Sven; Knaup, Michael; Kachelriess, Marc

    2009-06-21

    Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans such as it is performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles the voxel is seen by the x-ray cone, is a complex function of the voxel position. In general, one needs to multiply a voxel-specific weight w(x, y, z, alpha) prior to adding a projection from angle alpha to a voxel at position x, y, z. Often, the weight function has no analytically closed form and must be numerically determined. Storage of the weights is prohibitive since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions and therefore is in the order of up to 10^12 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit operating systems to store large amounts of precomputed weights, even above the 4 GB limit. Our trick is to backproject into slices that are rotated in the same manner as the spiral trajectory rotates. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 17 giga voxel updates per second on our systems that are equipped with four standard Intel X7460 hexa core CPUs (Intel Xeon 7300 platform, 2.66 GHz, Intel Corporation). This equals the reconstruction of 344 images per second assuming that each slice consists of 512 x 512 pixels and receives contributions from 512 projections. Thereby, it is an order of magnitude faster than a highly optimized code that does not make use of the spiral symmetry. In its present version

  1. Algorithms and architectures for high performance analysis of semantic graphs.

    SciTech Connect

    Hendrickson, Bruce Alan

    2005-09-01

    analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.

  2. Loop tuning with specification on gain and phase margins via modified second-order sliding mode control algorithm

    NASA Astrophysics Data System (ADS)

    Boiko, I. M.

    2012-01-01

    The modified second-order sliding mode algorithm is used for controller tuning. Namely, the modified suboptimal algorithm-based test (modified SOT) and non-parametric tuning rules for proportional-integral-derivative (PID) controllers are presented in this article. In the developed method of test and tuning, the idea of coordinated selection of the test parameters and the controller tuning parameters is introduced. The proposed approach allows for the formulation of simple non-parametric tuning rules for PID controllers that provide desired amplitude or phase margins exactly. In the modified SOT, the frequency of the self-excited oscillations can be generated equal to either the phase crossover frequency or the magnitude crossover frequency of the open-loop system frequency response (including a future PID controller) - depending on the tuning method choice. The first option will provide tuning with specification on gain margin, and the second option will ensure tuning with specification on phase margin. Tuning rules for a PID controller and simulation examples are provided.

  3. Noncovalent functionalization of carbon nanotubes for highly specific electronic biosensors

    NASA Astrophysics Data System (ADS)

    Chen, Robert J.; Bangsaruntip, Sarunya; Drouvalakis, Katerina A.; Wong Shi Kam, Nadine; Shim, Moonsub; Li, Yiming; Kim, Woong; Utz, Paul J.; Dai, Hongjie

    2003-04-01

    Novel nanomaterials for bioassay applications represent a rapidly progressing field of nanotechnology and nanobiotechnology. Here, we present an exploration of single-walled carbon nanotubes as a platform for investigating surface-protein and protein-protein binding and developing highly specific electronic biomolecule detectors. Nonspecific binding on nanotubes, a phenomenon found with a wide range of proteins, is overcome by immobilization of polyethylene oxide chains. A general approach is then advanced to enable the selective recognition and binding of target proteins by conjugation of their specific receptors to polyethylene oxide-functionalized nanotubes. This scheme, combined with the sensitivity of nanotube electronic devices, enables highly specific electronic sensors for detecting clinically important biomolecules such as antibodies associated with human autoimmune diseases.

  4. The evolutionary development of high specific impulse electric thruster technology

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Hamley, John A.; Patterson, Michael J.; Rawlin, Vincent K.; Myers, Roger M.

    1992-01-01

    Electric propulsion flight and technology demonstrations conducted primarily by Europe, Japan, China, the U.S., and the USSR are reviewed. Evolutionary mission applications for high specific impulse electric thruster systems are discussed, and the status of arcjet, ion, and magnetoplasmadynamic thrusters and associated power processor technologies is summarized.

  5. The evolutionary development of high specific impulse electric thruster technology

    SciTech Connect

    Sovey, J.S.; Hamley, J.A.; Patterson, M.J.; Rawlin, V.K.; Myers, R.M.

    1992-03-01

    Electric propulsion flight and technology demonstrations conducted primarily by Europe, Japan, the People's Republic of China, the USA, and the USSR are reviewed. Evolutionary mission applications for high specific impulse electric thruster systems are discussed, and the status of arcjet, ion, and magnetoplasmadynamic thrusters and associated power processor technologies is summarized.

  6. A Ratio Test of Interrater Agreement with High Specificity

    ERIC Educational Resources Information Center

    Cousineau, Denis; Laurencelle, Louis

    2015-01-01

    Existing tests of interrater agreement have high statistical power; however, they lack specificity. If the ratings of the two raters do not show agreement but are not random, the current tests, some of which are based on Cohen's kappa, will often reject the null hypothesis, leading to the wrong conclusion that agreement is present. A new test of…

  7. Experiences with the hydraulic design of the high specific speed Francis turbine

    NASA Astrophysics Data System (ADS)

    Obrovsky, J.; Zouhar, J.

    2014-03-01

    The high specific speed Francis turbine is still a suitable alternative for the refurbishment of older hydro power plants with lower heads and worse cavitation conditions. In this paper, the design process for such a turbine is introduced, together with a comparison of results from homological model tests performed in the hydraulic laboratory of ČKD Blansko Engineering. The turbine runner was designed using an optimization algorithm and considering the high specific speed hydraulic profile. This means that the hydraulic profiles of the spiral case, the distributor and the draft tube were taken from a Kaplan turbine. The optimization was run as an automatic cycle and was based on a simplex optimization method as well as on a genetic algorithm. The number of blades is shown to be the parameter that changes the resulting specific speed of the turbine between ns=425 and 455, together with the cavitation characteristics. Minimization of cavitation on the blade surface as well as on the inlet edge of the runner blade was taken into account during the design process. The results of the CFD analyses as well as the model tests are presented in the paper.

  8. Phase-unwrapping algorithm for images with high noise content based on a local histogram

    NASA Astrophysics Data System (ADS)

    Meneses, Jaime; Gharbi, Tijani; Humbert, Philippe

    2005-03-01

    We present a robust phase-unwrapping algorithm designed for use on phase images with high noise content. The algorithm proceeds by first identifying regions with continuous phase values located between fringe boundaries in an image and then phase shifting the regions with respect to one another by multiples of 2π to unwrap the phase. Image pixels are segmented into interfringe and fringe-boundary areas by use of a local histogram of the wrapped phase. The algorithm has been used successfully to unwrap phase images generated in a three-dimensional shape measurement for noninvasive quantification of human skin structure in dermatology, cosmetology, and plastic surgery.
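    The paper's segmentation step relies on a local histogram of a 2-D fringe image; the sketch below only illustrates the underlying unwrapping operation, adding integer multiples of 2π along a noisy 1-D wrapped phase, and is a simplification rather than the authors' method:

    ```python
    import numpy as np

    # 1-D illustration of phase unwrapping: restore the multiples of 2*pi
    # removed by wrapping. This simplifies the paper's 2-D, histogram-based
    # region method down to a single scan line.
    true_phase = np.linspace(0.0, 6.0 * np.pi, 500) + 0.05 * np.random.randn(500)
    wrapped = np.angle(np.exp(1j * true_phase))          # wrap into (-pi, pi]

    def unwrap_1d(phase, tol=np.pi):
        out = phase.copy()
        for i in range(1, len(out)):
            d = out[i] - out[i - 1]
            if d > tol:                                  # wrapped jump down: shift the tail by -2*pi
                out[i:] -= 2.0 * np.pi
            elif d < -tol:                               # wrapped jump up: shift the tail by +2*pi
                out[i:] += 2.0 * np.pi
        return out

    unwrapped = unwrap_1d(wrapped)
    print(np.max(np.abs(unwrapped - true_phase)))        # residual stays at the noise level
    ```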

  9. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity with reduced computation cost, which is a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off relationship between network throughput and computation complexity in the routing table update procedure by a simulation study. PMID:26480397
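    A minimal sketch of a shortest-path-plus-first-fit RWA heuristic in the spirit of the algorithms described above; the topology, the requests, the wavelength count and the processing order are invented for illustration and do not reproduce the authors' exact policies:

    ```python
    import networkx as nx

    # Toy RWA heuristic: route each request on its shortest path, then assign
    # the first wavelength that is free on every link of that path
    # (wavelength-continuity constraint). Topology and requests are invented.
    N_WAVELENGTHS = 4
    G = nx.Graph()
    G.add_weighted_edges_from([("A", "B", 1.0), ("B", "C", 1.0), ("A", "C", 2.5), ("C", "D", 1.0)])
    free = {frozenset(e): set(range(N_WAVELENGTHS)) for e in G.edges()}

    # A "hottest-request-first" style policy would sort requests by demand and
    # distance; here they are simply processed in the given order.
    requests = [("A", "D"), ("A", "C"), ("B", "D")]
    for src, dst in requests:
        path = nx.shortest_path(G, src, dst, weight="weight")
        links = [frozenset((path[i], path[i + 1])) for i in range(len(path) - 1)]
        common = set.intersection(*(free[l] for l in links))
        if common:
            wl = min(common)                             # first-fit wavelength
            for l in links:
                free[l].discard(wl)
            print(f"{src}->{dst}: path {path}, wavelength {wl}")
        else:
            print(f"{src}->{dst}: blocked")
    ```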

  10. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    NASA Astrophysics Data System (ADS)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    uniquely powerful computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and to help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This unique integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux Clusters.

  11. A novel algorithm combining oversampling and digital lock-in amplifier of high speed and precision

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhou, Mei; He, Feng; Lin, Ling

    2011-09-01

    Because of the large amount of arithmetic in standard digital lock-in detection, a high performance processor is needed to implement the algorithm in real time. This paper presents a novel algorithm that integrates oversampling and high-speed lock-in detection. The algorithm sets the sampling frequency to a whole-number multiple of four times the input signal frequency, and then uses common downsampling techniques to lower the sampling frequency to four times the input signal frequency. This effectively removes noise interference and improves detection accuracy. The phase-sensitive detector is then implemented: it simply adds and subtracts four samples per period at the same phases, replacing almost all multiplication operations and thereby substantially speeding up the digital lock-in calculation. Furthermore, a correction factor is introduced to improve the calculation accuracy of the amplitude, so that an error caused by the algorithm can in theory be eliminated completely. The results of simulations and actual experiments show that the novel algorithm combining digital lock-in detection and oversampling not only has high precision but is also very fast. The new algorithm is therefore suitable for real-time weak-signal detection on a general-purpose microprocessor, not just a digital signal processor.
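    A small numpy sketch of the four-samples-per-period phase-sensitive detection described above; the signal parameters are invented and the amplitude correction factor mentioned in the abstract is not reproduced:

    ```python
    import numpy as np

    # Four-point lock-in: sample at exactly 4x the signal frequency, so the
    # in-phase and quadrature components reduce to additions and subtractions.
    f_sig = 1_000.0                      # signal frequency (Hz), invented
    fs = 4.0 * f_sig                     # four samples per period
    n_periods = 200
    t = np.arange(4 * n_periods) / fs
    amp, phase = 0.2, 0.7
    x = amp * np.sin(2 * np.pi * f_sig * t + phase) + 0.5 * np.random.randn(t.size)

    s = x.reshape(n_periods, 4)          # one row per period: samples at 0, 90, 180, 270 degrees
    I = np.mean(s[:, 0] - s[:, 2]) / 2.0 # in-phase component; averaging over periods suppresses noise
    Q = np.mean(s[:, 1] - s[:, 3]) / 2.0 # quadrature component
    print("amplitude estimate:", np.hypot(I, Q), "phase estimate:", np.arctan2(I, Q))
    ```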

  12. Brief Report: exploratory analysis of the ADOS revised algorithm: specificity and predictive value with Hispanic children referred for autism spectrum disorders.

    PubMed

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-07-01

    This study compared Autism Diagnostic Observation Schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained by applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module 1. New algorithm scores on modules 2 and 3 remained consistent with the original algorithm scores. The Mann-Whitney U test was applied to compare the revised algorithm and clinical levels of social impairment to determine whether significant differences were evident. Results of the Mann-Whitney U analyses were inconsistent and demonstrated less specificity for children with milder levels of social impairment. The revised algorithm demonstrated accuracy for the more severely autistic group. PMID:18026872

  13. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells. PMID:23651286

  14. High-speed computation of the EM algorithm for PET image reconstruction

    SciTech Connect

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-10-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm, namely, the long computation time due to slow convergence and the large memory required for storage of the image, the projection data and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for high-speed computation of the EM algorithm using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and the EH system. The results show that the computational speed of an EH using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs.
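    For reference, the EM (MLEM) iteration being parallelized can be written in a few lines; the dense random system matrix below is a placeholder, since a real PET probability matrix is sparse and far larger:

    ```python
    import numpy as np

    # Minimal MLEM (EM) iteration for emission tomography:
    #   lambda_new = lambda / sensitivity * A^T ( y / (A lambda) )
    # A is the system (probability) matrix, y the measured projection counts.
    rng = np.random.default_rng(0)
    n_bins, n_voxels = 400, 256
    A = rng.random((n_bins, n_voxels))                # placeholder system matrix
    true_img = rng.random(n_voxels)
    y = rng.poisson(A @ true_img)                     # noisy projection data

    lam = np.ones(n_voxels)                           # uniform initial image
    sensitivity = A.sum(axis=0)                       # A^T 1
    for it in range(50):
        expected = A @ lam                            # forward projection
        ratio = y / np.maximum(expected, 1e-12)       # guard against division by zero
        lam *= (A.T @ ratio) / sensitivity            # backproject and update

    print("relative error:", np.linalg.norm(lam - true_img) / np.linalg.norm(true_img))
    ```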

  15. High speed multiplier using Nikhilam Sutra algorithm of Vedic mathematics

    NASA Astrophysics Data System (ADS)

    Pradhan, Manoranjan; Panda, Rutuparna

    2014-03-01

    This article presents the design of a new high-speed multiplier architecture using the Nikhilam Sutra of Vedic mathematics. The proposed multiplier architecture computes the complements of the large operands with respect to their nearest base in order to perform the multiplication. The multiplication of two large operands is reduced to the multiplication of their complements plus an addition. It is more efficient when the magnitudes of both operands are more than half of their maximum values. The carry-save adder in the multiplier architecture increases the speed of the addition of partial products. The multiplier circuit is synthesised and simulated using Xilinx ISE 10.1 software and implemented on a Spartan 2 FPGA device XC2S30-5pq208. The output parameters such as propagation delay and device utilisation are calculated from the synthesis results. The performance evaluation results in terms of speed and device utilisation are compared with earlier multiplier architectures. The proposed design shows speed improvements compared to multiplier architectures presented in the literature.
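    A short Python sketch of the Nikhilam idea underlying the architecture: for operands close to a base, multiplication reduces to one small multiplication of the complements plus a cross subtraction and a shift (the base values below are illustrative):

    ```python
    def nikhilam_multiply(a: int, b: int, base: int) -> int:
        """Multiply two numbers near `base` using their complements.

        (base - a) and (base - b) are small, so the only true multiplication
        is between the two complements; the rest is a subtraction and a shift
        (multiplication by the base).
        """
        ca, cb = base - a, base - b          # complements from the nearest base
        left = a - cb                        # equivalently b - ca (cross subtraction)
        right = ca * cb                      # small multiplication of complements
        return left * base + right

    # Works best when both operands exceed half the base, e.g. 97 x 94 with base 100:
    assert nikhilam_multiply(97, 94, 100) == 97 * 94
    assert nikhilam_multiply(988, 997, 1000) == 988 * 997
    print(nikhilam_multiply(97, 94, 100))    # 9118
    ```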

  16. High Quality Typhoon Cloud Image Restoration by Combining Genetic Algorithm with Contourlet Transform

    SciTech Connect

    Zhang Changjiang; Wang Xiaodong

    2008-11-06

    An efficient typhoon cloud image restoration algorithm is proposed. After applying the contourlet transform to a typhoon cloud image, noise is reduced in the high-frequency sub-bands. A weighted median filter is used to reduce the noise in the contourlet domain, and the inverse contourlet transform is applied to obtain the de-noised image. In order to enhance the global contrast of the typhoon cloud image, an incomplete Beta transform (IBT) is used to determine a non-linear gray-level transform curve that enhances the global contrast of the de-noised typhoon cloud image. A genetic algorithm is used to obtain the optimal gray-level transform curve, with information entropy as the fitness function of the genetic algorithm. Experimental results show that the new algorithm enhances the global contrast of the typhoon cloud image while effectively reducing its noise.

  17. Fast two-dimensional super-resolution image reconstruction algorithm for ultra-high emitter density.

    PubMed

    Huang, Jiaqing; Gumpper, Kristyn; Chi, Yuejie; Sun, Mingzhai; Ma, Jianjie

    2015-07-01

    Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that can be handled by the image reconstruction algorithms. Multiple algorithms have been developed to accurately locate the emitters even when they have significant overlaps. Currently, compressive-sensing-based algorithm (CSSTORM) achieves the highest emitter density. However, CSSTORM is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with ℓ1-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and make it possible for online super-resolution reconstruction of high-density emitters. PMID:26125349

  18. Cooperative scheduling of imaging observation tasks for high-altitude airships based on propagation algorithm.

    PubMed

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem for imaging observation tasks performed by high-altitude airships is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, while the solution to the subproblem determines the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. First, the task set is divided using a k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with a key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. This paper also describes the implementation of the above algorithms, with a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  19. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem for imaging observation tasks performed by high-altitude airships is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, while the solution to the subproblem determines the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. First, the task set is divided using a k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with a key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. This paper also describes the implementation of the above algorithms, with a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  20. Method of preparing high specific activity platinum-195m

    DOEpatents

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-06-15

    A method of preparing high-specific-activity ¹⁹⁵ᵐPt includes the steps of: exposing ¹⁹³Ir to a flux of neutrons sufficient to convert a portion of the ¹⁹³Ir to ¹⁹⁵ᵐPt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce ¹⁹⁵ᵐPt.

  1. The evolutionary development of high specific impulse electric thruster technology

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Hamley, John A.; Patterson, Michael J.; Rawlin, Vincent K.; Myers, Roger M.

    1992-01-01

    Electric propulsion flight and technology demonstrations conducted in the USA, Europe, Japan, China, and the USSR are reviewed with reference to the major flight-qualified electric propulsion systems. These include resistojets, ion thrusters, ablative pulsed plasma thrusters, stationary plasma thrusters, pulsed magnetoplasmadynamic thrusters, and arcjets. Evolutionary mission applications are presented for high specific impulse electric thruster systems. The current status of arcjet, ion, and magnetoplasmadynamic thrusters and their associated power processor technologies is summarized.

  2. The evolutionary development of high specific impulse electric thruster technology

    SciTech Connect

    Sovey, J.S.; Hamley, J.A.; Patterson, M.J.; Rawlin, V.K.; Myers, R.M. (Sverdrup Technology, Inc., Brook Park, OH)

    1992-03-01

    Electric propulsion flight and technology demonstrations conducted in the USA, Europe, Japan, China, and the USSR are reviewed with reference to the major flight-qualified electric propulsion systems. These include resistojets, ion thrusters, ablative pulsed plasma thrusters, stationary plasma thrusters, pulsed magnetoplasmadynamic thrusters, and arcjets. Evolutionary mission applications are presented for high specific impulse electric thruster systems. The current status of arcjet, ion, and magnetoplasmadynamic thrusters and their associated power processor technologies is summarized. 114 refs.

  3. Method for preparing high specific activity 177Lu

    SciTech Connect

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-04-06

    A method of separating lutetium from a solution containing Lu and Yb, particularly reactor-produced ¹⁷⁷Lu and ¹⁷⁷Yb, includes the steps of: providing a chromatographic separation apparatus containing LN resin; loading the apparatus with a solution containing Lu and Yb; and eluting the apparatus to chromatographically separate the Lu and the Yb in order to produce high-specific-activity ¹⁷⁷Lu.

  4. Evaluating three evapotranspiration mapping algorithms with lysimetric data in the semi-arid Texas High Plains

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ground water levels are declining at unsustainable rates in the Texas High Plains. Accurate evapotranspiration (ET) maps would provide valuable information on regional crop water use and hydrology. This study evaluated three remote sensing based algorithms for estimating ET rates for the Texas High ...

  5. Mutation specific immunohistochemistry is highly specific for the presence of calreticulin mutations in myeloproliferative neoplasms.

    PubMed

    Andrici, Juliana; Farzin, Mahtab; Clarkson, Adele; Sioson, Loretta; Sheen, Amy; Watson, Nicole; Toon, Christopher W; Koleth, Mary; Stevenson, William; Gill, Anthony J

    2016-06-01

    The identification of somatic calreticulin (CALR) mutations can be used to confirm the diagnosis of a myeloproliferative disorder in Philadelphia chromosome-negative, JAK2 and MPL wild type patients with thrombocytosis. All pathogenic CALR mutations result in an identical C-terminal protein and therefore may be identifiable by immunohistochemistry. We sought to test the sensitivity and specificity of mutation specific immunohistochemistry for pathogenic CALR mutations using a commercially available mouse monoclonal antibody (clone CAL2). Immunohistochemistry for mutant calreticulin was performed on the most recent bone marrow trephine from a cohort of patients enriched for CALR mutations and compared to mutation testing performed by polymerase chain reaction (PCR) amplification followed by fragment length analysis. Twenty-nine patients underwent both immunohistochemistry and molecular testing. Eleven patients had CALR mutation, and immunohistochemistry was positive in nine (82%). One discrepant case appeared to represent genuine false negative immunohistochemistry. The other may be attributable to a 12 year delay between the bone marrow trephine and the specimen which underwent molecular testing, particularly because a liver biopsy performed at the same time as molecular testing demonstrated positive staining in megakaryocytes in extramedullary haematopoiesis. All 18 cases which lacked CALR mutation demonstrated negative staining. In this population enriched for CALR mutations, the specificity was 100%; sensitivity 82-91%, positive predictive value 100% and negative predictive value 90-95%. We conclude that mutation specific immunohistochemistry is highly specific for the presence of CALR mutations. Whilst it may not identify all mutations, it may be very valuable in routine clinical care. PMID:27114372

  6. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
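    A compact sketch of the basic loop behind such algorithms, with selection, crossover and mutation applied to a toy bit-counting fitness function; the population size, rates and genome length are illustrative only:

    ```python
    import random

    # Tiny genetic algorithm maximizing a toy fitness function: the number of
    # 1-bits in a fixed-length bitstring. All parameters are illustrative.
    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 40, 60, 100, 0.01

    def fitness(genome):
        return sum(genome)

    def tournament(pop, k=3):
        return max(random.sample(pop, k), key=fitness)   # tournament selection

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        new_pop = []
        for _ in range(POP_SIZE):
            p1, p2 = tournament(pop), tournament(pop)
            cut = random.randrange(1, GENOME_LEN)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - b if random.random() < MUT_RATE else b for b in child]  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop

    best = max(pop, key=fitness)
    print("best fitness:", fitness(best), "out of", GENOME_LEN)
    ```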

  7. High resolution, molecular-specific, reflectance imaging in optically dense tissue phantoms with structured-illumination

    NASA Astrophysics Data System (ADS)

    Tkaczyk, Tomasz S.; Rahman, Mohammed; Mack, Vivian; Sokolov, Konstantin; Rogers, Jeremy D.; Richards-Kortum, Rebecca; Descour, Michael R.

    2004-08-01

    Structured-illumination microscopy delivers confocal-imaging capabilities and may be used for optical sectioning in bio-imaging applications. However, previous structured-illumination implementations are not capable of imaging molecular changes within highly scattering, biological samples in reflectance mode. Here, we present two advances which enable successful structured illumination reflectance microscopy to image molecular changes in epithelial tissue phantoms. First, we present the sine approximation algorithm to improve the ability to reconstruct the in-focus plane when the out-of-focus light is much greater in magnitude. We characterize the dependencies of this algorithm on phase step error, random noise and backscattered out-of-focus contributions. Second, we utilize a molecular-specific reflectance contrast agent based on gold nanoparticles to label disease-related biomarkers and increase the signal and signal-to-noise ratio (SNR) in structured illumination microscopy of biological tissue. Imaging results for multi-layer epithelial cell phantoms with optical properties characteristic of normal and cancerous tissue labeled with nanoparticles targeted against the epidermal growth factor receptor (EGFR) are presented. Structured illumination images reconstructed with the sine approximation algorithm compare favorably to those obtained with a standard confocal microscope; this new technique can be implemented in simple and small imaging platforms for future clinical studies.

  8. Automatic, Real-Time Algorithms for Anomaly Detection in High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Srivastava, A. N.; Nemani, R. R.; Votava, P.

    2008-12-01

    Earth observing satellites are generating data at an unprecedented rate, surpassing almost all other data intensive applications. However, most of the data that arrives from the satellites is not analyzed directly. Rather, multiple scientific teams analyze only a small fraction of the total data available in the data stream. Although there are many reasons for this situation, one paramount concern is developing algorithms and methods that can analyze the vast, high dimensional, streaming satellite images. This paper describes a new set of methods that are among the fastest available algorithms for real-time anomaly detection. These algorithms were built to maximize accuracy and speed for a variety of applications in fields outside of the earth sciences. However, our studies indicate that with appropriate modifications, these algorithms can be extremely valuable for identifying anomalies rapidly using only modest computational power. We review two algorithms that are used as benchmarks in the field, Orca and One-Class Support Vector Machines, and discuss the anomalies that are discovered in MODIS data taken over the Central California region. We are especially interested in automatic identification of disturbances within ecosystems (e.g., wildfires, droughts, floods, insect/pest damage, wind damage, logging). We show the scalability of the algorithms and demonstrate that with appropriately adapted technology, the dream of real-time analysis can be made a reality.
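    A hedged sketch of the One-Class SVM benchmark mentioned above, applied to synthetic stand-in feature vectors rather than real MODIS pixels:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    # One-Class SVM anomaly detection on synthetic data standing in for
    # per-pixel feature vectors (e.g. band reflectances); not real MODIS data.
    rng = np.random.default_rng(1)
    normal = rng.normal(loc=0.0, scale=1.0, size=(2000, 4))      # "normal" scene pixels
    anomalies = rng.normal(loc=4.0, scale=0.5, size=(20, 4))     # disturbed pixels

    model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(normal)
    test = np.vstack([normal[:100], anomalies])
    labels = model.predict(test)                                 # +1 inlier, -1 anomaly

    print("flagged as anomalous:", int((labels == -1).sum()), "of", labels.size)
    ```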

  9. High efficiency cell-specific targeting of cytokine activity

    NASA Astrophysics Data System (ADS)

    Garcin, Geneviève; Paul, Franciane; Staufenbiel, Markus; Bordat, Yann; van der Heyden, José; Wilmes, Stephan; Cartron, Guillaume; Apparailly, Florence; de Koker, Stefaan; Piehler, Jacob; Tavernier, Jan; Uzé, Gilles

    2014-01-01

    Systemic toxicity currently prevents exploiting the huge potential of many cytokines for medical applications. Here we present a novel strategy to engineer immunocytokines with very high targeting efficacies. The method lies in the use of mutants of toxic cytokines that markedly reduce their receptor-binding affinities, and that are thus rendered essentially inactive. Upon fusion to nanobodies specifically binding to marker proteins, activity of these cytokines is selectively restored for cell populations expressing this marker. This ‘activity-by-targeting’ concept was validated for type I interferons and leptin. In the case of interferon, activity can be directed to target cells in vitro and to selected cell populations in mice, with up to 1,000-fold increased specific activity. This targeting strategy holds promise to revitalize the clinical potential of many cytokines.

  10. Cellulose antibody films for highly specific evanescent wave immunosensors

    NASA Astrophysics Data System (ADS)

    Hartmann, Andreas; Bock, Daniel; Jaworek, Thomas; Kaul, Sepp; Schulze, Matthais; Tebbe, H.; Wegner, Gerhard; Seeger, Stefan

    1996-01-01

    For the production of recognition elements for evanescent wave immunosensors, optical waveguides have to be coated with ultrathin, stable antibody films. In the present work, non-amphiphilic alkylated cellulose and copolyglutamate films are tested as monolayer matrices for antibody immobilization using the Langmuir-Blodgett technique. These films are transferred onto optical waveguides and serve as excellent matrices for the immobilization of antibodies with high density and specificity. In addition to the multi-step immobilization of immunoglobulin G (IgG) on photochemically crosslinked and oxidized polymer films, the direct one-step transfer of mixed antibody-polymer films is performed. Both planar waveguides and optical fibers are suitable substrates for the immobilization. The activity and specificity of the immobilized antibodies are controlled by the enzyme-linked immunosorbent assay (ELISA) technique. As a result, reduced non-specific interactions between antigens and the substrate surface are observed when cinnamoylbutyether-cellulose is used as the film matrix for antibody immobilization. Using evanescent wave sensor (EWS) technology, immunosensor assays are performed in order to determine both the non-specific adsorption of differently coated polymethylmethacrylate (PMMA) fibers and the long-term stability of the antibody films. Specificities of one-step transferred IgG-cellulose films are drastically enhanced compared to IgG-copolyglutamate films. Cellulose-IgG films are used in enzymatic sandwich assays with mucin as a clinically relevant antigen that is recognized by the antibodies BM2 and BM7. A mucin calibration measurement is recorded. So far, the observed detection limit for mucin is about 8 ng/ml.

  11. A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas

    SciTech Connect

    Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q

    2007-04-18

    A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.

  12. A reconstruction algorithm based on sparse representation for Raman signal processing under high background noise

    NASA Astrophysics Data System (ADS)

    Fan, X.; Wang, X.; Wang, X.; Xu, Y.; Que, J.; He, H.; Wang, X.; Tang, M.

    2016-02-01

    Background noise is one of the main sources of interference in Raman spectroscopy measurement and imaging. In this paper, a sparse representation based algorithm is presented to process Raman signals under high background noise. In contrast with existing de-noising methods, the proposed method reconstructs the pure Raman signals by estimating the Raman peak information. The advantages of the proposed algorithm are its high noise tolerance and the low attenuation of the pure Raman signal, both of which result from its reconstruction principle. Meanwhile, the Batch-OMP algorithm is applied to accelerate the training of the sparse representation. The method is therefore well suited to Raman measurement or imaging instruments used to observe fast dynamic processes, where the scanning time has to be shortened and the signal-to-noise ratio (SNR) of the raw signal is reduced. In both simulation and experiment, the de-noising results obtained by the proposed algorithm were better than those of the traditional Savitzky-Golay (S-G) filter and the fixed-threshold wavelet de-noising algorithm.

  13. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable for improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620

  14. A Monolithic Algorithm for High Reynolds Number Fluid-Structure Interaction Simulations

    NASA Astrophysics Data System (ADS)

    Lieberknecht, Erika; Sheldon, Jason; Pitt, Jonathan

    2013-11-01

    Simulations of fluid-structure interaction problems with high Reynolds number flows are typically approached with partitioned algorithms that leverage the robustness of traditional finite volume method based CFD techniques for flows of this nature. However, such partitioned algorithms are subject to many sub-iterations per simulation time-step, which substantially increases the computational cost when a tightly coupled solution is desired. To address this issue, we present a finite element method based monolithic algorithm for fluid-structure interaction problems with high Reynolds number flow. The use of a monolithic algorithm will potentially reduce the computational cost during each time-step, but requires that all of the governing equations be simultaneously cast in a single Arbitrary Lagrangian-Eulerian (ALE) frame of reference and subjected to the same discretization strategy. The formulation for the fluid solution is stabilized by implementing a Streamline Upwind Petrov-Galerkin (SUPG) method and a projection method for equal order interpolation of all of the solution unknowns; numerical and programming details are discussed. Preliminary convergence studies and numerical investigations are presented to demonstrate the algorithm's robustness and performance. The authors acknowledge support for this project from the Applied Research Laboratory Eric Walker Graduate Fellowship Program.

  15. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

    We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, forgoing the need to compute the angular flux spatial moments and thereby eliminating the need for sweeping the spatial mesh in each discrete angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  16. Design optimization of a high specific speed Francis turbine runner

    NASA Astrophysics Data System (ADS)

    Enomoto, Y.; Kurosawa, S.; Kawajiri, H.

    2012-11-01

    The Francis turbine is used in many hydroelectric power stations. This paper presents the development of the hydraulic performance of a high specific speed Francis turbine runner. In order to achieve improvements in turbine efficiency throughout a wide operating range, a new runner design method, which combines the latest Computational Fluid Dynamics (CFD) and a multi-objective optimization method with an existing design system, was applied in this study. The validity of the new design system was evaluated by model performance tests. As a result, it was confirmed that the optimized runner presented higher efficiency compared with the originally designed runner. Besides the optimization of the runner, the instability vibration that occurred at high part-load operating conditions was investigated by model tests and gas-liquid two-phase flow analysis. It was confirmed that the instability vibration was caused by an oval-cross-section whirl resulting from recirculating flow near the runner cone wall.

  17. MS Amanda, a Universal Identification Algorithm Optimized for High Accuracy Tandem Mass Spectra

    PubMed Central

    2014-01-01

    Today’s highly accurate spectra provided by modern tandem mass spectrometers offer considerable advantages for the analysis of proteomic samples of increased complexity. Among other factors, the quantity of reliably identified peptides is considerably influenced by the peptide identification algorithm. While most widely used search engines were developed when high-resolution mass spectrometry data were not readily available for fragment ion masses, we have designed a scoring algorithm particularly suitable for high mass accuracy. Our algorithm, MS Amanda, is generally applicable to HCD, ETD, and CID fragmentation type data. The algorithm confidently explains more spectra at the same false discovery rate than Mascot or SEQUEST on examined high mass accuracy data sets, with excellent overlap and identical peptide sequence identification for most spectra also explained by Mascot or SEQUEST. MS Amanda, available at http://ms.imp.ac.at/?goto=msamanda, is provided free of charge both as standalone version for integration into custom workflows and as a plugin for the Proteome Discoverer platform. PMID:24909410

  18. CONTUR: A FORTRAN ALGORITHM FOR TWO-DIMENSIONAL HIGH-QUALITY CONTOURING

    EPA Science Inventory

    The contouring algorithm described allows one to produce high-quality two-dimensional contour diagrams from values of a dependent variable located on a uniform grid system (i.e., spacing of nodal points in x and y directions is constant). Computer subroutines (supplied) were deve...

  19. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  20. High-precision position-specific isotope analysis

    PubMed Central

    Corso, Thomas N.; Brenna, J. Thomas

    1997-01-01

    Intramolecular carbon isotope distributions reflect details of the origin of organic compounds and may record the status of complex systems, such as environmental or physiological states. A strategy is reported here for high-precision determination of 13C/12C ratios at specific positions in organic compounds separated from complex mixtures. Free radical fragmentation of methyl palmitate, a test compound, is induced by an open tube furnace. Two series of peaks corresponding to bond breaking from each end of the molecule are analyzed by isotope ratio mass spectrometry and yield precisions of SD(δ13C) < 0.4‰. Isotope labeling in the carboxyl, terminal, and methyl positions demonstrates the absence of rearrangement during activation and fragmentation. Negligible isotopic fractionation was observed as degree of fragmentation was adjusted by changing pyrolysis temperature. [1-13C]methyl palmitate with overall δ13C = 4.06‰, yielded values of +457‰ for the carboxyl position, in agreement with expectations from the dilution, and an average of −27.95‰ for the rest of the molecule, corresponding to −27.46‰ for the olefin series. These data demonstrate the feasibility of automated high-precision position-specific analysis of carbon for molecules contained in complex mixtures. PMID:11038597

  1. Efficiency Analysis of a High-Specific Impulse Hall Thruster

    NASA Technical Reports Server (NTRS)

    Jacobson, David (Technical Monitor); Hofer, Richard R.; Gallimore, Alec D.

    2004-01-01

    Performance and plasma measurements of the high-specific impulse NASA-173Mv2 Hall thruster were analyzed using a phenomenological performance model that accounts for a partially-ionized plasma containing multiply-charged ions. Between discharge voltages of 300 to 900 V, the results showed that although the net decrease of efficiency due to multiply-charged ions was only 1.5 to 3.0 percent, the effects of multiply-charged ions on the ion and electron currents could not be neglected. Between 300 to 900 V, the increase of the discharge current was attributed to the increasing fraction of multiply-charged ions, while the maximum deviation of the electron current from its average value was only +5/-14 percent. These findings revealed how efficient operation at high-specific impulse was enabled through the regulation of the electron current with the applied magnetic field. Between 300 to 900 V, the voltage utilization ranged from 89 to 97 percent, the mass utilization from 86 to 90 percent, and the current utilization from 77 to 81 percent. Therefore, the anode efficiency was largely determined by the current utilization. The electron Hall parameter was nearly constant with voltage, decreasing from an average of 210 at 300 V to an average of 160 between 400 to 900 V. These results confirmed our claim that efficient operation can be achieved only over a limited range of Hall parameters.

  2. A quasi-Newton acceleration for high-dimensional optimization algorithms

    PubMed Central

    Alexander, David; Lange, Kenneth

    2010-01-01

    In many statistical problems, maximum likelihood estimation by an EM or MM algorithm suffers from excruciatingly slow convergence. This tendency limits the application of these algorithms to modern high-dimensional problems in data mining, genomics, and imaging. Unfortunately, most existing acceleration techniques are ill-suited to complicated models involving large numbers of parameters. The squared iterative methods (SQUAREM) recently proposed by Varadhan and Roland constitute one notable exception. This paper presents a new quasi-Newton acceleration scheme that requires only modest increments in computation per iteration and overall storage and rivals or surpasses the performance of SQUAREM on several representative test problems. PMID:21359052

  3. Application of a finite element algorithm for high speed viscous flows using structured and unstructured meshes

    NASA Technical Reports Server (NTRS)

    Vemaganti, Gururaja R.; Wieting, Allan R.

    1990-01-01

    A higher-order streamline upwinding Petrov-Galerkin finite element method is employed for high speed viscous flow analysis using structured and unstructured meshes. For a Mach 8.03 shock interference problem, successive mesh adaptation was performed using an adaptive remeshing method. Results from the finite element algorithm compare well with both experimental data and results from an upwind cell-centered method. Finite element results for a Mach 14.1 flow over a 24 degree compression corner compare well with experimental data and two other numerical algorithms for both structured and unstructured meshes.

  4. High-accuracy spectral reduction algorithm for the échelle spectrometer.

    PubMed

    Yin, Lu; Bayanheshig; Yang, Jin; Lu, Yuxian; Zhang, Rui; Sun, Ci; Cui, Jicheng

    2016-05-01

    A spectral reduction algorithm for an échelle spectrometer with spherical mirrors that builds a one-to-one correspondence between the wavelength and pixel position is proposed. The algorithm accuracy is improved by calculating the offset distance of the principal ray from the center of the image plane in the two-dimensional vertical direction and compensating the spectral line bending from the reflecting prism. The simulation and experimental results verify that the maximum deviation of the entire image plane is less than one pixel. This algorithm ensures that the wavelengths calculated from spectrograms have a high spectral resolution, meaning the precision from the spectral analysis reaches engineering standards of practice. PMID:27140373

  5. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve a reduction in fuel consumption, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  6. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Jerome, Joseph; Osher, Stanley

    1989-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  7. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms.

    PubMed

    Wang, C L

    2016-05-01

    Three high-resolution positioning methods based on the FluoroBancroft linear-algebraic method [S. B. Andersson, Opt. Express 16, 18714 (2008)] are proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function, the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. After taking the super-Poissonian photon noise into account, the proposed algorithms give an average position error of 0.03-0.08 pixel, much smaller than that (0.29 pixel) of a traditional maximum-photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. These improvements will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis. PMID:27250410
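    A simplified numpy comparison of an integer-pixel maximum-photon estimate with a sub-pixel centroid-style estimate on a synthetic photon-number profile; this only illustrates why analytic sub-pixel estimators outperform the MPA and does not reproduce the FluoroBancroft method:

    ```python
    import numpy as np

    # Synthetic photon-number profile across x-pixels for one neutron event,
    # with a Gaussian light-response function plus Poisson photon noise.
    rng = np.random.default_rng(2)
    pixels = np.arange(32)
    true_pos, sigma, n_photons = 13.37, 2.0, 400.0
    profile = rng.poisson(n_photons * np.exp(-0.5 * ((pixels - true_pos) / sigma) ** 2))

    mpa_estimate = pixels[np.argmax(profile)]                    # maximum photon algorithm: integer pixel
    centroid_estimate = np.sum(pixels * profile) / profile.sum() # sub-pixel weighted centroid

    print("true:", true_pos, "MPA:", mpa_estimate, "centroid:", round(centroid_estimate, 3))
    ```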

  8. Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra

    NASA Technical Reports Server (NTRS)

    Spanos, P. D.; Mushung, L. J.

    1990-01-01

    High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive-moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum, which yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices, each associated with a rather distinguishable phase of the lift-off event where stationarity can be expected. The presented results are rather preliminary in nature; they are intended to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.
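    As an illustration of the AR/ARMA modeling referred to above, a stationary slice of a vibration-like signal can be fitted with statsmodels; the synthetic data, sampling rate and model orders are placeholders, not Shuttle data:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Fit an ARMA(4, 2) model to a synthetic stationary "slice" of vibration-like data.
    rng = np.random.default_rng(3)
    n, fs = 4096, 1000.0
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 60.0 * t) + 0.5 * np.sin(2 * np.pi * 140.0 * t) + rng.normal(0, 0.8, n)

    model = ARIMA(x, order=(4, 0, 2)).fit()   # order=(p, d, q); d=0 means no differencing, i.e. ARMA
    print(model.params)                       # fitted AR and MA coefficients plus noise variance
    ```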

  9. A high fuel consumption efficiency management scheme for PHEVs using an adaptive genetic algorithm.

    PubMed

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve a reduction in fuel consumption, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  10. Algorithm of lithography advanced process control system for high-mix low-volume products

    NASA Astrophysics Data System (ADS)

    Kawamura, Eiichi

    2007-03-01

    We have proposed a new algorithm for a lithography advanced process control (APC) system for high-mix, low-volume production. This algorithm works well for the first lot of a new device introduced into the production line, or the first lot of an existing device to be exposed with a newly introduced exposure tool. The algorithm consists of 1) searching for the most suitable trend among other, similar devices by referring to an attribute table and a look-up table that defines the search priority, and 2) correcting for differences between the two devices in order to decide the optimum exposure conditions. The attribute table categorizes identical layers across different devices and similar layers within a device; the look-up table describes the order of the search keys. To attain a cost-effective process control system, information useful for compensating the referenced trend is compiled into the database.

  11. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES Beta

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.

  12. An infrared small target detection algorithm based on high-speed local contrast method

    NASA Astrophysics Data System (ADS)

    Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao

    2016-05-01

    Small-target detection in infrared imagery with a complex background is always an important task in remote sensing fields. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed. However, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the Human Visual System (HVS), is proposed to balance those detection capabilities. The first layer uses a high-speed simplified local contrast method to select significant information. The second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate and speed simultaneously.
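
    As an illustration of the first (local contrast) layer, the sketch below computes one common formulation of a local-contrast map over 3x3 blocks of cells and thresholds it to produce target candidates; the paper's simplified high-speed variant and its second-layer machine-learning classifier are not reproduced, and the toy image and threshold are assumptions.

```python
import numpy as np

def local_contrast_map(img, cell=3):
    """One common local-contrast measure: for each 3x3 block of cells, the centre cell's
    peak value squared divided by the largest mean of the eight surrounding cells."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for r in range(cell, h - 2 * cell):
        for c in range(cell, w - 2 * cell):
            centre = img[r:r + cell, c:c + cell]
            peak = centre.max()
            means = [
                img[r + dr * cell:r + (dr + 1) * cell,
                    c + dc * cell:c + (dc + 1) * cell].mean()
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
            ]
            out[r + cell // 2, c + cell // 2] = peak ** 2 / (max(means) + 1e-6)
    return out

# Toy frame: dim background with one bright 2x2 "target".
frame = np.full((40, 40), 10.0)
frame[20:22, 25:27] = 60.0
contrast = local_contrast_map(frame)
candidates = np.argwhere(contrast > contrast.mean() + 4 * contrast.std())
print(candidates)   # a second-stage classifier would prune these candidate pixels
```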

  13. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach has so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and we demonstrate improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
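
    A minimal sketch of the partition-then-model idea on a toy spectrum: the signal is split into fragments at near-baseline stretches and each fragment is summarised by a moment-matched Gaussian. The published algorithm fits full Gaussian mixture models to each fragment by EM and then aggregates them; the single-Gaussian summary and the splitting threshold used here are simplifying assumptions.

```python
import numpy as np

def split_at_quiet_regions(mz, intensity, floor_fraction=0.01):
    """Partition a spectrum into fragments separated by near-baseline stretches."""
    active = intensity > floor_fraction * intensity.max()
    fragments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            fragments.append((start, i))
            start = None
    if start is not None:
        fragments.append((start, len(mz)))
    return fragments

def gaussian_summary(mz, intensity):
    """Moment-matched single Gaussian (mean, sigma, area) for one fragment."""
    area = intensity.sum()
    mean = np.average(mz, weights=intensity)
    sigma = np.sqrt(np.average((mz - mean) ** 2, weights=intensity))
    return mean, sigma, area

# Toy spectrum with two well-separated peaks.
mz = np.linspace(1000, 1020, 2001)
spectrum = (300 * np.exp(-(mz - 1004) ** 2 / (2 * 0.05 ** 2))
            + 120 * np.exp(-(mz - 1015) ** 2 / (2 * 0.08 ** 2)))

model = [gaussian_summary(mz[a:b], spectrum[a:b])
         for a, b in split_at_quiet_regions(mz, spectrum)]
for mean, sigma, area in model:
    print(f"peak at m/z {mean:.2f}, sigma {sigma:.3f}, area {area:.0f}")
```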

  14. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need to have an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
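
    As a reference point for the low-complexity demappers discussed above, the sketch below computes brute-force max-log LLRs for a Gray-mapped 16-QAM symbol; the per-axis Gray labeling, normalisation and sign convention (positive LLR favours bit 0) are common choices rather than necessarily those of the paper, and the simplified architectures it evaluates would approximate this computation with far fewer operations.

```python
import numpy as np
from itertools import product

# Gray-mapped 16-QAM: two bits select the I level, two bits select the Q level.
LEVELS = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}   # Gray coding per axis
CONSTELLATION = {bits: complex(LEVELS[bits[:2]], LEVELS[bits[2:]])
                 for bits in product((0, 1), repeat=4)}
SCALE = 1 / np.sqrt(10)                                           # unit average symbol energy

def maxlog_llrs(y, noise_var):
    """Max-log LLRs for one received symbol y (positive LLR favours bit = 0)."""
    llrs = []
    for k in range(4):
        d0 = min(abs(y - SCALE * s) ** 2 for b, s in CONSTELLATION.items() if b[k] == 0)
        d1 = min(abs(y - SCALE * s) ** 2 for b, s in CONSTELLATION.items() if b[k] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

tx_bits = (1, 0, 0, 1)
y = SCALE * CONSTELLATION[tx_bits] + 0.05 * (0.3 - 0.2j)   # lightly perturbed symbol
print([round(l, 2) for l in maxlog_llrs(y, noise_var=0.01)])
```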

  15. A Very-High-Specific-Impulse Relativistic Laser Thruster

    SciTech Connect

    Horisawa, Hideyuki; Kimura, Itsuro

    2008-04-28

    The characteristics of compact laser plasma accelerators utilizing high-power laser and thin-target interaction were reviewed as potential candidates for future spacecraft thrusters capable of generating relativistic plasma beams for interstellar missions. Based on the special theory of relativity, the motion of the relativistic plasma beam exhausted from the thruster was formulated. Relationships between thrust, specific impulse, input power and momentum coupling coefficient for the relativistic plasma thruster were derived. It was shown that under relativistic conditions, the thrust could be extremely large even with a small propellant flow rate. Moreover, it was shown that for a given input power, the thrust tended to approach that of the photon rocket under relativistic conditions regardless of the propellant flow rate.

  16. High Specificity in CheR Methyltransferase Function

    PubMed Central

    García-Fontana, Cristina; Reyes-Darias, José Antonio; Muñoz-Martínez, Francisco; Alfonso, Carlos; Morel, Bertrand; Ramos, Juan Luis; Krell, Tino

    2013-01-01

    Chemosensory pathways are a major signal transduction mechanism in bacteria. CheR methyltransferases catalyze the methylation of the cytosolic signaling domain of chemoreceptors and are among the core proteins of chemosensory cascades. These enzymes have primarily been studied in Escherichia coli and Salmonella typhimurium, which possess a single CheR involved in chemotaxis. Many other bacteria possess multiple cheR genes. Because the sequences of chemoreceptor signaling domains are highly conserved, it remains to be established with what degree of specificity CheR paralogues exert their activity. We report here a comparative analysis of the three CheR paralogues of Pseudomonas putida. Isothermal titration calorimetry studies show that these paralogues bind the product of the methylation reaction, S-adenosylhomocysteine, with much higher affinity (KD of 0.14–2.2 μM) than the substrate S-adenosylmethionine (KD of 22–43 μM), which indicates product feedback inhibition. Product binding was particularly tight for CheR2. Analytical ultracentrifugation experiments demonstrate that CheR2 is monomeric in the absence and presence of S-adenosylmethionine or S-adenosylhomocysteine. Methylation assays show that CheR2, but not the other paralogues, methylates the McpS and McpT chemotaxis receptors. The CheR2 mutant was deficient in chemotaxis, whereas mutation of CheR1 and CheR3 had either no or little effect on chemotaxis. In contrast, biofilm formation was largely impaired in the CheR1 mutant but not affected in the other mutants. We conclude that CheR2 forms part of a chemotaxis pathway, and CheR1 forms part of a chemosensory route that controls biofilm formation. The data suggest that CheR methyltransferases act with high specificity on their cognate chemoreceptors. PMID:23677992

  17. Teaching a Machine to Feel Postoperative Pain: Combining High-Dimensional Clinical Data with Machine Learning Algorithms to Forecast Acute Postoperative Pain

    PubMed Central

    Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.

    2015-01-01

    Background Given their ability to process highly dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy with an area under the receiver-operating curve (ROC) of 0.704. Next, the gradient-boosted decision tree had an ROC of 0.665 and the k-nearest neighbor algorithm had an ROC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an ROC of 0.727. Logistic regression had a lower ROC of 0.5 for predicting pain outcomes on POD 1 and 3. Conclusions Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
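
    The study's 796 clinical variables are not available, so the sketch below only reproduces the modelling pattern on synthetic data: an L1-penalised ("LASSO"-style) logistic regression fitted to a wide feature matrix and scored by area under the ROC curve, with the number of retained variables reported. The data generator and regularisation strength are arbitrary choices for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a wide clinical table: many variables, few truly informative.
X, y = make_classification(n_samples=4000, n_features=500, n_informative=25,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# L1-penalised ("LASSO") logistic regression; C controls the strength of shrinkage.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
kept = np.count_nonzero(model.coef_)
print(f"AUC {auc:.3f}, variables retained {kept} of {X.shape[1]}")
```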

  18. Identification by ultrasound evaluation of the carotid and femoral arteries of high-risk subjects missed by three validated cardiovascular disease risk algorithms.

    PubMed

    Postley, John E; Luo, Yanting; Wong, Nathan D; Gardin, Julius M

    2015-11-15

    Atherosclerotic cardiovascular disease (ASCVD) events are the leading cause of death in the United States and globally. Traditional global risk algorithms may miss 50% of patients who experience ASCVD events. Noninvasive ultrasound evaluation of the carotid and femoral arteries can identify subjects at high risk for ASCVD events. We examined the ability of different global risk algorithms to identify subjects with femoral and/or carotid plaques found by ultrasound. The study population consisted of 1,464 asymptomatic adults (39.8% women) aged 23 to 87 years without previous evidence of ASCVD who had ultrasound evaluation of the carotid and femoral arteries. Three ASCVD risk algorithms (10-year Framingham Risk Score [FRS], 30-year FRS, and lifetime risk) were compared for the 939 subjects who met the algorithm age criteria. The frequency of femoral plaque as the only plaque was 18.3% in the total group and 14.8% in the risk algorithm groups (n = 939), without a significant difference between genders in the frequency of femoral plaque as the only plaque. Those identified as high risk by the lifetime risk algorithm included the most men and women who had either femoral or carotid plaques (59% and 55%), but this algorithm had lower specificity because the proportion of subjects who actually had plaques in the high-risk group was lower (50% and 35%) than in those at high risk defined by the FRS algorithms. In conclusion, ultrasound evaluation of the carotid and femoral arteries can identify subjects at risk of ASCVD events missed by traditional risk-predicting algorithms. The large proportion of subjects with femoral plaque only supports the inclusion of both femoral and carotid arteries in ultrasound evaluation. PMID:26434511

  19. High-resolution climate data over conterminous US using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Hashimoto, H.; Nemani, R. R.; Wang, W.

    2014-12-01

    We developed a new methodology to create high-resolution precipitation data using the random forest algorithm. We have used two approaches: physical downscaling from GCM data using a regional climate model, and interpolation from ground observation data. The physical downscaling method can be applied only to a small region because it is computationally expensive and complex to deploy. On the other hand, interpolation schemes from ground observations do not consider physical processes. In this study, we utilized the random forest algorithm to integrate atmospheric reanalysis data, satellite data, topography data, and ground observation data. First we considered situations where precipitation is the same across the domain, largely dominated by storm-like systems. We then picked several points to train the random forest algorithm. The random forest algorithm estimates out-of-bag errors spatially and produces the relative importance of each input variable. This methodology has the following advantages. (1) The methodology can ingest any spatial dataset to improve downscaling. Even non-precipitation datasets can be ingested, such as satellite cloud cover data, radar reflectivity images, or modeled convective available potential energy. (2) The methodology is purely statistical, so physical assumptions are not required. In contrast, most interpolation schemes assume an empirical relationship between precipitation and elevation for orographic precipitation. (3) Low-quality values in the ingested data do not cause critical bias in the results because of the ensemble nature of the random forest. Therefore, users do not need to pay special attention to quality control of the input data compared to other interpolation methodologies. (4) The same methodology can be applied to produce other high-resolution climate datasets, such as wind and cloud cover. Those variables are usually hard to interpolate with conventional algorithms. In conclusion, the proposed methodology can produce reasonable
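
    A minimal sketch of the described workflow with synthetic stand-ins for the reanalysis, satellite and topography predictors: a random forest is trained on point "observations", and the out-of-bag error and relative variable importances mentioned in the abstract are reported. The synthetic relationship between predictors and precipitation is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 5000
# Synthetic predictor stack: reanalysis precipitation, satellite cloud fraction, elevation.
reanalysis = rng.gamma(2.0, 2.0, n)
cloud = np.clip(rng.normal(0.5, 0.2, n), 0, 1)
elevation = rng.uniform(0, 3000, n)
X = np.column_stack([reanalysis, cloud, elevation])
# Synthetic "station" precipitation with an orographic enhancement plus noise.
y = reanalysis * (1 + 0.0003 * elevation) * cloud + rng.normal(0, 0.3, n)

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X, y)

print("out-of-bag R^2:", round(rf.oob_score_, 3))
for name, imp in zip(["reanalysis", "cloud", "elevation"], rf.feature_importances_):
    print(f"{name:10s} importance {imp:.2f}")
```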

  20. High-dimensional propensity score algorithm in comparative effectiveness research with time-varying interventions.

    PubMed

    Neugebauer, Romain; Schmittdiel, Julie A; Zhu, Zheng; Rassen, Jeremy A; Seeger, John D; Schneeweiss, Sebastian

    2015-02-28

    The high-dimensional propensity score (hdPS) algorithm was proposed for automation of confounding adjustment in problems involving large healthcare databases. It has been evaluated in comparative effectiveness research (CER) with point treatments to handle baseline confounding through matching or covariance adjustment on the hdPS. In observational studies with time-varying interventions, such hdPS approaches are often inadequate to handle time-dependent confounding and selection bias. Inverse probability weighting (IPW) estimation to fit marginal structural models can adequately handle these biases under the fundamental assumption of no unmeasured confounders. Upholding of this assumption relies on the selection of an adequate set of covariates for bias adjustment. We describe the application and performance of the hdPS algorithm to improve covariate selection in CER with time-varying interventions based on IPW estimation and explore stabilization of the resulting estimates using Super Learning. The evaluation is based on both the analysis of electronic health records data in a real-world CER study of adults with type 2 diabetes and a simulation study. This report (i) establishes the feasibility of IPW estimation with the hdPS algorithm based on large electronic health records databases, (ii) demonstrates little impact on inferences when supplementing the set of expert-selected covariates using the hdPS algorithm in a setting with extensive background knowledge, (iii) supports the application of the hdPS algorithm in discovery settings with little background knowledge or limited data availability, and (iv) motivates the application of Super Learning to stabilize effect estimates based on the hdPS algorithm. PMID:25488047
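
    The sketch below shows only the basic inverse-probability-weighting step for a point treatment on synthetic data: a propensity-score model followed by stabilised weights that would feed a weighted (marginal structural model) outcome regression. The hdPS covariate-selection step, the time-varying treatment structure and the Super Learning stabilisation discussed in the paper are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic covariates and a treatment whose assignment depends on them (confounding).
X, treated = make_classification(n_samples=3000, n_features=20, n_informative=5,
                                 random_state=0)

# Propensity score: estimated probability of treatment given covariates.
ps_model = LogisticRegression(max_iter=1000).fit(X, treated)
ps = ps_model.predict_proba(X)[:, 1]

# Stabilised inverse-probability-of-treatment weights.
p_treated = treated.mean()
weights = np.where(treated == 1, p_treated / ps, (1 - p_treated) / (1 - ps))

print("weight range:", round(weights.min(), 2), "-", round(weights.max(), 2))
# These weights would then be used in a weighted outcome regression (an MSM fit).
```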

  1. Nanoporous ultra-high specific surface inorganic fibres

    NASA Astrophysics Data System (ADS)

    Kanehata, Masaki; Ding, Bin; Shiratori, Seimei

    2007-08-01

    Nanoporous inorganic (silica) nanofibres with ultra-high specific surface have been fabricated by electrospinning the blend solutions of poly(vinyl alcohol) (PVA) and colloidal silica nanoparticles, followed by selective removal of the PVA component. The configurations of the composite and inorganic nanofibres were investigated by changing the average silica particle diameters and the concentrations of colloidal silica particles in polymer solutions. After the removal of PVA by calcination, the fibre shape of pure silica particle assembly was maintained. The nanoporous silica fibres were assembled as a porous membrane with a high surface roughness. From the results of Brunauer-Emmett-Teller (BET) measurements, the BET surface area of inorganic silica nanofibrous membranes was increased with the decrease of the particle diameters. The membrane composed of silica particles with diameters of 15 nm showed the largest BET surface area of 270.3 m2 g-1 and total pore volume of 0.66 cm3 g-1. The physical absorption of methylene blue dye molecules by nanoporous silica membranes was examined using UV-vis spectrometry. Additionally, the porous silica membranes modified with fluoroalkylsilane showed super-hydrophobicity due to their porous structures.

  2. Plasmoid Thruster for High Specific-Impulse Propulsion

    NASA Technical Reports Server (NTRS)

    Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael

    2007-01-01

    A report discusses a new multi-turn, multi-lead design for the first generation PT-1 (Plasmoid Thruster) that produces thrust by expelling plasmas with embedded magnetic fields (plasmoids) at high velocities. This thruster is completely electrodeless, capable of using in-situ resources, and offers efficiencies as high as 70 percent at a specific impulse, I(sub sp), of up to 8,000 s. This unit consists of drive and bias coils wound around a ceramic form, and the capacitor bank and switches are an integral part of the assembly. Multiple thrusters may be gauged to inductively recapture unused energy to boost efficiency and to increase the repetition rate, which, in turn increases the average thrust of the system. The thruster assembly can use storable propellants such as H2O, ammonia, and NO, among others. Any available propellant gases can be used to produce an I(sub sp) in the range of 2,000 to 8,000 s with a single-stage thruster. These capabilities will allow the transport of greater payloads to outer planets, especially in the case of an I(sub sp) greater than 6,000 s.

  3. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, classic Monte Carlo often remains the preferred method because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
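
    The second algorithm's control-variate idea can be illustrated in a few lines: a cheap reduced model g stands in for the inverse-regression-based surrogate, its mean is computed from a large inexpensive sample, and the Monte Carlo estimate of the expensive model f is corrected by the sample mean of f - g. The toy models below are invented so that g captures the dominant low-dimensional variation of f.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 20, 2000

def f(x):
    """'Expensive' high-dimensional model, effectively driven by one latent direction."""
    t = x[:, :5].sum(axis=1)
    return np.sin(t) + 0.01 * x[:, 5:].sum(axis=1) ** 2

def g(x):
    """Cheap reduced model along the dominant direction (the control variate)."""
    return np.sin(x[:, :5].sum(axis=1))

# E[g] computed very accurately from a large, inexpensive sample of the reduced model.
mu_g = g(rng.standard_normal((200_000, dim))).mean()

x = rng.standard_normal((n, dim))
plain_mc = f(x).mean()
control_variate = (f(x) - g(x)).mean() + mu_g   # lower-variance estimate of E[f]

print(f"plain MC {plain_mc:.4f}   control-variate estimate {control_variate:.4f}")
```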

  4. Structure of a highly NADP+-specific isocitrate dehydrogenase.

    PubMed

    Sidhu, Navdeep S; Delbaere, Louis T J; Sheldrick, George M

    2011-10-01

    Isocitrate dehydrogenase catalyzes the first oxidative and decarboxylation steps in the citric acid cycle. It also lies at a crucial bifurcation point between CO2-generating steps in the cycle and carbon-conserving steps in the glyoxylate bypass. Hence, the enzyme is a focus of regulation. The bacterial enzyme is typically dependent on the coenzyme nicotinamide adenine dinucleotide phosphate. The monomeric enzyme from Corynebacterium glutamicum is highly specific towards this coenzyme and the substrate isocitrate while retaining a high overall efficiency. Here, a 1.9 Å resolution crystal structure of the enzyme in complex with its coenzyme and the cofactor Mg2+ is reported. Coenzyme specificity is mediated by interactions with the negatively charged 2'-phosphate group, which is surrounded by the side chains of two arginines, one histidine and, via a water, one lysine residue, forming ion pairs and hydrogen bonds. Comparison with a previous apoenzyme structure indicates that the binding site is essentially preconfigured for coenzyme binding. In a second enzyme molecule in the asymmetric unit negatively charged aspartate and glutamate residues from a symmetry-related enzyme molecule interact with the positively charged arginines, abolishing coenzyme binding. The holoenzyme from C. glutamicum displays a 36° interdomain hinge-opening movement relative to the only previous holoenzyme structure of the monomeric enzyme: that from Azotobacter vinelandii. As a result, the active site is not blocked by the bound coenzyme as in the closed conformation of the latter, but is accessible to the substrate isocitrate. However, the substrate-binding site is disrupted in the open conformation. Hinge points could be pinpointed for the two molecules in the same crystal, which show a 13° hinge-bending movement relative to each other. One of the two pairs of hinge residues is intimately flanked on both sides by the isocitrate-binding site. This suggests that binding of a relatively

  5. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  6. Closed loop, DM diversity-based, wavefront correction algorithm for high contrast imaging systems.

    PubMed

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-09-17

    High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^(-10) for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region of the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling. PMID:19547602

  7. Supercomputer implementation of finite element algorithms for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.

    1986-01-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock-capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extensions of the vectorization procedure for predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures to realistic problems that require hundreds of thousands of nodes.

  8. Algorithm for Automatic Behavior Quantification of Laboratory Mice Using High-Frame-Rate Videos

    NASA Astrophysics Data System (ADS)

    Nie, Yuman; Takaki, Takeshi; Ishii, Idaku; Matsuda, Hiroshi

    In this paper, we propose an algorithm for automatic behavior quantification in laboratory mice to quantify several model behaviors. The algorithm can detect repetitive motions of the fore- or hind-limbs at several or dozens of hertz, which are too rapid for the naked eye, from high-frame-rate video images. Multiple repetitive motions can always be identified from periodic frame-differential image features in four segmented regions — the head, left side, right side, and tail. Even when a mouse changes its posture and orientation relative to the camera, these features can still be extracted from the shift- and orientation-invariant shape of the mouse silhouette by using the polar coordinate system and adjusting the angle coordinate according to the head and tail positions. The effectiveness of the algorithm is evaluated by analyzing long-term 240-fps videos of four laboratory mice for six typical model behaviors: moving, rearing, immobility, head grooming, left-side scratching, and right-side scratching. The time durations for the model behaviors determined by the algorithm have detection/correction ratios greater than 80% for all the model behaviors. This shows good quantification results for actual animal testing.

  9. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. The choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
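
    A schematic helper encoding only the decision logic reviewed above: small water cavities convert with a mass-collision-stopping-power ratio, large cavities with a mass-energy-absorption-coefficient ratio, and intermediate cavities with a Burlin-type weighting. The numerical ratios in the example call are placeholders, not values from the paper.

```python
def dose_to_water_in_medium(d_med, cavity, s_ratio, mu_en_ratio, d_weight=None):
    """Convert dose to medium into dose to a water cavity of a given size class.

    s_ratio      -- (S/rho)_water/medium, mass collision stopping-power ratio (small cavity)
    mu_en_ratio  -- (mu_en/rho)_water/medium, energy-absorption coefficient ratio (large cavity)
    d_weight     -- Burlin parameter d in [0, 1] for intermediate cavities
    """
    if cavity == "small":
        return d_med * s_ratio
    if cavity == "large":
        return d_med * mu_en_ratio
    if cavity == "intermediate":
        if d_weight is None:
            raise ValueError("intermediate cavities need the Burlin weighting d")
        return d_med * (d_weight * s_ratio + (1 - d_weight) * mu_en_ratio)
    raise ValueError("cavity must be 'small', 'large' or 'intermediate'")

# Placeholder ratios, not data from the paper:
print(dose_to_water_in_medium(1.0, "small", s_ratio=1.03, mu_en_ratio=1.07))
print(dose_to_water_in_medium(1.0, "intermediate", 1.03, 1.07, d_weight=0.4))
```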

  10. Parallel technology for numerical modeling of fluid dynamics problems by high-accuracy algorithms

    NASA Astrophysics Data System (ADS)

    Gorobets, A. V.

    2015-04-01

    A parallel computation technology for modeling fluid dynamics problems by finite-volume and finite-difference methods of high accuracy is presented. The development of an algorithm, the design of a software implementation, and the creation of parallel programs for computations on large-scale computing systems are considered. The presented parallel technology is based on a multilevel parallel model combining various types of parallelism: with shared and distributed memory and with multiple and single instruction streams to multiple data flows.

  11. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 ∗ 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
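
    For reference, the sketch below is the classic two-pass connected component labeling with union-find on a small binary image (4-connectivity); the FPGA design described above is a pipelined, memory-optimised variant of this idea, which the sketch does not attempt to reproduce.

```python
import numpy as np

def two_pass_ccl(binary):
    """Classic two-pass connected component labeling (4-connectivity) with union-find."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                       # parent[i] = representative of label i

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    next_label = 1
    for r in range(h):                 # first pass: provisional labels + equivalences
        for c in range(w):
            if not binary[r, c]:
                continue
            up = labels[r - 1, c] if r else 0
            left = labels[r, c - 1] if c else 0
            neighbours = [l for l in (up, left) if l]
            if not neighbours:
                parent.append(next_label)
                labels[r, c] = next_label
                next_label += 1
            else:
                labels[r, c] = min(neighbours)
                if len(neighbours) == 2:
                    a, b = find(neighbours[0]), find(neighbours[1])
                    parent[max(a, b)] = min(a, b)      # record equivalence
    for r in range(h):                 # second pass: resolve to union-find roots
        for c in range(w):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels                      # final labels are roots, not necessarily consecutive

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 0],
                [1, 0, 1, 1, 0]])
print(two_pass_ccl(img))
```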

  12. Streptococcal C5a peptidase is a highly specific endopeptidase.

    PubMed Central

    Cleary, P P; Prahbu, U; Dale, J B; Wexler, D E; Handley, J

    1992-01-01

    Compositional analysis of streptococcal C5a peptidase (SCPA) cleavage products from a synthetic peptide corresponding to the 20 C-terminal residues of C5a demonstrated that the target cleavage site is His-Lys rather than Lys-Asp, as previously suggested. A C5a peptide analog with Lys replaced by Gln was also subject to cleavage by SCPA. This confirmed that His-Lys rather than Lys-Asp is the scissile bond. Cleavage at histidine is unusual but is the same as that suggested for a peptidase produced by group B streptococci. Native C5 protein was also resistant to SCPA, suggesting that the His-Lys bond is inaccessible prior to proteolytic cleavage by C5 convertase. These experiments showed that the streptococcal C5a peptidase is highly specific for C5a and suggest that its function is not merely to process protein for metabolic consumption but to act primarily to eliminate this chemotactic signal from inflammatory foci. PMID:1452354

  13. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping

    2014-10-01

    Migration through resolution cells (MTRC) is generated in high-resolution inverse synthetic aperture radar (ISAR) imaging. An MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. Firstly, in the situation where a rigid-body target flies stably, the initial values of the rotation angle and center of the target are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution based on a minimum entropy criterion. The procedure starts with the estimated initial rotation angle and center, and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches. One is carried out along the rotation angle and the other along the rotation center. Each 1-D search is realized using the golden-section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied to compensate the MTRC using the optimized rotation angle and center. After MTRC compensation, the ISAR image can be best focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
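
    The sketch below illustrates one of the two 1-D searches: a golden-section search over the rotation angle that minimises image entropy. The polar-format imaging step is abstracted behind a hypothetical polar_format_image function (here a toy stand-in whose output sharpens near one angle), so only the search-and-criterion structure mirrors the described algorithm.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalised image intensity (lower = better focused)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def golden_section_minimise(objective, lo, hi, tol=1e-4):
    """Minimise a unimodal 1-D objective on [lo, hi] by golden-section search."""
    invphi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if objective(c) < objective(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# 'polar_format_image(echo, angle)' stands in for the PFA imaging step (hypothetical).
def polar_format_image(echo, angle):
    # Toy stand-in: the image becomes sharpest (lowest entropy) near angle = 0.1 rad.
    blur = 1 + 50 * (angle - 0.1) ** 2
    return np.exp(-np.linspace(-3, 3, 256) ** 2 / blur)

echo = None   # placeholder for real echo data
best_angle = golden_section_minimise(
    lambda ang: image_entropy(polar_format_image(echo, ang)), lo=0.0, hi=0.3)
print(f"estimated rotation angle: {best_angle:.4f} rad")
```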

  14. Figuring algorithm for high-gradient mirrors with axis-symmetrical removal function.

    PubMed

    Jiao, Changjun; Li, Shengyi; Xie, Xuhui; Chen, Shanyong; Wu, Dongliang; Kang, Nianhui

    2010-02-01

    Figuring technologies based on intracone and intercone stitching for high-gradient mirrors are discussed. Based on the conventional computer-controlled optics shaping principle, a process model for a single cone with intracone stitching is constructed. With the circular stitching property of the model, a modified Bayesian-based Richardson-Lucy (RL) algorithm is deduced to deconvolute the dwell time for a single cone. Building on this algorithm, with the introduction of intercone stitching, a process model for a complex cone is built. Then another modified Bayesian-based RL algorithm is deduced to deconvolute the dwell time for a complex cone from the properties of intracone and intercone stitching. With a velocity realization method for the dwell time on a spiral path of the cone and a determination criterion for the path parameter, figuring technologies for single and complex cones are presented. Simulation and experiment demonstrate that the theories and methods discussed can solve key problems in figuring high-gradient mirrors; the figuring technologies are novel methods for high-gradient mirrors and can be used to figure mirrors finely. PMID:20119004
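
    For orientation, the sketch below shows the standard Richardson-Lucy iteration for deconvolving a dwell-time distribution from a desired removal profile and a removal (influence) function in 1-D; the paper's Bayesian-based modifications for intracone and intercone stitching and the spiral-path velocity realization are not reproduced, and all profiles are synthetic.

```python
import numpy as np

def richardson_lucy_dwell(removal_target, influence, iterations=200):
    """Standard Richardson-Lucy iteration: find dwell time t such that t convolved with
    the influence function approximates the requested removal (all quantities >= 0)."""
    influence = influence / influence.sum()
    flipped = influence[::-1]
    dwell = np.full_like(removal_target, removal_target.mean())
    for _ in range(iterations):
        predicted = np.convolve(dwell, influence, mode="same")
        ratio = removal_target / np.maximum(predicted, 1e-12)
        dwell *= np.convolve(ratio, flipped, mode="same")
    return dwell

# Toy 1-D removal profile and a Gaussian removal (influence) function.
x = np.linspace(-1, 1, 201)
target = 1.0 + 0.5 * np.cos(np.pi * x)            # desired removal depth (arbitrary units)
influence = np.exp(-np.linspace(-0.3, 0.3, 31) ** 2 / (2 * 0.08 ** 2))

dwell = richardson_lucy_dwell(target, influence)
residual = target - np.convolve(dwell, influence / influence.sum(), mode="same")
print("peak-to-valley residual:", round(float(residual.max() - residual.min()), 4))
```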

  15. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel onto the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system performance in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals. PMID:26406525
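
    A plain NumPy sketch of the core idea: normalized cross-correlation evaluated only in a small window around the previous match position, which is the essence of a local search. Subpixel refinement, the camera interface and the paper's specific speed optimisations are omitted; the frames are synthetic.

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation between two equally sized arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track_local(frame, template, last_rc, radius=8):
    """Search only a (2*radius+1)^2 neighbourhood of the previous match position."""
    th, tw = template.shape
    best, best_rc = -2.0, last_rc
    r_lo, r_hi = max(0, last_rc[0] - radius), min(frame.shape[0] - th, last_rc[0] + radius)
    c_lo, c_hi = max(0, last_rc[1] - radius), min(frame.shape[1] - tw, last_rc[1] + radius)
    for r in range(r_lo, r_hi + 1):
        for c in range(c_lo, c_hi + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

# Toy frames: a textured template shifted by (2, -1) pixels between frames.
rng = np.random.default_rng(3)
frame0 = rng.normal(size=(120, 160))
template = frame0[50:66, 70:86].copy()
frame1 = np.roll(frame0, shift=(2, -1), axis=(0, 1))

pos, score = track_local(frame1, template, last_rc=(50, 70))
print("new position:", pos, "NCC:", round(score, 3))   # expect (52, 69)
```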

  16. High voltage and high specific capacity dual intercalating electrode Li-ion batteries

    NASA Technical Reports Server (NTRS)

    West, William C. (Inventor); Blanco, Mario (Inventor)

    2010-01-01

    The present invention provides high capacity and high voltage Li-ion batteries that have a carbonaceous cathode and a nonaqueous electrolyte solution comprising LiF salt and an anion receptor that binds the fluoride ion. The batteries can comprise dual intercalating electrode Li ion batteries. Methods of the present invention use a cathode and electrode pair, wherein each of the electrodes reversibly intercalate ions provided by a LiF salt to make a high voltage and high specific capacity dual intercalating electrode Li-ion battery. The present methods and systems provide high-capacity batteries particularly useful in powering devices where minimizing battery mass is important.

  17. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  18. Trajectory Specification for High-Capacity Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    2004-01-01

    In the current air traffic management system, the fundamental limitation on airspace capacity is the cognitive ability of human air traffic controllers to maintain safe separation with high reliability. The doubling or tripling of airspace capacity that will be needed over the next couple of decades will require that tactical separation be at least partially automated. Standardized conflict-free four-dimensional trajectory assignment will be needed to accomplish that objective. A trajectory specification format based on the Extensible Markup Language is proposed for that purpose. This format can be used to downlink a trajectory request, which can then be checked on the ground for conflicts and approved or modified, if necessary, then uplinked as the assigned trajectory. The horizontal path is specified as a series of geodetic waypoints connected by great circles, and the great-circle segments are connected by turns of specified radius. Vertical profiles for climb and descent are specified as low-order polynomial functions of along-track position, which is itself specified as a function of time. Flight technical error tolerances in the along-track, cross-track, and vertical axes define a bounding space around the reference trajectory, and conformance will guarantee the required separation for a period of time known as the conflict time horizon. An important safety benefit of this regimen is that the traffic will be able to fly free of conflicts for at least several minutes even if all ground systems and the entire communication infrastructure fail. Periodic updates in the along-track axis will adjust for errors in the predicted along-track winds.

  19. A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train

    PubMed Central

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run. PMID:22778582
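
    The paper's tracking differentiator is derived from nonlinear optimal control and its exact form is not given in the abstract; the sketch below is only a generic discrete second-order tracker with the same input/output behaviour (a filtered signal plus its estimated derivative), included to illustrate what a TD produces. The gain, sampling rate and test signal are assumptions.

```python
import numpy as np

def tracking_differentiator(v, dt, r=2000.0):
    """Generic discrete second-order tracker: x1 follows the (noisy) input v and
    x2 approximates its derivative. Larger r tracks faster but filters noise less."""
    x1, x2 = float(v[0]), 0.0
    filtered, derivative = [], []
    for vk in v:
        # critically damped double integrator driven toward the current input sample
        a = -r * (x1 - vk) - 2.0 * np.sqrt(r) * x2
        x1 += dt * x2
        x2 += dt * a
        filtered.append(x1)
        derivative.append(x2)
    return np.array(filtered), np.array(derivative)

# Noisy position-like signal: a 1 Hz sine sampled at 1 kHz.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
signal = np.sin(2 * np.pi * t) + 0.02 * np.random.default_rng(0).normal(size=t.size)

x1, x2 = tracking_differentiator(signal, dt)
true_derivative = 2 * np.pi * np.cos(2 * np.pi * t)
rms = float(np.sqrt(np.mean((x2 - true_derivative) ** 2)))
print("RMS derivative error:", round(rms, 3))
```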

  20. Genetic algorithm-support vector regression for high reliability SHM system based on FBG sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, XiaoLi; Liang, DaKai; Zeng, Jie; Asundi, Anand

    2012-02-01

    Structural Health Monitoring (SHM) based on a fiber Bragg grating (FBG) sensor network has attracted considerable attention in recent years. However, the FBG sensor network is typically embedded in or glued to the structure in a simple series or parallel arrangement. In this case, if optical fiber sensors or fiber nodes fail, the sensors behind the failure point can no longer be interrogated. Therefore, to improve the survivability of the FBG-based sensor system in SHM, it is necessary to build a high-reliability FBG sensor network for SHM engineering applications. In this study, a model reconstruction soft computing recognition algorithm based on genetic algorithm-support vector regression (GA-SVR) is proposed to achieve this reliability of the FBG-based sensor system. Furthermore, an 8-point FBG sensor system is tested experimentally in an aircraft wing box. Prediction of the external loading damage position is an important task for an SHM system; as an example, different failure modes are selected to demonstrate the survivability of the FBG-based sensor network in the SHM system. Simultaneously, the results are compared with the non-reconstructed model based on GA-SVR in each failure mode. Results show that the proposed model reconstruction algorithm based on GA-SVR can still maintain prediction precision when some sensors fail in the SHM system; thus a highly reliable sensor network for the SHM system is facilitated without introducing extra components or noise.

  1. A high precision position sensor design and its signal processing algorithm for a maglev train.

    PubMed

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run. PMID:22778582

  2. Genetic algorithm based system for patient scheduling in highly constrained situations.

    PubMed

    Podgorelec, V; Kokol, P

    1997-12-01

    In medicine and health care there are many situations in which patients have to be scheduled on different devices and/or with different physicians or therapists. These may concern preventive examinations, laboratory tests or convalescent therapies; therefore, we are always looking for an optimal schedule that would result in finishing all the scheduled activities as soon as possible, with the least patient waiting time and maximum device utilization. Since patient scheduling is a highly complex problem, it is impossible to make a good-quality schedule by hand or even with exact heuristic methods. Therefore we developed a powerful automated scheduling method for highly constrained situations based on genetic algorithms and machine learning. In this paper we present the method, together with the whole process of schedule generation, the important parameters used to direct the evolution, and how the algorithm is guaranteed to produce only feasible solutions, not breaking any of the required constraints. We applied the described method to a problem of scheduling patients with different therapy needs on a limited number of therapeutic devices, but the algorithm can be easily modified for use in similar situations. The results are quite encouraging, and since all the solutions are feasible, the method can be easily incorporated into an interactive user interface, which can be of major importance when scheduling patients, and human resources in general, is considered. PMID:9555628

  3. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    The processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartile ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
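
    A minimal version of the described screening step: concentration increments outside the lower/upper quartile ± IQR threshold are attributed to ebullition and the remainder to diffusion. The chamber geometry, unit conversion to areal fluxes and the adaptive handling of individual chamber closures in the original R script are omitted; the time series is synthetic.

```python
import numpy as np

def separate_fluxes(concentration, dt_s=1.0):
    """Split a closed-chamber CH4 concentration time series into diffusion- and
    ebullition-derived increments using a lower/upper quartile +/- IQR threshold."""
    increments = np.diff(concentration)
    q1, q3 = np.percentile(increments, [25, 75])
    iqr = q3 - q1
    ebullition_mask = (increments < q1 - iqr) | (increments > q3 + iqr)
    diffusion_rate = increments[~ebullition_mask].mean() / dt_s     # ppm per second
    ebullition_total = increments[ebullition_mask].sum()            # ppm released in bursts
    return diffusion_rate, ebullition_total, ebullition_mask

# Toy series: slow linear diffusion plus two sudden bubble events.
rng = np.random.default_rng(0)
conc = 2.0 + 0.002 * np.arange(600) + rng.normal(0, 0.0005, 600)
conc[200:] += 0.15       # ebullition event 1
conc[450:] += 0.08       # ebullition event 2

diff_rate, ebul_total, mask = separate_fluxes(conc)
print(f"diffusive rate {diff_rate:.5f} ppm/s, ebullition share {ebul_total:.2f} ppm, "
      f"{mask.sum()} burst increment(s)")
```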

  4. A fast high-order finite difference algorithm for pricing American options

    NASA Astrophysics Data System (ADS)

    Tangman, D. Y.; Gopaul, A.; Bhuruth, M.

    2008-12-01

    We describe an improvement of Han and Wu's algorithm [H. Han, X. Wu, A fast numerical method for the Black-Scholes equation of American options, SIAM J. Numer. Anal. 41 (6) (2003) 2081-2095] for American options. A high-order optimal compact scheme is used to discretise the transformed Black-Scholes PDE under a singularity separating framework. A more accurate free boundary location based on the smooth pasting condition and the use of a non-uniform grid with a modified tridiagonal solver lead to an efficient implementation of the free boundary value problem. Extensive numerical experiments show that the new finite difference algorithm converges rapidly and numerical solutions with good accuracy are obtained. Comparisons with some recently proposed methods for the American options problem are carried out to show the advantage of our numerical method.

  5. A high-order statistical tensor based algorithm for anomaly detection in hyperspectral imagery.

    PubMed

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-01-01

    Recently, high-order statistics have received more and more interest in the field of hyperspectral anomaly detection. However, most of the existing high-order statistics based anomaly detection methods require stepwise iterations since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, which is called COSD (coskewness detector). COSD does not need iteration and can produce a single detection map. The experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm. PMID:25366706

  6. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  7. High-resolution algorithms for the Navier-Stokes equations for generalized discretizations

    NASA Astrophysics Data System (ADS)

    Mitchell, Curtis Randall

Accurate finite volume solution algorithms for the two dimensional Navier Stokes equations and the three dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two dimensional quadrilateral and triangular elements and three dimensional tetrahedral elements will be provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell averaged data is used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringleb's flow, an inviscid shock reflection, a flat plate boundary layer, and a shock induced separation over a flat plate. Three dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error

  8. High concordance of gene expression profiling-correlated immunohistochemistry algorithms in diffuse large B-cell lymphoma, not otherwise specified.

    PubMed

    Hwang, Hee Sang; Park, Chan-Sik; Yoon, Dok Hyun; Suh, Cheolwon; Huh, Jooryung

    2014-08-01

    Diffuse large B-cell lymphoma (DLBCL) is classified into prognostically distinct germinal center B-cell (GCB) and activated B-cell subtypes by gene expression profiling (GEP). Recent reports suggest the role of GEP subtypes in targeted therapy. Immunohistochemistry (IHC) algorithms have been proposed as surrogates of GEP, but their utility remains controversial. Using microarray, we examined the concordance of 4 GEP-correlated and 2 non-GEP-correlated IHC algorithms in 381 DLBCLs, not otherwise specified. Subtypes and variants of DLBCL were excluded to minimize the possible confounding effect on prognosis and phenotype. Survival was analyzed in 138 cyclophosphamide, adriamycin, vincristine, and prednisone (CHOP)-treated and 147 rituximab plus CHOP (R-CHOP)-treated patients. Of the GEP-correlated algorithms, high concordance was observed among Hans, Choi, and Visco-Young algorithms (total concordance, 87.1%; κ score: 0.726 to 0.889), whereas Tally algorithm exhibited slightly lower concordance (total concordance 77.4%; κ score: 0.502 to 0.643). Two non-GEP-correlated algorithms (Muris and Nyman) exhibited poor concordance. Compared with the Western data, incidence of the non-GCB subtype was higher in all algorithms. Univariate analysis showed prognostic significance for Hans, Choi, and Visco-Young algorithms and BCL6, GCET1, LMO2, and BCL2 in CHOP-treated patients. On multivariate analysis, Hans algorithm retained its prognostic significance. By contrast, neither the algorithms nor individual antigens predicted survival in R-CHOP treatment. The high concordance among GEP-correlated algorithms suggests their usefulness as reliable discriminators of molecular subtype in DLBCL, not otherwise specified. Our study also indicates that prognostic significance of IHC algorithms may be limited in R-CHOP-treated Asian patients because of the predominance of the non-GCB type. PMID:24705314
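
    The concordance statistics quoted above (total concordance and κ) can be reproduced for any pair of algorithms with a few lines; the sketch below does so on synthetic GCB/non-GCB calls with an assumed 8% discordance rate, purely to show the computation.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 381
# Synthetic GCB / non-GCB calls from two hypothetical algorithms that mostly agree.
hans = rng.choice(["GCB", "non-GCB"], size=n, p=[0.3, 0.7])
choi = hans.copy()
flip = rng.random(n) < 0.08                      # assumed 8% discordant calls
choi[flip] = np.where(choi[flip] == "GCB", "non-GCB", "GCB")

total_concordance = (hans == choi).mean()
kappa = cohen_kappa_score(hans, choi)
print(f"total concordance {total_concordance:.1%}, Cohen's kappa {kappa:.3f}")
```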

  9. The performance of flux-split algorithms in high-speed viscous flows

    NASA Astrophysics Data System (ADS)

    Gaitonde, Datta; Shang, J. S.

    1992-01-01

    The algorithms are investigated in terms of their behavior in 2D perfect gas laminar viscous flows with attention given to the van Leer, Modified Steger-Warming (MSW), and Roe methods. The techniques are studied in the context of examples including a blunt-body flow at Mach 16, a Mach-14 flow past a 24-deg compression corner, and a Mach-8 type-IV shock-shock interaction. Existing experimental values are compared to the results of the corresponding grid-resolution studies. The algorithms indicate similar surface pressures for the blunt-body and corner flows, but the van Leer approach predicts a very high heat-transfer value. Anomalous carbuncle solutions appear in the blunt-body solutions for the MSW and Roe techniques. Accurate predictions of the separated flow regions are found with the MSW method, the Roe scheme, and the finer grids of the van Leer algorithm, but only the MSW scheme predicts an oscillatory supersonic jet structure in the limit cycle.

  10. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    PubMed

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. PMID:26965325
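
    The least-squares resolution step described above can be illustrated with a small sketch: an observed spectrum is modelled as a non-negative combination of theoretical isotope patterns. The lipid names, peak positions and abundances below are invented placeholders, and SciPy's nnls solver is an assumed stand-in; this is not the authors' code.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      mz = np.linspace(760.0, 766.0, 1200)            # observed m/z axis

      def isotope_pattern(mono_mz, abundances, sigma=0.01):
          """Gaussian peaks at mono_mz + k*1.00335 Da with given relative abundances."""
          pattern = np.zeros_like(mz)
          for k, a in enumerate(abundances):
              pattern += a * np.exp(-0.5 * ((mz - (mono_mz + 1.00335 * k)) / sigma) ** 2)
          return pattern / pattern.max()

      # hypothetical library of theoretical isotope patterns
      library = {
          "lipid A": isotope_pattern(760.585, [1.00, 0.45, 0.12]),
          "lipid B": isotope_pattern(762.600, [1.00, 0.46, 0.12]),
      }
      A = np.column_stack(list(library.values()))

      # synthetic "observed" spectrum: a mixture of the two species plus noise
      observed = 3.0 * A[:, 0] + 1.2 * A[:, 1] + 0.02 * rng.normal(size=mz.size)

      amounts, _ = nnls(A, observed)                  # non-negative least squares
      for name, amt in zip(library, amounts):
          print(f"{name}: {amt:.2f}")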

  11. Spectral deblurring: an algorithm for high-resolution, hybrid spectral CT

    NASA Astrophysics Data System (ADS)

    Clark, D. P.; Badea, C. T.

    2015-03-01

    We are developing a hybrid, dual-source micro-CT system based on the combined use of an energy integrating (EID) x-ray detector and a photon counting x-ray detector (PCXD). Due to their superior spectral resolving power, PCXDs have the potential to reduce radiation dose and to enable functional and molecular imaging with CT. In most current PCXDs, however, spatial resolution and field of view are limited by hardware development and charge sharing effects. To address these problems, we propose spectral deblurring—a relatively simple algorithm for increasing the spatial resolution of hybrid, spectral CT data. At the heart of the algorithm is the assumption that the underlying CT data is piecewise constant, enabling robust recovery in the presence of noise and spatial blur by enforcing gradient sparsity. After describing the proposed algorithm, we summarize simulation experiments which assess the trade-offs between spatial resolution, contrast, and material decomposition accuracy given realistic levels of noise. When the spatial resolution between imaging chains has a ratio of 5:1, spectral deblurring results in a 52% increase in the material decomposition accuracy of iodine, gadolinium, barium, and water vs. linear interpolation. For a ratio of 10:1, a realistic representation of our hybrid imaging system, a 52% improvement was also seen. Overall, we conclude that the performance breaks down around high frequency and low contrast structures. Following the simulation experiments, we apply the algorithm to ex vivo data acquired in a mouse injected with an iodinated contrast agent and surrounded by vials of iodine, gadolinium, barium, and water.

  12. Development and Characterization of High-Efficiency, High-Specific Impulse Xenon Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Hofer, Richard R.; Jacobson, David (Technical Monitor)

    2004-01-01

    This dissertation presents research aimed at extending the efficient operation of 1600 s specific impulse Hall thruster technology to the 2000 to 3000 s range. Motivated by previous industry efforts and mission studies, the aim of this research was to develop and characterize xenon Hall thrusters capable of both high-specific impulse and high-efficiency operation. During the development phase, the laboratory-model NASA 173M Hall thrusters were designed and their performance and plasma characteristics were evaluated. Experiments with the NASA-173M version 1 (v1) validated the plasma lens magnetic field design. Experiments with the NASA 173M version 2 (v2) showed there was a minimum current density and optimum magnetic field topography at which efficiency monotonically increased with voltage. Comparison of the thrusters showed that efficiency can be optimized for specific impulse by varying the plasma lens. During the characterization phase, additional plasma properties of the NASA 173Mv2 were measured and a performance model was derived. Results from the model and experimental data showed how efficient operation at high-specific impulse was enabled through regulation of the electron current with the magnetic field. The electron Hall parameter was approximately constant with voltage, which confirmed efficient operation can be realized only over a limited range of Hall parameters.

  13. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    SciTech Connect

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  14. Differential evolution algorithm for nonlinear inversion of high-frequency Rayleigh wave dispersion curves

    NASA Astrophysics Data System (ADS)

    Song, Xianhai; Li, Lei; Zhang, Xueqiang; Huang, Jianquan; Shi, Xinchun; Jin, Si; Bai, Yiming

    2014-10-01

    In recent years, Rayleigh waves are gaining popularity to obtain near-surface shear (S)-wave velocity profiles. However, inversion of Rayleigh wave dispersion curves is challenging for most local-search methods due to its high nonlinearity and to its multimodality. In this study, we proposed and tested a new Rayleigh wave dispersion curve inversion scheme based on differential evolution (DE) algorithm. DE is a novel stochastic search approach that possesses several attractive advantages: (1) Capable of handling non-differentiable, non-linear and multimodal objective functions because of its stochastic search strategy; (2) Parallelizability to cope with computation intensive objective functions without being time consuming by using a vector population where the stochastic perturbation of the population vectors can be done independently; (3) Ease of use, i.e. few control variables to steer the minimization/maximization by DE's self-organizing scheme; and (4) Good convergence properties. The proposed inverse procedure was applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate calculation efficiency and stability of DE, we firstly inverted four noise-free and four noisy synthetic data sets. Secondly, we investigated effects of the number of layers on DE algorithm and made an uncertainty appraisal analysis by DE algorithm. Thirdly, we made a comparative analysis with genetic algorithms (GA) by a synthetic data set to further investigate the performance of the proposed inverse procedure. Finally, we inverted a real-world example from a waste disposal site in NE Italy to examine the applicability of DE on Rayleigh wave dispersion curves. Furthermore, we compared the performance of the proposed approach to that of GA to further evaluate scores of the inverse procedure described here. Results from both synthetic and actual field data demonstrate that differential evolution algorithm applied
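
    A minimal sketch of how such a dispersion-curve inversion can be set up with an off-the-shelf differential evolution optimizer is given below. The forward_dispersion function is a crude placeholder for a real forward modeller (e.g. a Thomson-Haskell solver), and the data and bounds are invented; only the overall DE workflow reflects the approach described above.

      import numpy as np
      from scipy.optimize import differential_evolution

      freqs = np.linspace(5.0, 50.0, 30)              # Hz
      observed_vr = np.linspace(600.0, 250.0, 30)     # dummy observed phase velocities (m/s)

      def forward_dispersion(vs_layers):
          """Placeholder forward model: frequency-dependent mix of the two layer velocities."""
          w = np.linspace(0.2, 0.8, freqs.size)       # shallow layer dominates at high frequency
          return 0.92 * (w * vs_layers[0] + (1.0 - w) * vs_layers[1])

      def misfit(vs_layers):
          return np.sqrt(np.mean((forward_dispersion(vs_layers) - observed_vr) ** 2))

      bounds = [(100.0, 800.0), (200.0, 1500.0)]      # S-wave velocity bounds, 2-layer model
      result = differential_evolution(misfit, bounds, popsize=20, seed=0)
      print(result.x, result.fun)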

  15. Enhanced algorithm based on persistent scatterer interferometry for the estimation of high-rate land subsidence

    NASA Astrophysics Data System (ADS)

    Sadeghi, Zahra; Valadan Zoej, Mohammad Javad; Dehghani, Maryam; Chang, Ni-Bin

    2012-01-01

    Persistent scatterer interferometry (PSI) techniques using amplitude analysis and considering a temporal deformation model for PS pixel selection are unable to identify PS pixels in rural areas lacking human-made structures. In contrast, high rates of land subsidence lead to significant phase-unwrapping errors in a recently developed PSI algorithm (StaMPS) that applies phase stability and amplitude analysis to select the PS pixels in rural areas. The objective of this paper is to present an enhanced algorithm based on PSI to estimate the deformation rate in rural areas undergoing high and nearly constant rates of deformation. The proposed approach integrates the strengths of all of the existing PSI algorithms in PS pixel selection and phase unwrapping. PS pixels are first selected based on the amplitude information and phase-stability estimation as performed in StaMPS. The phase-unwrapping step, including the deformation rate and phase-ambiguity estimation, is then performed using least-squares ambiguity decorrelation adjustment (LAMBDA). The atmospheric phase screen (APS) and nonlinear deformation contribution to the phase are estimated by applying a high-pass temporal filter to the residuals derived from the LAMBDA method. The final deformation rate and the ambiguity parameter are re-estimated after subtracting the APS and the nonlinear deformation from that of the initial phase. The proposed method is applied to 22 ENVISAT ASAR images of southwestern Tehran basin captured between 2003 and 2008. A quantitative comparison with the results obtained with leveling and GPS measurements demonstrates the significant improvement of the PSI technique.

  16. Age specific fecundity of Lygus hesperus in high, fluctuating temperatures.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We have simulated hourly temperatures to examine Lygus response to hot summers in the San Joaquin Valley. A constant temperature of 33°C quickly killed Lygus, and SJV temperatures routinely surpass this level. Average hourly temperatures were tested for the months May, July, and September. Age specific ...

  17. Specification of High Activity Gamma-Ray Sources.

    ERIC Educational Resources Information Center

    International Commission on Radiation Units and Measurements, Washington, DC.

    The report is concerned with making recommendations for the specifications of gamma ray sources, which relate to the quantity of radioactive material and the radiation emitted. Primary consideration is given to sources in teletherapy and to a lesser extent those used in industrial radiography and in irradiation units used in industry and research.…

  18. An Efficient Algorithm for Some Highly Nonlinear Fractional PDEs in Mathematical Physics

    PubMed Central

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs) and subsequently Reduced Differential Transform Method (RDTM) is applied on the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by making use of inverse transformation which yields it in terms of original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature. PMID:25525804

  19. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    SciTech Connect

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates or template-free implants (such as robotic templates). Methods: Eight clinical cases were chosen randomly from a bank of patients previously treated in our clinic to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen from different dosimetry criteria and will automatically provide the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, it was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested in breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
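
    The core geometric idea above, distributing catheters uniformly inside the target contour with a centroidal Voronoi tessellation, can be sketched with plain Lloyd iterations as below. The circular target, sample counts and iteration budget are illustrative assumptions, and the clinical dose optimization (IPSA) step is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)

      def sample_target(n):
          """Uniform samples inside a unit disc (placeholder for the real PTV contour)."""
          pts = rng.uniform(-1.0, 1.0, size=(4 * n, 2))
          pts = pts[np.sum(pts ** 2, axis=1) <= 1.0]
          return pts[:n]

      def cvt_catheter_positions(n_catheters, n_samples=20000, n_iter=50):
          samples = sample_target(n_samples)
          centers = sample_target(n_catheters)        # initial catheter positions
          for _ in range(n_iter):
              # assign each sample point to its nearest catheter (Voronoi region)
              d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
              labels = np.argmin(d, axis=1)
              # Lloyd step: move each catheter to the centroid of its region
              for k in range(n_catheters):
                  members = samples[labels == k]
                  if members.size:
                      centers[k] = members.mean(axis=0)
          return centers

      print(cvt_catheter_positions(12))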

  20. Optimization of the K-means algorithm for the solution of high dimensional instances

    NASA Astrophysics Data System (ADS)

    Pérez, Joaquín; Pazos, Rodolfo; Olivares, Víctor; Hidalgo, Miguel; Ruiz, Jorge; Martínez, Alicia; Almanza, Nelva; González, Moisés

    2016-06-01

    This paper addresses the problem of clustering instances with a high number of dimensions. In particular, a new heuristic for reducing the complexity of the K-means algorithm is proposed. Traditionally, there are two approaches that deal with the clustering of instances with high dimensionality. The first executes a preprocessing step to remove those attributes of limited importance. The second, called divide and conquer, creates subsets that are clustered separately and later their results are integrated through post-processing. In contrast, this paper proposes a new solution which consists of the reduction of distance calculations from the objects to the centroids at the classification step. This heuristic is derived from the visual observation of the clustering process of K-means, in which it was found that the objects can only migrate to adjacent clusters without crossing distant clusters. Therefore, this heuristic can significantly reduce the number of distance calculations from an object to the centroids of the potential clusters that it may be classified to. To validate the proposed heuristic, a set of experiments was designed with synthetic and high dimensional instances. One of the most notable results was obtained for an instance of 25,000 objects and 200 dimensions, where the execution time was reduced by up to 96.5% and the quality of the solution decreased by only 0.24% when compared to the K-means algorithm.
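
    A small sketch of the heuristic described above follows: after an initial full assignment, each object only compares distances to the centroids adjacent to its current cluster instead of all k centroids. The adjacency rule used here (the few nearest centroids of the object's current centroid) is an illustrative assumption, not necessarily the authors' exact criterion.

      import numpy as np

      def restricted_kmeans(X, k, n_neighbors=3, n_iter=20, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), size=k, replace=False)]
          # one full assignment to initialise the labels
          labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
          for _ in range(n_iter):
              # centroid adjacency: each centroid's n_neighbors closest centroids (plus itself)
              cd = np.linalg.norm(centers[:, None] - centers[None], axis=2)
              neighbors = np.argsort(cd, axis=1)[:, : n_neighbors + 1]
              # restricted assignment: only distances to adjacent centroids are computed
              for i, x in enumerate(X):
                  cand = neighbors[labels[i]]
                  d = np.linalg.norm(centers[cand] - x, axis=1)
                  labels[i] = cand[np.argmin(d)]
              # usual centroid update step
              for j in range(k):
                  pts = X[labels == j]
                  if len(pts):
                      centers[j] = pts.mean(axis=0)
          return labels, centers

      X = np.random.default_rng(1).normal(size=(2000, 50))
      labels, centers = restricted_kmeans(X, k=10)
      print(np.bincount(labels, minlength=10))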

  1. A truncated Levenberg-Marquardt algorithm for the calibration of highly parameterized nonlinear models

    SciTech Connect

    Finsterle, S.; Kowalsky, M.B.

    2010-10-15

    We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.
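
    The central update described above, a Levenberg-Marquardt step restricted to the leading singular directions of the Jacobian, can be written compactly as below. The Jacobian and residual are synthetic, and the truncation rank and damping value are arbitrary illustration choices rather than the authors' settings.

      import numpy as np

      def truncated_lm_step(J, residual, lam, k):
          """Levenberg-Marquardt update restricted to the k leading singular directions,
          solving min ||J dp + r||^2 + lam ||dp||^2 within that subspace."""
          U, s, Vt = np.linalg.svd(J, full_matrices=False)
          Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T
          factors = sk / (sk ** 2 + lam)              # damped pseudo-inverse in the subspace
          return -Vk @ (factors * (Uk.T @ residual))

      # toy example: 50 observations, 20 poorly conditioned parameters
      rng = np.random.default_rng(0)
      J = rng.normal(size=(50, 20)) @ np.diag(np.logspace(0, -6, 20))
      r = rng.normal(size=50)
      dp = truncated_lm_step(J, r, lam=1e-2, k=8)
      print(dp)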

  2. Surface contribution to high-order aberrations using the Aldis theorem and Andersen's algorithms

    NASA Astrophysics Data System (ADS)

    Ortiz-Estardante, A.; Cornejo-Rodriguez, Alejandro

    1990-07-01

    Formulae and computer programs were developed for the surface contributions to high-order aberration coefficients, using the Aldis theorem and Andersen's algorithms, for a symmetrical optical system. 2. THEORY. Using the algorithms developed by T. B. Andersen, which allow the high-order aberration coefficients of an optical system to be calculated, we obtained a set of equations for the contribution of each surface of a centered optical system to such aberration coefficients by combining Andersen's equations with the so-called Aldis theorem. 3. COMPUTER PROGRAMS AND EXAMPLES. The study for the case of an object at infinity has been completed, and more recently the case of an object at a finite distance has also been finished. The equations have been programmed for the two above-mentioned situations. Some typical optical system designs will be presented, and some advantages and disadvantages of the developed formulae and method will be discussed. 4. CONCLUSIONS. The algorithm developed by Andersen has a compact notation and structure which is suitable for computers. Using Andersen's results together with the Aldis theorem, a set of equations was derived and programmed for the surface contributions of a centered optical system to high-order aberrations. 5. REFERENCES. 1. T. B. Andersen, Appl. Opt., 3800 (1980). 2. A. Cox, A System of Optical Design, Focal Press, 1964.

  3. Algorithms for Low-Cost High Accuracy Geomagnetic Measurements in LEO

    NASA Astrophysics Data System (ADS)

    Beach, T. L.; Zesta, E.; Allen, L.; Chepko, A.; Bonalsky, T.; Wendel, D. E.; Clavier, O.

    2013-12-01

    Geomagnetic field measurements are a fundamental, key parameter measurement for any space weather application, particularly for tracking the electromagnetic energy input in the Ionosphere-Thermosphere system and for high latitude dynamics governed by the large-scale field-aligned currents. The full characterization of the Magnetosphere-Ionosphere-Thermosphere coupled system necessitates measurements with higher spatial/temporal resolution and from multiple locations simultaneously. This becomes extremely challenging in the current state of shrinking budgets. Traditionally, including a science-grade magnetometer in a mission necessitates very costly integration and design (sensor on long boom) and imposes magnetic cleanliness restrictions on all components of the bus and payload. This work presents an innovative algorithm approach that enables high quality magnetic field measurements by one or more high-quality magnetometers mounted on the spacecraft without booms. The algorithm estimates the background field using multiple magnetometers and current telemetry on board a spacecraft. Results of a hardware-in-the-loop simulation showed an order of magnitude reduction in the magnetic effects of spacecraft onboard time-varying currents--from 300 nT to an average residual of 15 nT.

  4. Real-time high-resolution downsampling algorithm on many-core processor for spatially scalable video coding

    NASA Astrophysics Data System (ADS)

    Buhari, Adamu Muhammad; Ling, Huo-Chong; Baskaran, Vishnu Monn; Wong, KokSheik

    2015-01-01

    The progression toward spatially scalable video coding (SVC) solutions for ubiquitous endpoint systems introduces challenges to sustain real-time frame rates in downsampling high-resolution videos into multiple layers. In addressing these challenges, we put forward a hardware accelerated downsampling algorithm on a parallel computing platform. First, we investigate the principal architecture of a serial downsampling algorithm in the Joint-Scalable-Video-Model reference software to identify the performance limitations for spatial SVC. Then, a parallel multicore-based downsampling algorithm is studied as a benchmark. Experimental results for this algorithm using an 8-core processor exhibit performance speedup of 5.25× against the serial algorithm in downsampling a quad extended graphics array at 1536p video resolution into three lower resolution layers (i.e., Full-HD at 1080p, HD at 720p, and Quarter-HD at 540p). However, the achieved speedup here does not translate into the minimum required frame rate of 15 frames per second (fps) for real-time video processing. To improve the speedup, a many-core based downsampling algorithm using the compute unified device architecture parallel computing platform is proposed. The proposed algorithm increases the performance speedup to 26.14× against the serial algorithm. Crucially, the proposed algorithm exceeds the target frame rate of 15 fps, which in turn is advantageous to the overall performance of the video encoding process.

  5. Coaxial plasma thrusters for high specific impulse propulsion

    NASA Technical Reports Server (NTRS)

    Schoenberg, Kurt F.; Gerwin, Richard A.; Barnes, Cris W.; Henins, Ivars; Mayo, Robert; Moses, Ronald, Jr.; Scarberry, Richard; Wurden, Glen

    1991-01-01

    A fundamental basis for coaxial plasma thruster performance is presented and the steady-state, ideal MHD properties of a coaxial thruster using an annular magnetic nozzle are discussed. Formulas for power usage, thrust, mass flow rate, and specific impulse are acquired and employed to assess thruster performance. The performance estimates are compared with the observed properties of an unoptimized coaxial plasma gun. These comparisons support the hypothesis that ideal MHD has an important role in coaxial plasma thruster dynamics.

  6. High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    2004-01-01

    The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R(sub rs)(lambda), where R(sub rs)(lambda) is defined as the water-leaving radiance, L(sub w)(lambda), divided by the downwelling irradiance just above the sea surface, E(sub d)(lambda,0(+)). The R(sub rs)(lambda) model (Section 3) has two free variables, the absorption coefficient due to phytoplankton at 675 nm, a(sub phi)(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a(sub g)(400). The R(rs) model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene. These control the spectral shapes of the optical constituents of the model. R(sub rs)(lambda(sub i)) values from the MODIS data processing system are placed into the model, the model is inverted, and a(sub phi)(675), a(sub g)(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi

  7. A High-Speed Pipelined Degree-Computationless Modified Euclidean Algorithm Architecture for Reed-Solomon Decoders

    NASA Astrophysics Data System (ADS)

    Lee, Seungbeom; Lee, Hanho

    This paper presents a novel high-speed low-complexity pipelined degree-computationless modified Euclidean (pDCME) algorithm architecture for high-speed RS decoders. The pDCME algorithm allows elimination of the degree-computation so as to reduce hardware complexity and obtain high-speed processing. A high-speed RS decoder based on the pDCME algorithm has been designed and implemented with 0.13-μm CMOS standard cell technology in a supply voltage of 1.1V. The proposed RS decoder operates at a clock frequency of 660MHz and has a throughput of 5.3Gb/s. The proposed architecture requires approximately 15% fewer gate counts and a simpler control logic than architectures based on the popular modified Euclidean algorithm.

  8. Research to define algorithms appropriate to a high-data-rate laser-wavelength-measurement instrument

    SciTech Connect

    Byer, R.L.

    1982-04-01

    Progress made over a four year period on a computer controlled laser wavelength meter is summarized. The optical system of the laser wavelength meter consists of a series of Fabry-Perot interferometers and one high resolution confocal interferometer, preceded by a one-half meter grating spectrometer. The interferometrically generated fringes are imaged on linear diode arrays, read into computer memory and processed by an efficient, noise-resistant algorithm which calculates the wavelength. The algorithm fitting routine generates high accuracy fringe fits at a rate of 5 Hz with the present LSI 11/2 processor. A 10 Hz fitting rate is expected with the LSI 11/23 processor. Fringes are fit with an rms error of less than ±0.01. The wavelength measurement accuracy is thus one hundredth of the free spectral range of the interferometers, which at present are 10 cm⁻¹, 1 cm⁻¹, 0.1 cm⁻¹ and 0.01 cm⁻¹. Thus wavelengths can be measured to ±0.0001 cm⁻¹ or ±3 MHz. The wavelength meter interferometers are calibrated by a stabilized HeNe laser source with a long term stability of better than ±1 MHz. Fringes have been fit for over 10⁶ cycles to demonstrate the stability of the algorithm. When the hardware is transferred from the present breadboard mounting to final mounting, the wavelength meter will provide an accurate and versatile approach for measuring and displaying cw, pulsed, single mode or multi mode laser spectra.

  9. The Optimized Block-Regression Fusion Algorithm for Pansharpening of Very High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, J. X.; Yang, J. H.; Reinartz, P.

    2016-06-01

    Pan-sharpening of very high resolution remotely sensed imagery needs to enhance spatial details while preserving spectral characteristics, and to allow the sharpened result to be adjusted to realize different emphases between the two abilities. In order to meet these requirements, this paper is aimed at providing an innovative solution. The block-regression-based algorithm (BR), which was previously presented for fusion of SAR and optical imagery, is first applied to sharpen very high resolution satellite imagery, and the important parameter for adjustment of the fusion result, i.e., block size, is optimized in two experiments on Worldview-2 and QuickBird datasets, in which the optimal block size is selected through quantitative comparison of the fusion results for different block sizes. Compared quantitatively with five fusion algorithms (i.e., PC, CN, AWT, Ehlers, BDF) in fusion effects, BR is reliable for different data sources and can maximize the enhancement of spatial details at the expense of a minimum spectral distortion.
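
    An illustrative sketch of the block-regression idea, assumed rather than taken from the paper, is shown below: within each block, the upsampled multispectral band is regressed against the panchromatic band, and the fitted relation is used to carry the pan detail into the fused band. The toy arrays stand in for real Worldview-2/QuickBird data, and block is the tunable block size discussed above.

      import numpy as np

      def block_regression_fuse(ms_up, pan, block=64):
          """ms_up and pan are 2-D arrays at the pan resolution; returns the sharpened band."""
          fused = np.empty_like(ms_up, dtype=float)
          h, w = pan.shape
          for i in range(0, h, block):
              for j in range(0, w, block):
                  m = ms_up[i:i + block, j:j + block].ravel()
                  p = pan[i:i + block, j:j + block].ravel()
                  a, b = np.polyfit(p, m, deg=1)      # least-squares fit m ~ a*p + b in the block
                  fused[i:i + block, j:j + block] = a * pan[i:i + block, j:j + block] + b
          return fused

      # toy data standing in for a real very high resolution scene
      rng = np.random.default_rng(0)
      pan = rng.normal(size=(256, 256))
      ms_up = 0.8 * pan + 0.1 * rng.normal(size=(256, 256)) + 2.0
      print(block_regression_fuse(ms_up, pan, block=64).shape)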

  10. Monte Carlo cluster algorithm for fluid phase transitions in highly size-asymmetrical binary mixtures

    NASA Astrophysics Data System (ADS)

    Ashton, Douglas J.; Liu, Jiwen; Luijten, Erik; Wilding, Nigel B.

    2010-11-01

    Highly size-asymmetrical fluid mixtures arise in a variety of physical contexts, notably in suspensions of colloidal particles to which much smaller particles have been added in the form of polymers or nanoparticles. Conventional schemes for simulating models of such systems are hamstrung by the difficulty of relaxing the large species in the presence of the small one. Here we describe how the rejection-free geometrical cluster algorithm of Liu and Luijten [J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004)] can be embedded within a restricted Gibbs ensemble to facilitate efficient and accurate studies of fluid phase behavior of highly size-asymmetrical mixtures. After providing a detailed description of the algorithm, we summarize the bespoke analysis techniques of [Ashton et al., J. Chem. Phys. 132, 074111 (2010)] that permit accurate estimates of coexisting densities and critical-point parameters. We apply our methods to study the liquid-vapor phase diagram of a particular mixture of Lennard-Jones particles having a 10:1 size ratio. As the reservoir volume fraction of small particles is increased in the range of 0%-5%, the critical temperature decreases by approximately 50%, while the critical density drops by some 30%. These trends imply that in our system, adding small particles decreases the net attraction between large particles, a situation that contrasts with hard-sphere mixtures where an attractive depletion force occurs.

  11. Defining and Evaluating Classification Algorithm for High-Dimensional Data Based on Latent Topics

    PubMed Central

    Luo, Le; Li, Li

    2014-01-01

    Automatic text categorization is one of the key techniques in information retrieval and the data mining field. The classification is usually time-consuming when the training dataset is large and high-dimensional. Many methods have been proposed to solve this problem, but few can achieve satisfactory efficiency. In this paper, we present a method which combines the Latent Dirichlet Allocation (LDA) algorithm and the Support Vector Machine (SVM). LDA is first used to generate a reduced-dimensional topic representation that serves as the feature set in the vector space model (VSM). It is able to reduce features dramatically but keeps the necessary semantic information. The SVM is then employed to classify the data based on the generated features. We evaluate the algorithm on the 20 Newsgroups and Reuters-21578 datasets. The experimental results show that the classification based on our proposed LDA+SVM model achieves high performance in terms of precision, recall and F1 measure. Further, it can achieve this within a much shorter time-frame. Our process improves greatly upon the previous work in this field and displays strong potential to achieve a streamlined classification process for a wide range of applications. PMID:24416136
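
    A minimal sketch of the LDA-plus-SVM pipeline described above, written with scikit-learn as an assumed toolchain (not the authors' implementation), could look like the following; the vocabulary size, number of topics and SVM settings are illustrative choices.

      from sklearn.datasets import fetch_20newsgroups
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
      test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

      model = make_pipeline(
          CountVectorizer(max_features=20000, stop_words="english"),
          LatentDirichletAllocation(n_components=100, random_state=0),  # topic proportions as features
          SVC(kernel="linear", C=1.0),                                  # classify on the topic features
      )
      model.fit(train.data, train.target)
      print("test accuracy:", model.score(test.data, test.target))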

  12. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    NASA Astrophysics Data System (ADS)

    Taylor, Richard

    The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully-integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables of Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated in the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and examination of the algorithms scaling with cell size and symmetry is also reported.

  13. Effects of high count rate and gain shift on isotope identification algorithms

    SciTech Connect

    Robinson, Sean M.; Kiff, Scott D.; Ashbaker, Eric D.; Flumerfelt, Eric L.; Salvitti, Matthew

    2009-11-01

    Spectroscopic gamma-ray detectors are used for many research, industrial, and homeland- security applications. Thallium-doped sodium iodide, (NaI(Tl)), scintillation crystals coupled to photomultiplier tubes provide medium-resolution spectral data about the surrounding environment. NaI(Tl)-based detectors, paired with spectral identification algorithms, are often effective for identifying gamma-ray sources by isotope. However, intrinsic limitations for NaI(Tl) systems exist, including gain shifts and spectral marring (e.g., loss of resolution and count-rate saturation) at high count rates. These effects are hardware dependent and have strong effects on the radioisotopic identification capability of NaI(Tl)-based systems. In this work, the effects of high count rate on the response of isotope-identification algorithms are explored. It is shown that a small gain shift of a few tens of keV is sufficient to disturb identification. The onset of this and other spectral effects is estimated for NaI(Tl) crystals, and a mechanism for mitigating these effects by estimating and correcting for them is implemented and evaluated.
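
    One simple way to realize the mitigation idea described above is to estimate a multiplicative gain correction from the apparent position of a known background line and rescale the energy calibration before the identification algorithm runs. The sketch below uses the K-40 line at 1460.8 keV, a crude argmax peak finder and synthetic data; all of these are illustrative assumptions rather than the authors' implementation.

      import numpy as np

      REFERENCE_KEV = 1460.8          # K-40 background line, assumed to be identifiable

      def estimate_gain_correction(energies_kev, counts, window=100.0):
          """Multiplicative correction that moves the strongest peak near the
          reference line back onto its nominal energy (crude argmax peak finder)."""
          mask = np.abs(energies_kev - REFERENCE_KEV) < window
          apparent = energies_kev[mask][np.argmax(counts[mask])]
          return REFERENCE_KEV / apparent

      # synthetic spectrum whose K-40 peak has drifted up by about 3%
      e = np.arange(0.0, 3000.0, 3.0)
      c = 50.0 * np.exp(-0.5 * ((e - 1460.8 * 1.03) / 20.0) ** 2) + 5.0
      gain = estimate_gain_correction(e, c)
      corrected_energies = e * gain    # re-calibrated energy axis fed to the identifier
      print(f"estimated gain factor: {gain:.3f}")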

  14. Effects of High Count Rate and Gain Shift on Isotope Identification Algorithms

    SciTech Connect

    Robinson, Sean M.; Kiff, Scott D.; Ashbaker, Eric D.; Bender, Sarah E.; Flumerfelt, Eric L.; Salvitti, Matthew; Borgardt, James D.; Woodring, Mitchell L.

    2007-12-31

    Spectroscopic gamma-ray detectors are used for many research applications, as well as Homeland Security screening applications. Sodium iodide (NaI) scintillator crystals coupled with photomultiplier tubes (PMTs) provide medium-resolution spectral data about the surrounding environment. NaI based detectors, paired with spectral identification algorithms, are often effective in identifying sources of interest by isotope. However, intrinsic limitations exist for NaI systems because of gain shifts and spectral marring (e.g., loss of resolution and count-rate saturation) at high count rates. These effects are hardware dependent, and have strong effects on the radioisotopic identification capability of these systems. In this work, the effects of high count rate on the capability of isotope identification algorithms are explored. It is shown that a small gain shift of a few tens of keV is sufficient to disturb identification. The onset of this and other spectral effects is estimated for several systems, and a mechanism for mitigating these effects by estimating and correcting for them is implemented and evaluated.

  15. L2-Boosting algorithm applied to high-dimensional problems in genomic selection.

    PubMed

    González-Recio, Oscar; Weigel, Kent A; Gianola, Daniel; Naya, Hugo; Rosa, Guilherme J M

    2010-06-01

    The L2-Boosting algorithm is one of the most promising machine-learning techniques that has appeared in recent decades. It may be applied to high-dimensional problems such as whole-genome studies, and it is relatively simple from a computational point of view. In this study, we used this algorithm in a genomic selection context to make predictions of yet to be observed outcomes. Two data sets were used: (1) productive lifetime predicted transmitting abilities from 4702 Holstein sires genotyped for 32 611 single nucleotide polymorphisms (SNPs) derived from the Illumina BovineSNP50 BeadChip, and (2) progeny averages of food conversion rate, pre-corrected by environmental and mate effects, in 394 broilers genotyped for 3481 SNPs. Each of these data sets was split into training and testing sets, the latter comprising dairy or broiler sires whose ancestors were in the training set. Two weak learners, ordinary least squares (OLS) and non-parametric (NP) regression, were used for the L2-Boosting algorithm, to provide a stringent evaluation of the procedure. This algorithm was compared with BL [Bayesian LASSO (least absolute shrinkage and selection operator)] and BayesA regression. Learning tasks were carried out in the training set, whereas validation of the models was performed in the testing set. Pearson correlations between predicted and observed responses in the dairy cattle (broiler) data set were 0.65 (0.33), 0.53 (0.37), 0.66 (0.26) and 0.63 (0.27) for OLS-Boosting, NP-Boosting, BL and BayesA, respectively. The smallest bias and mean-squared errors (MSEs) were obtained with OLS-Boosting in both the dairy cattle (0.08 and 1.08, respectively) and broiler (-0.011 and 0.006) data sets. In the dairy cattle data set, the BL was more accurate (bias=0.10 and MSE=1.10) than BayesA (bias=1.26 and MSE=2.81), whereas no differences between these two methods were found in the broiler data set. L2-Boosting with a suitable learner was found to be a competitive
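
    The flavour of algorithm evaluated above, componentwise L2-Boosting with an OLS weak learner, can be sketched in a few lines: at each step the single predictor that best fits the current residual is selected and a shrunken step is taken towards it. The simulated SNP-like data, step count and shrinkage value are illustrative; this is not the authors' code.

      import numpy as np

      def l2_boost(X, y, n_steps=200, nu=0.1):
          """Componentwise L2-Boosting: repeatedly fit the single (centred) predictor
          that best explains the current residual and take a shrunken step towards it."""
          Xc = X - X.mean(axis=0)
          beta = np.zeros(Xc.shape[1])
          intercept = y.mean()
          resid = y - intercept
          col_ss = np.sum(Xc ** 2, axis=0)
          for _ in range(n_steps):
              coefs = Xc.T @ resid / col_ss                   # per-column OLS coefficients
              losses = np.sum((resid[:, None] - Xc * coefs) ** 2, axis=0)
              j = np.argmin(losses)                           # best-fitting predictor
              beta[j] += nu * coefs[j]
              resid -= nu * coefs[j] * Xc[:, j]
          return intercept, beta

      # simulated SNP-like data: 400 individuals, 3000 markers, 10 true effects
      rng = np.random.default_rng(0)
      X = rng.choice([0.0, 1.0, 2.0], size=(400, 3000))
      true = np.zeros(3000)
      true[:10] = rng.normal(size=10)
      y = X @ true + rng.normal(size=400)
      b0, b = l2_boost(X, y)
      print("markers with non-negligible estimated effect:", int(np.sum(np.abs(b) > 0.05)))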

  16. A protein multiplex microarray substrate with high sensitivity and specificity

    PubMed Central

    Fici, Dolores A.; McCormick, William; Brown, David W.; Herrmann, John E.; Kumar, Vikram; Awdeh, Zuheir L.

    2010-01-01

    The problems that have been associated with protein multiplex microarray immunoassay substrates and existing technology platforms include: binding, sensitivity, a low signal to noise ratio, target immobilization and the optimal simultaneous detection of diverse protein targets. Current commercial substrates for planar multiplex microarrays rely on protein attachment chemistries that range from covalent attachment to affinity ligand capture, to simple adsorption. In this pilot study, experimental performance parameters for direct monoclonal mouse IgG detection were compared for available two and three dimensional slide surface coatings with a new colloidal nitrocellulose substrate. New technology multiplex microarrays were also developed and evaluated for the detection of pathogen specific antibodies in human serum and the direct detection of enteric viral antigens. Data supports the nitrocellulose colloid as an effective reagent with the capacity to immobilize sufficient diverse protein target quantities for increased specific signal without compromising authentic protein structure. The nitrocellulose colloid reagent is compatible with the array spotters and scanners routinely used for microarray preparation and processing. More importantly, as an alternative to fluorescence, colorimetric chemistries may be used for specific and sensitive protein target detection. The advantages of the nitrocellulose colloid platform indicate that this technology may be a valuable tool for the further development and expansion of multiplex microarray immunoassays in both the clinical and research laboratory environment. PMID:20974147

  17. Simulation of Trajectories for High Specific Impulse Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Difficulties in approximating flight times and deliverable masses for continuous thrust propulsion systems have complicated comparison and evaluation of proposed propulsion concepts. These continuous thrust propulsion systems are of interest to many groups, not the least of which are the electric propulsion and fusion communities. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. The analytical method derived in the companion paper was also used to simulate the trajectory. The accuracy of this method is discussed in the paper.

  18. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
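
    The core least-squares step described above can be sketched as follows: within one image subsection, find the linear combination of reference PSF sections that best reproduces the target section, then subtract it. The array sizes and synthetic data are illustrative only, and the subsection bookkeeping of the full algorithm is omitted.

      import numpy as np

      def optimal_reference(target_section, reference_sections):
          """target_section: (npix,); reference_sections: (nref, npix).
          Returns the least-squares optimal reference section and its coefficients."""
          A = reference_sections.T                              # (npix, nref)
          coeffs, *_ = np.linalg.lstsq(A, target_section, rcond=None)
          return A @ coeffs, coeffs

      # synthetic example: 30 reference sections of a 50x50 pixel subsection
      rng = np.random.default_rng(0)
      refs = rng.normal(size=(30, 2500))
      target = 0.7 * refs[3] + 0.3 * refs[11] + 0.01 * rng.normal(size=2500)

      model, c = optimal_reference(target, refs)
      print("residual rms:", (target - model).std())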

  19. Some algorithmic issues in full-waveform inversion of teleseismic data for high-resolution lithospheric imaging

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Beller, Stephen; Nolet, Guust; Operto, Stephane; Brossier, Romain; Métivier, Ludovic; Paul, Anne; Virieux, Jean

    2014-05-01

    The current development of dense seismic arrays and high performance computing makes the application of full-waveform inversion (FWI) to teleseismic data for high-resolution lithospheric imaging feasible today. In the teleseismic configuration, the source is, to first order, a plane wave that impinges on the base of the lithospheric target located below the receiver array. In this setting, FWI aims to exploit not only the forward-scattered waves propagating up to the receivers but also second-order arrivals that are back-scattered from the free surface and the reflectors before being recorded at the surface. FWI requires full-wave modeling methods such as finite-difference or finite-element methods. In this framework, careful design of FWI algorithms is essential to mitigate as much as possible the computational burden of multi-source full-waveform modeling. In this presentation, we review some key specifications that might be considered for a versatile FWI implementation. The first is an abstraction level between the forward and inverse problems that allows different modeling engines to be interfaced with the inversion. This requires the subsurface meshes that are used to perform seismic modeling and to update the subsurface models during inversion to be fully independent, through some back-and-forth projection processes. The subsurface parameterization should be carefully chosen during multi-parameter FWI as it controls the trade-off between parameters of different nature. A versatile FWI algorithm should be designed such that different subsurface parameterizations for the model update can be easily implemented. The gradient of the misfit function should be computed as easily as possible with the adjoint-state method in a parallel environment. This first requires the gradient to be independent of the discretization method that is used to perform seismic modeling. Second, the incident and adjoint wavefields should be computed with the same numerical scheme, even if the forward problem

  20. Application of artificial bee colony (ABC) algorithm in search of optimal release of Aswan High Dam

    NASA Astrophysics Data System (ADS)

    Hossain, Md S.; El-shafie, A.

    2013-04-01

    The paper presents a study on developing an optimum reservoir release policy using the ABC algorithm. The decision maker of a reservoir system always needs a guideline to operate the reservoir in an optimal way. Release curves have been developed for high, medium and low inflow categories that indicate how much water needs to be released in a month given the observed reservoir level (storage condition). The Aswan High Dam of Egypt is considered as the case study. Eighteen years of historical inflow data were used for simulation, and general system performance indices were measured. The application procedure and problem formulation of ABC are very simple, and the algorithm can be used in optimizing reservoir systems. Using the actual historical inflow, the release policy succeeded in meeting demand for about 98% of the total time period.

  1. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  2. On high-order denoising models and fast algorithms for vector-valued images.

    PubMed

    Brito-Loeza, Carlos; Chen, Ke

    2010-06-01

    Variational techniques for gray-scale image denoising have been deeply investigated for many years; however, little research has been done for the vector-valued denoising case and the very few existent works are all based on total-variation regularization. It is known that total-variation models for denoising gray-scaled images suffer from staircasing effect and there is no reason to suggest this effect is not transported into the vector-valued models. High-order models, on the contrary, do not present staircasing. In this paper, we introduce three high-order and curvature-based denoising models for vector-valued images. Their properties are analyzed and a fast multigrid algorithm for the numerical solution is provided. AMS subject classifications: 68U10, 65F10, 65K10. PMID:20172828

  3. Architecture for High Speed Learning of Neural Network using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Terai, Hidekazu

    This paper discusses an architecture for high-speed learning of a neural network (NN) using a genetic algorithm (GA). The proposed architecture avoids local minima by exploiting the GA characteristic of holding several individuals in a population-based search, and achieves high-speed processing by adopting dedicated hardware. To remain as general purpose as a software implementation, the proposed architecture supports flexible genetic operations for the GA and provides both sigmoid and Heaviside activation functions for the NN. Furthermore, the architecture optimizes not only the pipeline of the NN evaluation phase but also hierarchical pipelines across the whole evolutionary phase. Simulation, verification and logic synthesis were performed using a 0.35 μm CMOS standard cell library. Simulation results show that the proposed architecture achieves a 22-fold speedup on average compared with software processing.

  4. High effective algorithm of the detection and identification of substance using the noisy reflected THz pulse

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.

    2015-08-01

    Principal limitations of the standard THz-TDS method for detection and identification are demonstrated under real conditions (at a long distance of about 3.5 m and at a relative humidity above 50%) using neutral substances: a thick paper bag, paper napkins and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even when the THz signals are measured under laboratory conditions (at a distance of 30-40 cm from the receiver and at a relative humidity below 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and is practical to implement, especially for security applications and non-destructive testing.

  5. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-08-19

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
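
    A small sketch of the projection idea described above is given below: the damped Levenberg-Marquardt subproblem is solved iteratively in a Krylov subspace (here via SciPy's LSQR) instead of with a dense QR or SVD factorization. The subspace recycling across damping parameters is omitted, and the Jacobian and residual are synthetic; this is an assumed illustration, not the MADS/Julia implementation.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      def lm_step_lsqr(J, residual, damping):
          """Approximate the Levenberg-Marquardt update dp minimizing
          ||J dp + r||^2 + damping*||dp||^2 with the iterative LSQR solver."""
          sol = lsqr(J, -residual, damp=np.sqrt(damping), atol=1e-8, btol=1e-8)
          return sol[0]

      rng = np.random.default_rng(0)
      J = rng.normal(size=(500, 200))      # stand-in Jacobian (sensitivity matrix)
      r = rng.normal(size=500)             # stand-in residual vector
      dp = lm_step_lsqr(J, r, damping=1e-2)
      print(dp[:5])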

  6. Enhanced ATR algorithm for high resolution multi-band sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2008-04-01

    An improved automatic target recognition (ATR) processing string has been developed. The overall processing string consists of pre-processing, subimage adaptive clutter filtering (SACF), normalization, detection, data regularization, feature extraction, optimal subset feature selection, feature orthogonalization and classification processing blocks. A new improvement was made to the processing string, data regularization, which entails computing the input data mean, clipping the data to a multiple of its mean and scaling it, prior to feature extraction. The classified objects of 3 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution three-frequency band sonar imagery. The ATR processing strings were individually tuned to the corresponding three-frequency band data, making use of the new processing improvement, data regularization, which resulted in a 3:1 reduction in false alarms. Two significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a repeated application of a subset Volterra feature selection / feature orthogonalization / LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the ATR processing strings outperforms baseline summing and single-stage Volterra feature LLRT algorithms, yielding significant improvements over the best single ATR processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate.

  7. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained by processing this multilook data for the high-resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
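
    For reference, an ROC curve of the kind used to characterize these results can be computed from detector confidence scores and ground truth by sweeping the decision threshold; a minimal sketch (illustrative, not the PLFA evaluation code):

```python
import numpy as np

def roc_curve(scores, labels):
    """Return (FPR, TPR) points by sweeping a threshold over the scores.
    scores: detector confidences; labels: 1 for target, 0 for clutter."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    tp = np.cumsum(y)
    fp = np.cumsum(1 - y)
    tpr = tp / max(y.sum(), 1)
    fpr = fp / max(len(y) - y.sum(), 1)
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))
```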

  8. Educational Specifications for the Pojoaque Valley Senior High School.

    ERIC Educational Resources Information Center

    Tonigan, Richard F.; And Others

    The middle school and senior high school of the Pojoaque Valley (New Mexico) School District share many facilities and services. Because of the need for expansion of facilities, some construction projects are budgeted that include remodeling the vocational building, building the music building, and adding built-in equipment to all remodeled and…

  9. A change detection algorithm for retrieving high resolution soil moisture from SMAP radar and radiometer observations

    NASA Astrophysics Data System (ADS)

    Piles, M.; Entekhabi, D.; Camps, A.

    2009-09-01

    Soil moisture is a critical hydrological variable that links the terrestrial water, energy and carbon cycles. Global and regional observations of soil moisture are needed to estimate the water and energy fluxes at the land surface, to quantify the net carbon flux in boreal landscapes, to enhance weather and climate forecast skill and to develop improved flood prediction and drought monitoring capability. Active and passive L-band microwave remote sensing provides a unique ability to monitor global soil moisture over land surfaces with an acceptable spatial resolution and temporal frequency [1]. Radars are capable of a very high spatial resolution (~ 3 km) but, since radar backscatter is highly influenced by surface roughness, vegetation canopy structure and water content, they have a low sensitivity to soil moisture, and the algorithms developed for retrieval of soil moisture from radar backscattering are only valid in low vegetation water content conditions [3]. In contrast, the spatial resolution of radiometers is typically low (~ 40 km), they have a high sensitivity to soil moisture, and the retrieval of soil moisture from radiometers is well established [4]. To overcome the individual limitations of the active and passive approaches, the Soil Moisture Active and Passive (SMAP) mission of NASA, scheduled for launch in the 2010-2013 time frame, is combining these two technologies [2]. The SMAP mission payload consists of an approximately 40-km L-band microwave radiometer measuring hh and vv brightness temperatures and a 3-km L-band synthetic aperture radar sensing backscatter cross-sections at hh, vv and hv polarizations. It will provide global-scale land surface soil moisture observations with a three-day revisit time and its key derived products are: soil moisture at 40-km for hydroclimatology, obtained from the radiometer measurements; soil moisture at 10-km resolution for hydrometeorology obtained by combining the radar and radiometer measurements in a joint

  10. High-resolution over-sampling reconstruction algorithm for a microscanning thermal microscope imaging system

    NASA Astrophysics Data System (ADS)

    Gao, Meijing; Wang, Jingyuan; Xu, Wei; Guan, Congrong

    2016-05-01

    Due to environmental factors, mechanical vibration, alignment error and other factors, the micro-displacements of the four collected images deviate from the standard 2 × 2 micro-scanning positions in our optical micro-scanning thermal microscope imaging system. This degrades the quality of the reconstructed image, so the spatial resolution of the imaging system cannot be improved. To solve this problem and reduce the optical micro-scanning errors, we propose an image reconstruction method based on a second-order Taylor series expansion. The algorithm obtains standard 2 × 2 micro-scanning under-sampled images from four non-standard 2 × 2 micro-scanning under-sampled images and then reconstructs a high-spatial-resolution oversampled image. Simulations and experiments show that the proposed technique can reduce the optical micro-scanning errors and improve the system's spatial resolution. The algorithm has low computational complexity, and it is simple and fast. Furthermore, this technique can be applied to other electro-optical imaging systems to improve their resolution.
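
    The second-order Taylor expansion underlying the method can be sketched as follows, assuming the micro-scanning offset (dx, dy) of each frame has been estimated; this illustrates only the resampling step, not the full reconstruction pipeline described in the abstract:

```python
import numpy as np

def taylor_shift(image, dx, dy):
    """Resample an image at a sub-pixel offset (dx, dy) using a second-order
    Taylor expansion of the intensity surface:
    I(x+dx, y+dy) ~= I + dx*Ix + dy*Iy + 0.5*(dx^2*Ixx + 2*dx*dy*Ixy + dy^2*Iyy)."""
    Iy, Ix = np.gradient(image)          # axis 0 is y (rows), axis 1 is x (columns)
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    Ixy = 0.5 * (Ixy + Iyx)              # symmetrize the mixed derivative
    return (image + dx * Ix + dy * Iy
            + 0.5 * (dx**2 * Ixx + 2 * dx * dy * Ixy + dy**2 * Iyy))
```

    Each frame, shifted back to its ideal half-pixel position in this way, can then be interlaced onto the 2 × 2 grid to form the oversampled image.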

  11. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
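
    The nested use of 1-D solvers can be illustrated with SciPy's adaptive quadrature in place of the GSL routines (an illustrative sketch with a hypothetical integrand, not the SNS analysis code):

```python
from scipy import integrate

def quad4d(f, limits, epsabs=1e-8):
    """Evaluate a 4-D integral by nesting 1-D adaptive quadrature.
    limits is ((a1, b1), ..., (a4, b4)); scipy.integrate.nquad performs
    the same nesting internally."""
    (a1, b1), (a2, b2), (a3, b3), (a4, b4) = limits

    def inner(x1):
        return integrate.quad(
            lambda x2: integrate.quad(
                lambda x3: integrate.quad(
                    lambda x4: f(x1, x2, x3, x4), a4, b4, epsabs=epsabs)[0],
                a3, b3, epsabs=epsabs)[0],
            a2, b2, epsabs=epsabs)[0]

    return integrate.quad(inner, a1, b1, epsabs=epsabs)[0]

# usage: quad4d(lambda x, y, z, w: x * y * z * w, [(0, 1)] * 4) -> 0.0625 (= 1/16)
```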

  12. Range-Specific High-resolution Mesoscale Model Setup

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.

    2013-01-01

    This report summarizes the findings from an AMU task to determine the best model configuration for operational use at the ER and WFF to best predict winds, precipitation, and temperature. The AMU ran test cases in the warm and cool seasons at the ER and for the spring and fall seasons at WFF. For both the ER and WFF, the ARW core outperformed the NMM core. Results for the ER indicate that the combination of the Lin microphysical scheme and the YSU PBL scheme is the optimal model configuration for the ER; it consistently produced the best surface and upper-air forecasts, while performing fairly well for the precipitation forecasts. Both the Ferrier and Lin microphysical schemes in combination with the YSU PBL scheme performed well for WFF in the spring and fall seasons. The AMU has been tasked with a follow-on modeling effort to recommend a local data assimilation (DA) and numerical forecast model design optimized for both the ER and WFF to support space launch activities. The AMU will determine the best software and type of assimilation to use, as well as determine the best grid resolution for the initialization based on spatial and temporal availability of data and the wall clock run-time of the initialization. The AMU will transition from the WRF EMS to NU-WRF, a NASA-specific version of the WRF that takes advantage of unique NASA software and datasets.

  13. Shift and Mean Algorithm for Functional Imaging with High Spatio-Temporal Resolution

    PubMed Central

    Rama, Sylvain

    2015-01-01

    Understanding neuronal physiology requires recording electrical activity in many small and remote compartments such as dendrites, axons or dendritic spines. To do so, electrophysiology has long been the tool of choice, as it allows recording very subtle and fast changes in electrical activity. However, electrophysiological measurements are mostly limited to large neuronal compartments such as the neuronal soma. To overcome these limitations, optical methods have been developed, allowing the monitoring of changes in fluorescence of fluorescent reporter dyes inserted into the neuron, with a spatial resolution theoretically only limited by the dye wavelength and optical devices. However, the temporal and spatial resolving power of functional fluorescence imaging of live neurons is often limited by a necessary trade-off between image resolution, signal-to-noise ratio (SNR) and speed of acquisition. Here, I propose to use a Super-Resolution Shift and Mean (S&M) algorithm previously used in image computing to improve the SNR, time sampling and spatial resolution of acquired fluorescent signals. I demonstrate the benefits of this methodology using two examples: voltage imaging of action potentials (APs) in soma and dendrites of CA3 pyramidal cells and calcium imaging in the dendritic shaft and spines of CA3 pyramidal cells. I show that this algorithm allows the recording of a broad area at low speed in order to achieve a high SNR, and then picking the signal in any small compartment and resampling it at high speed. This method preserves both the SNR and the temporal resolution of the signal, while acquiring the original images at high spatial resolution. PMID:26635526
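
    A generic shift-and-mean accumulation can be sketched as below, assuming the frame shifts are already known in high-resolution pixels; this illustrates the general principle only, and the specific S&M implementation in the paper may differ:

```python
import numpy as np

def shift_and_mean(frames, shifts, factor):
    """Accumulate low-resolution frames on an upsampled grid and average.
    frames: list of 2-D arrays; shifts: (dy, dx) integer offsets on the
    high-resolution grid; factor: upsampling factor."""
    acc = None
    for frame, (dy, dx) in zip(frames, shifts):
        up = np.kron(frame, np.ones((factor, factor)))   # nearest-neighbour upsample
        up = np.roll(np.roll(up, dy, axis=0), dx, axis=1)
        acc = up if acc is None else acc + up
    return acc / len(frames)
```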

  14. High-speed measurement algorithm for the position of holes in a large plane

    NASA Astrophysics Data System (ADS)

    Shi, Yongqiang; Sun, Changku; Wang, Peng; Wang, Zhong; Duan, Hongxu

    2012-12-01

    A coordinate measuring machine (CMM) is widely used to measure the positions of the holes on the top surface of an engine block. However, a CMM requires a strictly controlled environment and cannot be applied to online measurement. Moreover, using a CMM to measure the positions of holes in a large plane takes more than 10 min, which lowers its efficiency. To solve this problem, this paper presents a high-speed measurement algorithm for the position of holes in a large plane based on a flexible datum and a feature neighborhood model. First, two area CCD cameras that grab images of the reference holes of the block are used to establish the flexible datum. Second, different mapping models are built in the neighborhoods of the centers of different holes; these black-box mapping models ignore the intermediate process of camera perspective projection. Finally, the mapping results are computed from feature points at different scales in the neighborhood of the hole centers and combined using a multi-scale weighting algorithm. A calibration target was designed, but it contains only a few feature points, so a new method to generate additional feature points was devised. Compared with the measurement results of the CMM, the maximum position error of the measurement system is 0.025 mm. The relative error is better than 0.025% and the standard deviation of the measurement data is less than 0.010 mm. With a confidence level of 95%, the system measurement uncertainty is better than ±0.020 mm. The measuring time is less than 3 min. The position measurement scheme features high automation and high efficiency, and can be used for online position measurement of engine block holes.

  15. Improved estimates of boreal Fire Radiative Energy using high temporal resolution data and a modified active fire detection algorithm

    NASA Astrophysics Data System (ADS)

    Barrett, Kirsten

    2016-04-01

    Reliable estimates of biomass combusted during wildfires can be obtained from satellite observations of fire radiative power (FRP). Total fire radiative energy (FRE) is typically estimated by integrating instantaneous FRP measurements at the time of orbital satellite overpass or geostationary observation. Remotely sensed FRP products from orbital satellites are usually global in extent, requiring several thresholding and filtering operations to reduce the number of false fire detections. Some filters required for a global product may not be appropriate to fire detection in the boreal forest, resulting in errors of omission and increased data processing times. We evaluate the effect of a boreal-specific active fire detection algorithm on estimates of FRP and FRE. Boreal fires are more likely to escape detection due to lower-intensity smouldering combustion and sub-canopy fires; therefore, improvements in boreal fire detection could substantially reduce the uncertainty of emissions from biomass combustion in the region. High temporal resolution data from geostationary satellites have led to improvements in FRE estimation in tropical and temperate forests, but such a perspective is not possible for high-latitude ecosystems given the equatorial orbit of geostationary observation. The increased density of overpasses in high latitudes from polar-orbiting satellites, however, may provide adequate temporal sampling for estimating FRE.
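
    The FRP-to-FRE step is a simple temporal integration; a sketch follows, with a conversion to biomass that uses the widely cited ~0.368 kg per MJ coefficient from the literature (quoted here for illustration, not taken from this abstract):

```python
import numpy as np

def fire_radiative_energy(times_s, frp_mw):
    """Integrate discrete FRP observations (MW) over time (s) with the
    trapezoidal rule to estimate FRE (MJ)."""
    t = np.asarray(times_s, float)
    p = np.asarray(frp_mw, float)
    return 0.5 * np.sum((t[1:] - t[:-1]) * (p[1:] + p[:-1]))

def biomass_consumed_kg(fre_mj, factor=0.368):
    """Approximate biomass consumed from FRE using a constant conversion
    coefficient (kg of dry biomass per MJ of radiated energy)."""
    return factor * fre_mj
```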

  16. Valuing the Child Health Utility 9D: Using profile case best worst scaling methods to develop a new adolescent specific scoring algorithm.

    PubMed

    Ratcliffe, Julie; Huynh, Elisabeth; Chen, Gang; Stevens, Katherine; Swait, Joffre; Brazier, John; Sawyer, Michael; Roberts, Rachel; Flynn, Terry

    2016-05-01

    In contrast to the recent proliferation of studies incorporating ordinal methods to generate health state values from adults, to date relatively few studies have utilised ordinal methods to generate health state values from adolescents. This paper reports upon a study to apply profile case best worst scaling methods to derive a new adolescent specific scoring algorithm for the Child Health Utility 9D (CHU9D), a generic preference based instrument that has been specifically designed for the estimation of quality adjusted life years for the economic evaluation of health care treatment and preventive programs targeted at young people. A survey was developed for administration in an on-line format in which consenting community based Australian adolescents aged 11-17 years (N = 1982) indicated the best and worst features of a series of 10 health states derived from the CHU9D descriptive system. The data were analyzed using latent class conditional logit models to estimate values (part worth utilities) for each level of the nine attributes relating to the CHU9D. A marginal utility matrix was then estimated to generate an adolescent-specific scoring algorithm on the full health = 1 and dead = 0 scale required for the calculation of QALYs. It was evident that different decision processes were being used in the best and worst choices. Whilst respondents appeared readily able to choose 'best' attribute levels for the CHU9D health states, a large amount of random variability and indeed different decision rules were evident for the choice of 'worst' attribute levels, to the extent that the best and worst data should not be pooled from the statistical perspective. The optimal adolescent-specific scoring algorithm was therefore derived using data obtained from the best choices only. The study provides important insights into the use of profile case best worst scaling methods to generate health state values with adolescent populations. PMID:27060541

  17. High Specific Power Motors in LN2 and LH2

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Jansen, Ralph H.; Trudell, Jeffrey J.

    2007-01-01

    A switched reluctance motor has been operated in liquid nitrogen (LN2) with a power density as high as that reported for any motor or generator. The high performance stems from the low resistivity of Cu at LN2 temperature and from the geometry of the windings, the combination of which permits steady-state rms current density up to 7000 A/cm2, about 10 times that possible in coils cooled by natural convection at room temperature. The Joule heating in the coils is conducted to the end turns for rejection to the LN2 bath. Minimal heat rejection occurs in the motor slots, preserving that region for conductor. In the end turns, the conductor layers are spaced to form a heat-exchanger-like structure that permits nucleate boiling over a large surface area. Although tests were performed in LN2 for convenience, this motor was designed as a prototype for use with liquid hydrogen (LH2) as the coolant. End-cooled coils would perform even better in LH2 because of further increases in copper electrical and thermal conductivities. Thermal analyses comparing LN2 and LH2 cooling are presented verifying that end-cooled coils in LH2 could be either much longer or could operate at higher current density without thermal runaway than in LN2.

  18. High Specific Power Motors in LN2 and LH2

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Jansen, Ralph H.; Trudell, Jeffrey J.

    2007-01-01

    A switched reluctance motor has been operated in liquid nitrogen (LN2) with a power density as high as that reported for any motor or generator. The high performance stems from the low resistivity of Cu at LN2 temperature and from the geometry of the windings, the combination of which permits steady-state rms current density up to 7000 A/sq cm, about 10 times that possible in coils cooled by natural convection at room temperature. The Joule heating in the coils is conducted to the end turns for rejection to the LN2 bath. Minimal heat rejection occurs in the motor slots, preserving that region for conductor. In the end turns, the conductor layers are spaced to form a heat-exchanger-like structure that permits nucleate boiling over a large surface area. Although tests were performed in LN2 for convenience, this motor was designed as a prototype for use with liquid hydrogen (LH2) as the coolant. End-cooled coils would perform even better in LH2 because of further increases in copper electrical and thermal conductivities. Thermal analyses comparing LN2 and LH2 cooling are presented verifying that end-cooled coils in LH2 could be either much longer or could operate at higher current density without thermal runaway than in LN2.

  19. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  20. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  1. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    SciTech Connect

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  2. Shock focusing flow field simulated by a high-resolution numerical algorithm

    NASA Astrophysics Data System (ADS)

    Jung, Y. G.; Chang, K. S.

    2012-11-01

    A shock-focusing concave reflector is a simple and effective tool for obtaining a high-pressure pulse wave near the physical focal point. In the past, many optical images were obtained through experimental studies. However, measurement of field variables is not easy because the phenomenon is of short duration and the magnitude of the shock waves varies from pulse to pulse due to poor reproducibility. Using a wave propagation algorithm and the Cartesian embedded boundary method, we have successfully obtained numerical schlieren images that resemble the experimental results. From the numerical results, various field variables, such as pressure, density and vorticity, become available for the better understanding and design of shock-focusing devices.

  3. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  4. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    The terrestrial laser scanning (TLS) technique is becoming a common tool in the geosciences, with clear applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters interact with scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in a MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and point of view on the Iterative Closest Point (ICP) alignment, and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high-resolution point clouds in order to model small changes in different environments
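
    One of the tested procedures, ICP alignment, can be summarized by the following minimal sketch (nearest-neighbour correspondences plus an SVD-based rigid fit; an illustration of the standard algorithm, not the simulator's MATLAB code):

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(A, B):
    """Least-squares rotation R and translation t mapping points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(source, target, n_iter=50, tol=1e-9):
    """Basic point-to-point ICP: iterate nearest-neighbour matching and
    rigid fitting until the mean residual stops improving."""
    tree = cKDTree(target)
    src = source.copy()
    prev = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)
        R, t = rigid_fit(src, target[idx])
        src = src @ R.T + t
        if abs(prev - dist.mean()) < tol:
            break
        prev = dist.mean()
    return src, dist.mean()
```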

  5. Theory and algorithms for a quasi-optical launcher design method for high-frequency gyrotrons

    NASA Astrophysics Data System (ADS)

    Ungku Farid, Ungku Fazri

    Gyrotrons are vacuum tubes that can generate high amounts of coherent high-frequency microwave radiation used for plasma heating, breakdown and current drive, and other applications. The gyrotron output power is not directly usable, and must be converted to either a free-space circular TEM00 Gaussian beam or a HE11 corrugated waveguide mode by employing mode converters. Quasi-optical mode converters (QOMC) achieve this by utilizing a launcher (a type of waveguide antenna) and a mirror system. Adding perturbations to smooth-wall launchers can produce a better Gaussian shaped radiation pattern with smaller side lobes and less diffraction, and this improvement leads to higher power efficiency in the QOMC. The oversize factor (OF) is defined as the ratio of the operating to cutoff frequency of the launcher, and the higher this value is, the more difficult it is to obtain good launcher designs. This thesis presents a new method for the design of any perturbed-wall TE 0n launcher that is not too highly oversized, and it is an improvement over previous launcher design methods that do not work well for highly oversized launchers. This new launcher design method is a fusion of three different methods, which are the Iterative Stratton-Chu algorithm (used for fast and accurate waveguide field propagations), the Katsenelenbaum-Semenov phase-correcting optimization algorithm, and Geometrical Optics. Three different TE02 launchers were designed using this new method, 1) a highly oversized (2.49 OF) 60 GHz launcher as proof-of-method, 2) a highly oversized (2.66 OF) 28 GHz launcher for possible use in the quasihelically symmetric stellarator (HSX) transmission line at the University of Wisconsin -- Madison, and 3) a compact internal 94 GHz 1.54 OF launcher for use in a compact gyrotron. Good to excellent results were achieved, and all launcher designs were independently verified with Surf3d, a method-of-moments based software. Additionally, the corresponding mirror system for

  6. Construction of high-order force-gradient algorithms for integration of motion in classical and quantum systems.

    PubMed

    Omelyan, I P; Mryglod, I M; Folk, R

    2002-08-01

    A systematic approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially as the order of the generated schemes increases. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational cost, allow unphysical deviations in the total energy to be reduced by up to a factor of 100 000 with respect to those of the standard fourth-order-based iteration approach. PMID:12241312
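
    The composition idea, building higher-order schemes from a symmetric second-order step, can be illustrated with the standard Yoshida/Suzuki triple jump; note that this sketch uses only forces, not the force gradients that give the paper's algorithms their additional accuracy:

```python
import numpy as np

def leapfrog(x, v, force, dt):
    """Symmetric second-order (velocity-Verlet) step."""
    v = v + 0.5 * dt * force(x)
    x = x + dt * v
    v = v + 0.5 * dt * force(x)
    return x, v

def fourth_order_step(x, v, force, dt):
    """Triple-jump composition S2(w1*dt) S2(w0*dt) S2(w1*dt) of the
    second-order step, giving a fourth-order symplectic integrator."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1
    for w in (w1, w0, w1):
        x, v = leapfrog(x, v, force, w * dt)
    return x, v

# harmonic oscillator check: the energy error scales as dt**4
x, v = 1.0, 0.0
for _ in range(10000):
    x, v = fourth_order_step(x, v, lambda q: -q, 0.01)
```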

  7. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    SciTech Connect

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

  8. The Rice coding algorithm achieves high-performance lossless and progressive image compression based on the improving of integer lifting scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that the lossless image compression performance is much better than Huffman, Zip, lossless JPEG and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7% and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's and its time efficiency is improved by about 162%; the decoder is about 12.3 times faster than SPIHT's and its time efficiency is improved by about 148%. The algorithm does not require the maximum number of wavelet transform levels and achieves high coding efficiency when the number of wavelet transform levels is greater than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and enables progressive transmission coding and decoding.
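
    For reference, the basic Golomb-Rice code that the scheme builds on can be written in a few lines (a textbook sketch, not the authors' improved coder):

```python
def rice_encode(values, k):
    """Golomb-Rice code with parameter k (divisor 2**k): each non-negative
    integer is coded as its quotient in unary followed by its remainder in
    k binary bits.  Signed prediction residuals are usually mapped first
    via v >= 0 -> 2*v and v < 0 -> -2*v - 1."""
    out = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        out.append('1' * q + '0')                  # unary quotient plus terminator
        if k:
            out.append(format(r, '0%db' % k))      # k-bit binary remainder
    return ''.join(out)

# rice_encode([0, 3, 9], 2) codes the values as '000', '011' and '11001'
```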

  9. [The Change Detection of High Spatial Resolution Remotely Sensed Imagery Based on OB-HMAD Algorithm and Spectral Features].

    PubMed

    Chen, Qiang; Chen, Yun-hao; Jiang, Wei-guo

    2015-06-01

    High spatial resolution remotely sensed imagery contains abundant detailed information about the earth surface, and multi-temporal change detection of such imagery can capture the variations of geographical units. For high spatial resolution remotely sensed imagery, traditional remote sensing change detection algorithms have obvious shortcomings. In this paper, drawing on the object-based image analysis idea, we propose a semi-automatic threshold selection algorithm named OB-HMAD (object-based hybrid MAD), based on object-based image analysis and the multivariate alteration detection (MAD) algorithm, which brings the spectral features of remotely sensed imagery into the field of object-based change detection. Additionally, the OB-HMAD algorithm has been compared with other threshold segmentation algorithms in the change detection experiment. First, we obtained the image objects by multi-resolution segmentation. Second, we obtained the object-based difference image using MAD and minimum noise fraction (MNF) rotation to improve the SNR of the image objects. Then, the changed objects or areas were classified using the histogram curvature analysis (HCA) method for semi-automatic threshold selection, which determines the threshold by calculating the maximum curvature of the histogram, so the HCA algorithm offers better automation than other threshold segmentation algorithms. Finally, the change detection results were validated using a confusion matrix with field sample data. WorldView-2 imagery of Beijing from 2012 and 2013 was used as a case study to validate the proposed OB-HMAD algorithm. The experimental results indicated that the OB-HMAD algorithm, which integrates multi-channel spectral information, can be used effectively for multi-temporal change detection of high-resolution remotely sensed imagery, and that it largely resolves the "salt and pepper" problem that commonly affects pixel-based change
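
    The histogram curvature analysis step can be sketched as follows; this is a simplified illustration of the idea of selecting the point of maximum histogram curvature, and the published OB-HMAD implementation may differ in its smoothing and search range:

```python
import numpy as np

def hca_threshold(change_values, bins=256, smooth=7):
    """Pick a change/no-change threshold at the point of maximum curvature
    of the smoothed histogram, kappa = |h''| / (1 + h'**2)**1.5, searching
    above the histogram mode where the change tail is assumed to lie."""
    h, edges = np.histogram(change_values, bins=bins)
    h = np.convolve(h.astype(float), np.ones(smooth) / smooth, mode='same')
    h1 = np.gradient(h)
    h2 = np.gradient(h1)
    kappa = np.abs(h2) / (1.0 + h1 ** 2) ** 1.5
    centres = 0.5 * (edges[:-1] + edges[1:])
    start = np.argmax(h)                      # restrict the search to the upper tail
    return centres[start + np.argmax(kappa[start:])]
```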

  10. Evaluation of an algorithm for integrated management of childhood illness in an area of Kenya with high malaria transmission.

    PubMed Central

    Perkins, B. A.; Zucker, J. R.; Otieno, J.; Jafari, H. S.; Paxton, L.; Redd, S. C.; Nahlen, B. L.; Schwartz, B.; Oloo, A. J.; Olango, C.; Gove, S.; Campbell, C. C.

    1997-01-01

    In 1993, the World Health Organization completed the development of a draft algorithm for the integrated management of childhood illness (IMCI), which deals with acute respiratory infections, diarrhoea, malaria, measles, ear infections, malnutrition, and immunization status. The present study compares the performance of a minimally trained health worker to make a correct diagnosis using the draft IMCI algorithm with that of a fully trained paediatrician who had laboratory and radiological support. During the 14-month study period, 1795 children aged between 2 months and 5 years were enrolled from the outpatient paediatric clinic of Siaya District Hospital in western Kenya; 48% were female and the median age was 13 months. Fever, cough and diarrhoea were the most common chief complaints presented by 907 (51%), 395 (22%), and 199 (11%) of the children, respectively; 86% of the chief complaints were directly addressed by the IMCI algorithm. A total of 1210 children (67%) had Plasmodium falciparum infection and 1432 (80%) met the WHO definition for anaemia (haemoglobin < 11 g/dl). The sensitivities and specificities for classification of illness by the health worker using the IMCI algorithm compared to diagnosis by the physician were: pneumonia (97% sensitivity, 49% specificity); dehydration in children with diarrhoea (51%, 98%); malaria (100%, 0%); ear problem (98%, 2%); nutritional status (96%, 66%); and need for referral (42%, 94%). Detection of fever by laying a hand on the forehead was both sensitive and specific (91%, 77%). There was substantial clinical overlap between pneumonia and malaria (n = 895), and between malaria and malnutrition (n = 811). Based on the initial analysis of these data, some changes were made in the IMCI algorithm. This study provides important technical validation of the IMCI algorithm, but the performance of health workers should be monitored during the early part of their IMCI training. PMID:9529716

  11. Examination of a genetic algorithm for the application in high-throughput downstream process development.

    PubMed

    Treier, Katrin; Berg, Annette; Diederich, Patrick; Lang, Katharina; Osberghaus, Anna; Dismer, Florian; Hubbuch, Jürgen

    2012-10-01

    Compared to traditional strategies, application of high-throughput experiments combined with optimization methods can potentially speed up downstream process development and increase our understanding of processes. In contrast to the method of Design of Experiments in combination with response surface analysis (RSA), optimization approaches like genetic algorithms (GAs) can be applied to identify optimal parameter settings in multidimensional optimization tasks. In this article, the performance of a GA was investigated using parameters applicable to high-throughput downstream process development. The influence of population size, the design of the initial generation and selection pressure on the optimization results was studied. To mimic typical experimental data, four mathematical functions were used for an in silico evaluation. The influence of GA parameters was minor on landscapes with only one optimum. On landscapes with several optima, parameters had a significant impact on GA performance and success in finding the global optimum. Premature convergence increased as the number of parameters and noise increased. RSA was shown to be comparable or superior for simple systems and low to moderate noise. For complex systems or high noise levels, RSA failed, while GA optimization represented a robust tool for process optimization. Finally, the effect of different objective functions is shown exemplarily for a refolding optimization of lysozyme. PMID:22700464
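
    A minimal real-coded GA of the kind examined (tournament selection, uniform crossover, Gaussian mutation) looks as follows; all parameter values are illustrative and are not those studied in the paper:

```python
import numpy as np

def genetic_algorithm(objective, bounds, pop_size=40, generations=60,
                      tournament=3, mutation_rate=0.1, seed=0):
    """Maximise `objective` over box-constrained real-valued parameters."""
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds, float).T
    pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.array([objective(ind) for ind in pop])

        def select():
            idx = rng.integers(0, pop_size, tournament)   # tournament selection
            return pop[idx[np.argmax(fit[idx])]]

        children = []
        for _ in range(pop_size):
            a, b = select(), select()
            child = np.where(rng.random(len(bounds)) < 0.5, a, b)        # uniform crossover
            mask = rng.random(len(bounds)) < mutation_rate
            child = child + mask * rng.normal(0.0, 0.1 * (high - low))   # Gaussian mutation
            children.append(np.clip(child, low, high))
        pop = np.array(children)
    fit = np.array([objective(ind) for ind in pop])
    return pop[np.argmax(fit)], fit.max()
```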

  12. A vectorized track finding and fitting algorithm in experimental high energy physics using a cyber-205

    NASA Astrophysics Data System (ADS)

    Georgiopoulos, C. H.; Goldman, J. H.; Hodous, M. F.

    1987-11-01

    We report on a fully vectorized track finding and fitting algorithm that has been used to reconstruct charged particle trajectories in a multiwire chamber system. This algorithm is currently used for data analysis of the E-711 experiment at Fermilab. The program is written for a CYBER 205, on which the average event takes 13.5 ms to process, compared to 6.7 s for an optimized scalar algorithm on a VAX-11/780.

  13. Extended nonlinear chirp scaling algorithm for highly squinted missile-borne synthetic aperture radar with diving acceleration

    NASA Astrophysics Data System (ADS)

    Liu, Rengli; Wang, Yanfei

    2016-04-01

    An extended nonlinear chirp scaling (NLCS) algorithm is proposed to process data of highly squinted, high-resolution, missile-borne synthetic aperture radar (SAR) diving with a constant acceleration. Due to the complex diving movement, the traditional signal model and focusing algorithm are no longer suited for missile-borne SAR signal processing. Therefore, an accurate range equation is presented, named the equivalent hyperbolic range model (EHRM), which is more accurate and concise than the conventional fourth-order polynomial range equation. Based on the EHRM, a two-dimensional point target reference spectrum is derived, and an extended NLCS algorithm for missile-borne SAR image formation is developed. In the algorithm, a linear range walk correction is used to significantly remove the range-azimuth cross coupling, and azimuth NLCS processing is adopted to solve the azimuth space-variant focusing problem. Moreover, the operations of the proposed algorithm are carried out without any interpolation, so the computational load is small. Finally, the simulation results and real-data processing results validate the proposed focusing algorithm.

  14. High-order derivative spectroscopy for selecting spectral regions and channels for remote sensing algorithm development

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    1999-12-01

    A remote sensing reflectance model, which describes the transfer of irradiant light within a plant canopy or water column, has previously been used to simulate the nadir-viewing reflectance of vegetation canopies and leaves under solar or artificial illumination, as well as water surface reflectance. Wavelength-dependent features such as canopy reflectance, leaf absorption and canopy bottom reflectance, as well as water absorption and water bottom reflectance, have been used to simulate or generate synthetic canopy and water surface reflectance signatures. This paper describes how derivative spectroscopy can be utilized to invert the synthetic or modeled as well as measured reflectance signatures, with the goal of selecting the optimal spectral channels or regions for these environmental media. Specifically, in this paper synthetic and measured reflectance signatures are used for selecting vegetative dysfunction variables for different plant species. The measured reflectance signatures, as well as model-derived or synthetic signatures, are processed using extremely fast higher-order derivative processing techniques which filter the synthetic/modeled or measured spectra and automatically select the optimal channels for automatic and direct algorithm application. The higher-order derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SPR) approach based upon remote sensing science and radiative transfer theory. Thus the technique described, unlike other signal processing techniques being developed for hyperspectral signatures and associated imagery, is based upon radiative transfer theory instead of statistical or purely mathematical operational techniques such as wavelets.
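
    One common way to compute smoothed higher-order derivatives of a reflectance spectrum is a Savitzky-Golay filter; the sketch below is a generic stand-in for that kind of derivative processing, not the TDDS-SPR technique described in the paper:

```python
import numpy as np
from scipy.signal import savgol_filter

def spectral_derivative(reflectance, band_spacing_nm, deriv=4,
                        window=21, polyorder=5):
    """Smoothed high-order derivative of a 1-D reflectance signature;
    peaks in the derivative highlight candidate bands/channels."""
    return savgol_filter(np.asarray(reflectance, float), window_length=window,
                         polyorder=polyorder, deriv=deriv, delta=band_spacing_nm)
```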

  15. A robust jet reconstruction algorithm for high-energy lepton colliders

    NASA Astrophysics Data System (ADS)

    Boronat, M.; Fuster, J.; García, I.; Ros, E.; Vos, M.

    2015-11-01

    We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results from a detailed Monte Carlo simulation of ttbar and ZZ production at future linear e+e- colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background than the classical algorithms used at previous e+e- colliders.

  16. Highly specific expression of luciferase gene in lungs of naive nude mice directed by prostate-specific antigen promoter

    SciTech Connect

    Li Hongwei; Li Jinzhong; Helm, Gregory A.; Pan Dongfeng . E-mail: Dongfeng_pan@yahoo.com

    2005-09-09

    The PSA promoter has demonstrated utility for tissue-specific toxic gene therapy in prostate cancer models. Characterization of foreign gene overexpression elicited by the PSA promoter in normal animals should help evaluate therapy safety. Here we constructed an adenovirus vector (AdPSA-Luc) containing the firefly luciferase gene under the control of the 5837 bp long prostate-specific antigen promoter. A charge-coupled device video camera was used to non-invasively image expression of firefly luciferase in nude mice on days 3, 7 and 11 after injection of 2 x 10^9 PFU of AdPSA-Luc virus via the tail vein. The result showed highly specific expression of the luciferase gene in the lungs of mice from day 7. The finding indicates potential limitations of suicide gene therapy of prostate cancer based on the selectivity of the PSA promoter. By contrast, it has encouraging implications for further development of vectors based on the PSA promoter to enable gene therapy for pulmonary diseases.

  17. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality

    PubMed Central

    Wang, Xueyi

    2011-01-01

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the searching for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces. PMID:22247818
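
    The core pruning idea can be sketched compactly: cluster the training set once, then at query time visit clusters nearest-first and skip any point whose triangle-inequality lower bound already exceeds the current k-th best distance. This is a simplified sketch of the kMkNN idea, omitting the early-exit refinements of the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_index(X, n_clusters):
    """Buildup stage: k-means clustering plus per-point distances to centres."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    clusters = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        clusters.append((idx, d))                 # member indices and d(p, centre)
    return km.cluster_centers_, clusters

def knn_query(q, X, centres, clusters, k):
    """Searching stage: exact k nearest neighbours with triangle-inequality pruning."""
    d_qc = np.linalg.norm(centres - q, axis=1)
    best = []                                     # sorted list of (distance, index)
    for c in np.argsort(d_qc):                    # visit nearest cluster first
        idx, d_pc = clusters[c]
        for i, dpc in zip(idx, d_pc):
            # triangle inequality: d(q, p) >= |d(q, c) - d(p, c)|
            if len(best) == k and abs(d_qc[c] - dpc) >= best[-1][0]:
                continue                          # cannot beat the current k-th best
            d = np.linalg.norm(X[i] - q)
            if len(best) < k:
                best.append((d, i)); best.sort()
            elif d < best[-1][0]:
                best[-1] = (d, i); best.sort()
    return best
```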

  18. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    SciTech Connect

    Xiao, Jianyuan; Qin, Hong; Liu, Jian; He, Yang; Zhang, Ruili; Sun, Yajuan

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv: 1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.

  19. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    SciTech Connect

    Xiao, Jianyuan; Liu, Jian; He, Yang; Zhang, Ruili; Qin, Hong; Sun, Yajuan

    2015-11-15

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave.

  20. High-performance liquid chromatography coupled to mass spectrometry methodology for analyzing site-specific N-glycosylation patterns.

    PubMed

    Ozohanics, Oliver; Turiák, Lilla; Puerta, Angel; Vékey, Károly; Drahos, László

    2012-10-12

    Analysis of protein glycosylation is a major challenge in biochemistry; here we present a nano-UHPLC-MS(MS) based methodology suitable for determining site-specific N-glycosylation patterns. A few pmol of glycoprotein is sufficient to determine glycosylation patterns (which opens the way for biomedical applications), and the analysis requires at least two separate chromatographic runs: one using tandem mass spectrometry (for structure identification), the other using single-stage MS mode (for semi-quantitation). Analysis relies heavily on data processing. The previously developed GlycoMiner algorithm and software were used to identify glycopeptides in MS/MS spectra. We have developed a new algorithm and software (GlycoPattern), which evaluates single-stage mass spectra, both in terms of glycopeptide identification (for minor glycoforms) and semi-quantitation. Identification of glycopeptide structures based on MS/MS analysis has a false positive rate of 1%. Minor glycoforms (when sensitivity is insufficient to obtain an MS/MS spectrum) can be identified in single-stage MS using GlycoPattern, but in such cases the false positive rate increases to 5%. Glycosylation is studied at the glycopeptide level (i.e. following proteolytic digestion). This way the sugar chains can be unequivocally assigned to a given glycosylation site (site-specific glycosylation pattern). Glycopeptide analysis has the further advantage that protein-specific glycosylation patterns can be identified in complex mixtures and not only in purified samples. This opens the way for medium-to-high-throughput analysis of glycosylation. Specific examples of site-specific glycosylation patterns of alpha-1-acid glycoprotein, haptoglobin and a therapeutic monoclonal antibody, Infliximab, are also discussed. PMID:22677411

  1. Synthesis of a high specific activity methyl sulfone tritium isotopologue of fevipiprant (NVP-QAW039).

    PubMed

    Luu, Van T; Goujon, Jean-Yves; Meisterhans, Christian; Frommherz, Matthias; Bauer, Carsten

    2015-05-15

    The synthesis of a triply tritiated isotopologue of the CRTh2 antagonist NVP-QAW039 (fevipiprant) with a specific activity >3 TBq/mmol is described. Key to the high specific activity is the methylation of a bench-stable dimeric disulfide precursor that is reduced in situ to the corresponding thiol monomer and methylated with [(3)H3]MeONos, which itself has a high specific activity. The high specific activity of the tritiated active pharmaceutical ingredient obtained by this build-up approach is discussed in light of the specific activity usually to be expected if hydrogen-tritium exchange methods were applied. PMID:25881897

  2. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final report

    SciTech Connect

    Du, Q.

    1997-06-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. The work so far has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and of the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models they have considered include a time dependent Ginzburg-Landau model, a variable thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations.

  3. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final technical report

    SciTech Connect

    Gunzburger, M.D.; Peterson, J.S.

    1998-04-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. Their work has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and of the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models the authors have considered include a time dependent Ginzburg-Landau model, a variable thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations.

  4. Genetic algorithm-based feature selection in high-resolution NMR spectra

    PubMed Central

    Cho, Hyun-Woo; Jeong, Myong K.; Park, Youngja; Ziegler, Thomas R.; Jones, Dean P.

    2011-01-01

    High-resolution nuclear magnetic resonance (NMR) spectroscopy has provided a new means for detection and recognition of metabolic changes in biological systems in response to pathophysiological stimuli and to the intake of toxins or nutrition. To identify meaningful patterns from NMR spectra, various statistical pattern recognition methods have been applied to reduce their complexity and uncover implicit metabolic patterns. In this paper, we present a genetic algorithm (GA)-based feature selection method to determine the major metabolite features that play a significant role in discriminating samples among different conditions in high-resolution NMR spectra. In addition, an orthogonal signal filter was employed as a preprocessor of NMR spectra in order to remove any unwanted variation of the data that is unrelated to the discrimination of different conditions. The results of k-nearest neighbors and the partial least squares discriminant analysis of the experimental NMR spectra from human plasma showed the potential advantage of the features obtained from GA-based feature selection combined with an orthogonal signal filter. PMID:21472035

  5. An inverse kinematics algorithm for a highly redundant variable-geometry-truss manipulator

    NASA Technical Reports Server (NTRS)

    Naccarato, Frank; Hughes, Peter

    1989-01-01

    A new class of robotic arm consists of a periodic sequence of truss substructures, each of which has several variable-length members. Such variable-geometry-truss manipulators (VGTMs) are inherently highly redundant and promise a significant increase in dexterity over conventional anthropomorphic manipulators. This dexterity may be exploited for both obstacle avoidance and controlled deployment in complex workspaces. The inverse kinematics problem for such unorthodox manipulators, however, becomes complex because of the large number of degrees of freedom, and conventional solutions to the inverse kinematics problem become inefficient because of the high degree of redundancy. A solution is presented to this problem based on a spline-like reference curve for the manipulator's shape. Such an approach has a number of advantages: (1) direct, intuitive manipulation of shape; (2) reduced calculation time; and (3) direct control over the effective degree of redundancy of the manipulator. Furthermore, although the algorithm was developed primarily for variable-geometry-truss manipulators, it is general enough for application to a number of manipulator designs.

  6. comets (Constrained Optimization of Multistate Energies by Tree Search): A Provable and Efficient Protein Design Algorithm to Optimize Binding Affinity and Specificity with Respect to Sequence.

    PubMed

    Hallen, Mark A; Donald, Bruce R

    2016-05-01

    Practical protein design problems require designing sequences with a combination of affinity, stability, and specificity requirements. Multistate protein design algorithms model multiple structural or binding "states" of a protein to address these requirements. comets provides a new level of versatile, efficient, and provable multistate design. It provably returns the minimum with respect to sequence of any desired linear combination of the energies of multiple protein states, subject to constraints on other linear combinations. Thus, it can target nearly any combination of affinity (to one or multiple ligands), specificity, and stability (for multiple states if needed). Empirical calculations on 52 protein design problems showed comets is far more efficient than the previous state of the art for provable multistate design (exhaustive search over sequences). comets can handle a very wide range of protein flexibility and can enumerate a gap-free list of the best constraint-satisfying sequences in order of objective function value. PMID:26761641

  7. A non-device-specific approach to display characterization based on linear, nonlinear, and hybrid search algorithms.

    PubMed

    Ban, Hiroshi; Yamamoto, Hiroki

    2013-01-01

    In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee that the method is applicable to other types of display devices such as liquid crystal displays (LCD) and digital light processing (DLP) displays. We therefore tested the applicability of the standard method to these kinds of new devices and found that it was not valid for them. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free. PMID:23729771
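    The search-based idea can be conveyed with a minimal sketch: assuming only that a display channel's luminance is monotonic in its input value, a bisection search against a photometer reading finds the input that produces a target luminance without any device model. The `measure` callback and the gamma-like stand-in below are assumptions for illustration; the actual Mcalibrator2 routines estimate full chromaticity, not just a single channel's luminance.

    ```python
    def search_input_value(measure, target, lo=0, hi=255):
        """Bisection over one channel's 8-bit input value, assuming the (unknown)
        display response measure(v) is monotonically increasing in v."""
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if measure(mid) < target:
                lo = mid
            else:
                hi = mid
        # return whichever endpoint lands closer to the target reading
        return hi if abs(measure(hi) - target) < abs(measure(lo) - target) else lo

    # stand-in for a photometer: a gamma-like response with gamma = 2.2
    fake_photometer = lambda v: 100.0 * (v / 255.0) ** 2.2
    print(search_input_value(fake_photometer, 30.0))   # input value producing ~30 cd/m^2
    ```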

  8. Offset time decision (OTD) algorithm for guaranteeing the requested QoS of high priority traffic in OBS networks

    NASA Astrophysics Data System (ADS)

    So, Won-Ho; Cha, Yun-Ho; Roh, Sun-Sik; Kim, Young-Chon

    2001-10-01

    In this paper, we propose the Offset Time Decision (OTD) algorithm for supporting QoS in optical networks based on Optical Burst Switching (OBS), a new switching paradigm, and evaluate its performance. The proposed algorithm selects a reasonable offset time that guarantees the Burst Loss Rate (BLR) of high-priority traffic by considering the network traffic load and the number of wavelengths. To design an effective OTD algorithm, we first derive a new burst-loss formula that includes the effect of the offset time of the high-priority class. Because deciding the offset time for a requested BLR would require inverting this formula, it cannot be used directly. We therefore define a Heuristic Loss Formula (HLF), based on the new burst-loss formula and a proportional relation reflecting its characteristics, and present the OTD algorithm, which decides the offset time using the HLF. Simulation results show that the requested BLR of high-priority traffic is guaranteed under various traffic loads.

  9. Genetic algorithm based optimization of pulse profile for MOPA based high power fiber lasers

    NASA Astrophysics Data System (ADS)

    Zhang, Jiawei; Tang, Ming; Shi, Jun; Fu, Songnian; Li, Lihua; Liu, Ying; Cheng, Xueping; Liu, Jian; Shum, Ping

    2015-03-01

    Although the Master Oscillator Power-Amplifier (MOPA) based fiber laser has received much attention for the laser marking process due to its large tunability of pulse duration (from 10 ns to 1 ms), repetition rate (100 Hz to 500 kHz), high peak power, and extraordinary heat dissipating capability, the output pulse deformation due to the saturation effect of the fiber amplifier is detrimental for many applications. We proposed and demonstrated that, by utilizing a Genetic Algorithm (GA) based optimization technique, the input pulse profile from the master oscillator (current-driven laser diode) could be conveniently optimized to achieve a targeted output pulse shape according to real parameter constraints. In this work, an Yb-doped high power fiber amplifier is considered and a 200 ns square-shaped pulse profile is the optimization target. Since an input pulse with a longer leading edge and shorter trailing edge can compensate the saturation effect, linear, quadratic, and cubic polynomial functions are used to describe the input pulse with a limited number of unknowns (<5). The coefficients of the polynomial functions are the optimization objects. With reasonable cost and hardware limitations, the cubic input pulse with 4 coefficients is found to be the best, as the output amplified pulse can achieve excellent flatness within the square shape. Considering the bandwidth constraint of practical electronics, we examined the high-frequency cut-off effect on the input pulses and found that the optimized cubic input pulses with 300 MHz bandwidth are still acceptable for the required amplified output pulse, so it is feasible to build such a pulse generator in real applications.
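    A minimal sketch of such an optimization loop is given below, in normalized units and with a toy saturable-gain amplifier model that is purely an assumption standing in for a real Yb-doped amplifier simulation. The GA tunes the four cubic coefficients so that the amplified output is as flat as possible over the pulse window.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 400)                 # normalized 200 ns window
    dt = t[1] - t[0]

    def input_pulse(c):
        """Cubic-polynomial input profile, clipped to non-negative power."""
        return np.clip(c[0] + c[1] * t + c[2] * t**2 + c[3] * t**3, 0.0, None)

    def amplify(p_in, g0=20.0, e_sat=5.0):
        """Toy saturable-gain model (assumption): gain decays as energy is extracted."""
        out, e = np.empty_like(p_in), 0.0
        for i, p in enumerate(p_in):
            out[i] = g0 * np.exp(-e / e_sat) * p
            e += out[i] * dt
        return out

    def fitness(c):
        out = amplify(input_pulse(c))
        return -np.std(out) / (np.mean(out) + 1e-12)   # flatter output -> higher fitness

    pop = rng.uniform(0.0, 1.0, size=(60, 4))          # population of coefficient sets
    for _ in range(150):
        f = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(f)[::-1][:20]]
        kids = []
        while len(kids) < len(pop) - len(parents):
            a, b = parents[rng.integers(20, size=2)]
            child = np.where(rng.random(4) < 0.5, a, b)                        # uniform crossover
            child = child + rng.normal(0.0, 0.05, 4) * (rng.random(4) < 0.3)   # mutation
            kids.append(child)
        pop = np.vstack([parents, kids])

    best = max(pop, key=fitness)
    print("best cubic coefficients:", np.round(best, 3))
    ```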

  10. A comparative study of three simulation optimization algorithms for solving high dimensional multi-objective optimization problems in water resources

    NASA Astrophysics Data System (ADS)

    Schütze, Niels; Wöhling, Thomas; de Play, Michael

    2010-05-01

    Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose, multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO when solving three real case Multi-objective Optimization Problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems considering their formulations ranging from 40 up to 120 decision variables and 2 to 4 objectives. The computational effort required by each algorithm in order to reach the true Pareto front is also analyzed.

  11. Optimized design on condensing tubes high-speed TIG welding technology magnetic control based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming

    2013-05-01

    An orthogonal experiment combined with a multivariate nonlinear regression equation was used to quantify the influence of an external transverse magnetic field and the Ar flow rate on weld quality when welding condenser pipes by high-speed argon tungsten-arc welding (TIG for short). With the magnetic induction and Ar flow rate as the design variables and the tensile strength of the weld as the objective function, an optimal design was carried out on the basis of genetic algorithm theory. The design variables were constrained according to the requirements of physical production, and the genetic algorithm in MATLAB was used for the computation. A comparison between the optimized results and the experimental parameters was made. The results showed that the genetic algorithm can select optimal process parameters for high-speed welding when many design variables are involved, and that the optimized parameters agreed with the experimental results.

  12. K-Boost: a scalable algorithm for high-quality clustering of microarray gene expression data.

    PubMed

    Geraci, Filippo; Leoncini, Mauro; Montangero, Manuela; Pellegrini, Marco; Renda, M Elena

    2009-06-01

    Microarray technology for profiling gene expression levels is a popular tool in modern biological research. Applications range from tissue classification to the detection of metabolic networks, from drug discovery to time-critical personalized medicine. Given the increase in size and complexity of the data sets produced, their analysis is becoming problematic in terms of time/quality trade-offs. Clustering genes with similar expression profiles is a key initial step for subsequent manipulations, and the increasing volume of data to be analyzed requires methods that are at the same time efficient (completing an analysis in minutes rather than hours) and effective (identifying significant clusters with high biological correlations). In this paper, we propose K-Boost, a clustering algorithm based on a combination of the furthest-point-first (FPF) heuristic for solving the metric k-center problem, a stability-based method for determining the number of clusters, and a k-means-like cluster refinement. K-Boost runs in O(|N| x k) time, where |N| is the size of the input matrix and k is the number of proposed clusters. Experiments show that this low complexity is usually coupled with a very good quality of the computed clusterings, which we measure using both internal and external criteria. Supporting data can be found as online Supplementary Material at www.liebertonline.com. PMID:19522668
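    For reference, a minimal version of the furthest-point-first heuristic at the core of K-Boost can be sketched as follows. This covers only the k-center seeding and nearest-center assignment, not the stability-based choice of k or the k-means-like refinement; the toy data are an assumption.

    ```python
    import numpy as np

    def furthest_point_first(X, k, rng=np.random.default_rng(0)):
        """FPF heuristic for the metric k-center problem: repeatedly pick the point
        furthest from the current set of centers, then assign each row to its
        nearest center."""
        n = X.shape[0]
        centers = [int(rng.integers(n))]
        dist = np.linalg.norm(X - X[centers[0]], axis=1)
        for _ in range(1, k):
            nxt = int(dist.argmax())                   # furthest point so far
            centers.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
        labels = np.argmin(
            np.stack([np.linalg.norm(X - X[c], axis=1) for c in centers]), axis=0)
        return np.array(centers), labels

    # toy expression matrix: 300 genes x 10 conditions
    X = np.random.default_rng(1).normal(size=(300, 10))
    centers, labels = furthest_point_first(X, k=4)
    ```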

  13. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784
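    As a generic illustration of subpixel motion extraction (not the paper's modified Taylor-approximation or localization refinement algorithms), the sketch below locates the integer peak of an FFT-based cross-correlation surface and refines it with a three-point parabolic fit along each axis.

    ```python
    import numpy as np

    def cross_correlation(a, b):
        """FFT-based cross-correlation of two equally sized image patches."""
        return np.fft.fftshift(np.real(
            np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))))

    def subpixel_peak(corr):
        """Integer argmax of the correlation surface refined by a separable
        three-point parabolic fit around the peak."""
        iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

        def offset(cm, c0, cp):
            d = cm - 2.0 * c0 + cp
            return 0.0 if d == 0.0 else 0.5 * (cm - cp) / d

        dy = offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix]) \
            if 0 < iy < corr.shape[0] - 1 else 0.0
        dx = offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1]) \
            if 0 < ix < corr.shape[1] - 1 else 0.0
        return iy + dy, ix + dx
    ```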

  14. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    PubMed Central

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  15. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of the data, so a band/feature selection method is needed to select an optimal subset of the original data bands. This study examined the efficiency of GA in band selection for remote sensing classification. A GA-based algorithm for band selection was designed in which a Bhattacharyya distance index, indicating the separability between classes of interest, is used as the fitness function. A binary string chromosome is designed in which each gene location has a value of 1 if the corresponding band is included and 0 if it is not. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
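    A minimal sketch of the fitness evaluation is shown below: a chromosome is a boolean mask over bands, and the fitness for a two-class problem is the Bhattacharyya distance computed on the retained bands. The Gaussian class-distribution assumption is mine, and the multi-class weighting, crossover, and mutation operators of the paper are omitted.

    ```python
    import numpy as np

    def bhattacharyya_distance(X1, X2):
        """Two-class Bhattacharyya distance on the selected bands (columns),
        assuming multivariate Gaussian class distributions."""
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
        S = 0.5 * (S1 + S2)
        dm = m1 - m2
        term1 = 0.125 * dm @ np.linalg.solve(S, dm)
        term2 = 0.5 * (np.linalg.slogdet(S)[1]
                       - 0.5 * (np.linalg.slogdet(S1)[1] + np.linalg.slogdet(S2)[1]))
        return term1 + term2

    def fitness(mask, X1, X2):
        # chromosome: boolean mask over bands; fitness = class separability on kept bands
        return bhattacharyya_distance(X1[:, mask], X2[:, mask]) if mask.sum() > 1 else 0.0
    ```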

  16. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    PubMed

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989

  17. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    SciTech Connect

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-10-15

    Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.

  18. Highly specific and potently activating Vγ9Vδ2-T cell specific nanobodies for diagnostic and therapeutic applications.

    PubMed

    de Bruin, Renée C G; Lougheed, Sinéad M; van der Kruk, Liza; Stam, Anita G; Hooijberg, Erik; Roovers, Rob C; van Bergen En Henegouwen, Paul M P; Verheul, Henk M W; de Gruijl, Tanja D; van der Vliet, Hans J

    2016-08-01

    Vγ9Vδ2-T cells constitute the predominant subset of γδ-T cells in human peripheral blood and have been shown to play an important role in antimicrobial and antitumor immune responses. Several efforts have been initiated to exploit these cells for cancer immunotherapy, e.g. by using phosphoantigens, adoptive cell transfer, and by a bispecific monoclonal antibody based approach. Here, we report the generation of a novel set of Vγ9Vδ2-T cell specific VHH (or nanobody). VHH have several advantages compared to conventional antibodies related to their small size, stability, ease of generating multispecific molecules and low immunogenicity. With high specificity and affinity, the anti-Vγ9Vδ2-T cell receptor VHHs are shown to be useful for FACS, MACS and immunocytochemistry. In addition, some VHH were found to specifically activate Vγ9Vδ2-T cells. Besides being of possible immunotherapeutic value, these single domain antibodies will be of great value in the further study of this important immune effector cell subset. PMID:27373969

  19. Atorvastatin ameliorates endothelium-specific insulin resistance induced by high glucose combined with high insulin.

    PubMed

    Yang, Ou; Li, Jinliang; Chen, Haiyan; Li, Jie; Kong, Jian

    2016-09-01

    The aim of the present study was to establish an endothelial cell model of endothelium-specific insulin resistance to evaluate the effect of atorvastatin on insulin resistance-associated endothelial dysfunction and to identify the potential pathway responsible for its action. Cultured human umbilical vein endothelial cells (HUVECs) were pretreated with different concentrations of glucose with, or without, 10^-5 M insulin for 24 h, following which the cells were treated with atorvastatin. The tyrosine phosphorylation of insulin receptor (IR) and insulin receptor substrate-1 (IRS‑1), the production of nitric oxide (NO), the activity and phosphorylation level of endothelial NO synthase (eNOS) on serine 1177, and the mRNA levels of endothelin‑1 (ET‑1) were assessed during the experimental procedure. Treatment of the HUVECs with 30 mM glucose and 10^-5 M insulin for 24 h impaired insulin signaling, with reductions in the tyrosine phosphorylation of IR and protein expression of IRS‑1 by almost 75 and 65%, respectively. This, in turn, decreased the activity and phosphorylation of eNOS on serine 1177, and reduced the production of NO by almost 80%. By contrast, the mRNA levels of ET‑1 were upregulated. All these changes were ameliorated by atorvastatin. Taken together, these results demonstrated that high concentrations of glucose and insulin impaired insulin signaling leading to endothelial dysfunction, and that atorvastatin ameliorated these changes, acting primarily through the phosphatidylinositol 3-kinase/Akt/eNOS signaling pathway. PMID:27484094

  20. Design and Implementation of High-Speed Input-Queued Switches Based on a Fair Scheduling Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Qingsheng; Zhao, Hua-An

    To increase both the capacity and the processing speed of input-queued (IQ) switches, we propose a fair scalable scheduling architecture (FSSA). By employing an FSSA comprised of several cascaded sub-schedulers, large-scale, high-performance switches or routers can be realized without the capacity limitation of a monolithic device. In this paper, we present a fair scheduling algorithm named FSSA_DI, based on an improved FSSA in which a distributed iteration scheme is employed; it improves scheduler performance and reduces processing time. Simulation results show that FSSA_DI achieves better performance on average delay and throughput under heavy loads compared to other existing algorithms. Moreover, a practical 64 x 64 FSSA using the FSSA_DI algorithm is implemented on four Xilinx Virtex-4 FPGAs. Measurement results show that the data rate of our solution can reach 800 Mbps and that a good tradeoff between performance and hardware complexity has been achieved.

  1. Fast and optimal multiframe blind deconvolution algorithm for high-resolution ground-based imaging of space objects.

    PubMed

    Matson, Charles L; Borelli, Kathy; Jefferies, Stuart; Beckner, Charles C; Hege, E Keith; Lloyd-Hart, Michael

    2009-01-01

    We report a multiframe blind deconvolution algorithm that we have developed for imaging through the atmosphere. The algorithm has been parallelized to a significant degree for execution on high-performance computers, with an emphasis on distributed-memory systems so that it can be hosted on commodity clusters. As a result, image restorations can be obtained in seconds to minutes. We have compared and quantified the quality of its image restorations relative to the associated Cramér-Rao lower bounds (when they can be calculated). We describe the algorithm and its parallelization in detail, demonstrate the scalability of its parallelization across distributed-memory computer nodes, discuss the results of comparing sample variances of its output to the associated Cramér-Rao lower bounds, and present image restorations obtained by using data collected with ground-based telescopes. PMID:19107159

  2. Two-channel algorithm for single-shot, high-resolution measurement of optical wavefronts using two image sensors.

    PubMed

    Nozawa, Jin; Okamoto, Atsushi; Shibukawa, Atsushi; Takabayashi, Masanori; Tomita, Akihisa

    2015-10-10

    We propose a two-channel holographic diversity interferometer (2ch-HDI) system for single-shot and highly accurate measurements of complex amplitude fields with a simple optical setup. In this method, two phase-shifted interference patterns are generated, without requiring a phase-shifting device, by entering a circularly polarized reference beam into a polarizing beam splitter, and the resulting patterns are captured simultaneously using two image sensors. However, differences in the intensity distributions of the two image sensors may lead to serious measurement errors. Thus, we also develop a two-channel algorithm optimized for the 2ch-HDI to compensate for these differences. Simulation results show that this algorithm can compensate for such differences in the intensity distributions in the two image sensors. Experimental results confirm that the combination of the 2ch-HDI and the calculation algorithm significantly enhances measurement accuracy. PMID:26479799

  3. Compared performance of different centroiding algorithms for high-pass filtered laser guide star Shack-Hartmann wavefront sensors

    NASA Astrophysics Data System (ADS)

    Lardière, Olivier; Conan, Rodolphe; Clare, Richard; Bradley, Colin; Hubin, Norbert

    2010-07-01

    Variations of the sodium layer altitude and atom density profile induce errors in laser-guide-star (LGS) adaptive optics systems. These errors must be mitigated by (i) optimizing the LGS wavefront sensor (WFS) and the centroiding algorithm, and (ii) adding a high-pass filter on the LGS path and a low-bandwidth natural-guide-star WFS. In the context of the ESO E-ELT project, five centroiding algorithms, namely the centre-of-gravity (CoG), the weighted CoG, the matched filter, the quad-cell, and the correlation, have been evaluated in closed loop on the University of Victoria LGS wavefront sensing test bed. The performance of each centroiding algorithm is compared for a central versus side-launch laser, different fields of view, pixel samplings, and LGS fluxes.
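    For concreteness, two of the evaluated estimators, the plain centre-of-gravity and a weighted CoG, can be sketched as below for a single subaperture spot image. The Gaussian weight width is an assumed parameter, and the matched-filter, quad-cell, and correlation estimators are not shown.

    ```python
    import numpy as np

    def centre_of_gravity(spot):
        """Plain CoG centroid of one Shack-Hartmann subaperture image."""
        y, x = np.mgrid[:spot.shape[0], :spot.shape[1]]
        s = spot.sum()
        return (y * spot).sum() / s, (x * spot).sum() / s

    def weighted_cog(spot, sigma=2.0):
        """Weighted CoG: multiply the spot by a Gaussian weight centred on the
        plain-CoG estimate before recomputing, which suppresses noisy wings."""
        cy, cx = centre_of_gravity(spot)
        y, x = np.mgrid[:spot.shape[0], :spot.shape[1]]
        w = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
        ws = spot * w
        s = ws.sum()
        return (y * ws).sum() / s, (x * ws).sum() / s
    ```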

  4. Genetic Algorithm for Innovative Device Designs in High-Efficiency III–V Nitride Light-Emitting Diodes

    SciTech Connect

    Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo

    2012-01-01

    Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.

  5. Permanent prostate implant using high activity seeds and inverse planning with fast simulated annealing algorithm: A 12-year Canadian experience

    SciTech Connect

    Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca

    2007-02-01

    Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate-specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast-simulated-annealing inverse planning algorithm with high activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low risk group (initial PSA ≤10 and Gleason ≤6 and Stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5% with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs), 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free, 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: The inverse planning with fast simulated annealing and high activity seeds gives a 5-year bFFS which is comparable with the best published series, with a low toxicity profile.

  6. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules to split the image into three regions (shadow + vegetation, bare soil + roads, and buildings, respectively); and (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision performances, respectively.

  7. Axle counter for high-speed railway based on fibre Bragg grating sensor and algorithm optimization for peak searching

    NASA Astrophysics Data System (ADS)

    Quan, Yu; He, Dawei; Wang, Yongsheng; Wang, Pengfei

    2014-08-01

    Owing to their electrical isolation, corrosion resistance, and quasi-distributed sensing capability, fiber Bragg grating (FBG) sensors have been progressively studied for high-speed railway applications. Existing axle counter systems based on FBG sensors are not appropriate for high-speed railways because of shortcomings in sensor emplacement, low sampling rates, and un-optimized peak-searching algorithms. We propose a new axle counter design for high-speed railways based on a high-speed FBG demodulation system. We also optimize the peak-searching algorithm by synthesizing the data of three sensors, introducing a time axis, Gaussian fitting, and finite element analysis. The feasibility was verified by a field experiment.

  8. An evaluation of SEBAL algorithm using high resolution aircraft data acquired during BEAREX07

    NASA Astrophysics Data System (ADS)

    Paul, G.; Gowda, P. H.; Prasad, V. P.; Howell, T. A.; Staggenborg, S.

    2010-12-01

    Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade SEBAL has been tested over various regions and has found its application in solving water resources and irrigation problems. This research combines high resolution remote sensing data and field measurements of the surface radiation and agro-meteorological variables to review various SEBAL steps for mapping ET in the Texas High Plains (THP). High resolution aircraft images (0.5-1.8 m) acquired during the Bushland Evapotranspiration and Agricultural Remote Sensing Experiment 2007 (BEAREX07) conducted at the USDA-ARS Conservation and Production Research Laboratory in Bushland, Texas, was utilized to evaluate the SEBAL. Accuracy of individual relationships and predicted ET were investigated using observed hourly ET rates from 4 large weighing lysimeters, each located at the center of 4.7 ha field. The uniqueness and the strength of this study come from the fact that it evaluates the SEBAL for irrigated and dryland conditions simultaneously with each lysimeter field planted to irrigated forage sorghum, irrigated forage corn, dryland clumped grain sorghum, and dryland row sorghum. Improved coefficients for the local conditions were developed for the computation of roughness length for momentum transport. The decision involved in selection of dry and wet pixels, which essentially determines the partitioning of the available energy between sensible (H) and latent (LE) heat fluxes has been discussed. The difference in roughness length referred to as the kB-1 parameter was modified in the current study. Performance of the SEBAL was evaluated using mean bias error (MBE) and root mean square error (RMSE). An RMSE of ±37.68 W m-2 and ±0.11 mm h-1 was observed for the net radiation and hourly actual ET, respectively

  9. High-order algorithms for compressible reacting flow with complex chemistry

    NASA Astrophysics Data System (ADS)

    Emmett, Matthew; Zhang, Weiqun; Bell, John B.

    2014-05-01

    In this paper we describe a numerical algorithm for integrating the multicomponent, reacting, compressible Navier-Stokes equations, targeted for direct numerical simulation of combustion phenomena. The algorithm addresses two shortcomings of previous methods. First, it incorporates an eighth-order narrow stencil approximation of diffusive terms that reduces the communication compared to existing methods and removes the need to use a filtering algorithm to remove Nyquist frequency oscillations that are not damped with traditional approaches. The methodology also incorporates a multirate temporal integration strategy that provides an efficient mechanism for treating chemical mechanisms that are stiff relative to fluid dynamical time-scales. The overall methodology is eighth order in space with options for fourth order to eighth order in time. The implementation uses a hybrid programming model designed for effective utilisation of many-core architectures. We present numerical results demonstrating the convergence properties of the algorithm with realistic chemical kinetics and illustrating its performance characteristics. We also present a validation example showing that the algorithm matches detailed results obtained with an established low Mach number solver.

  10. High-resolution combined global gravity field modelling: Solving large kite systems using distributed computational algorithms

    NASA Astrophysics Data System (ADS)

    Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas

    2016-04-01

    One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high-degree homogeneous information (e.g. gridded and reduced gravity anomalies, beyond d/o 1000). Current approaches mostly combine such data only on the basis of the coefficients, meaning that a spherical harmonic analysis is first done independently for both observation classes (resp. models), solving dense normal equations (NEQs) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Obviously, such methods are unable to identify or eliminate effects such as spectral leakage due to the band limitations of the models and the non-orthogonality of the spherical harmonic base functions. To counteract such problems, a combination of both models on the NEQ level is desirable. Theoretically this can be achieved using NEQ stacking. Because of the higher maximum degree of the homogeneous model, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also its simple sparsity. Hence, a special coefficient ordering is needed to create a new favorable sparsity pattern that leads to an efficient solution method. Such a pattern can be found in the so-called kite structure (Bosch, 1993), obtained when applying the kite ordering to the stacked NEQ matrix. In a first step it is shown what is needed to attain the kite (NEQ) system, how to solve it efficiently, and how to calculate the appropriate variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is placed on the presentation of special distributed algorithms which may solve those systems in parallel on an indeterminate number of processes and are

  11. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed, starting from a common specification of the algorithm.
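    As an illustration of the full-replication technique (the simplest of the listed schemes), the sketch below gives each worker its own private reduction object, here a frequency counter, and merges the replicas at the end. It is a generic Python analogue of the pattern, not the authors' C/C++ runtime interface, and because of the GIL it illustrates the structure rather than a real speedup.

    ```python
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    def count_chunk(transactions):
        """Per-thread reduction object: a private Counter of item frequencies
        (full replication -- no locking during the scan)."""
        local = Counter()
        for t in transactions:
            local.update(t)
        return local

    def parallel_item_counts(transactions, n_threads=4):
        chunks = [transactions[i::n_threads] for i in range(n_threads)]
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            partials = list(pool.map(count_chunk, chunks))
        merged = Counter()
        for p in partials:          # final merge of the replicated reduction objects
            merged.update(p)
        return merged

    print(parallel_item_counts([["a", "b"], ["b", "c"], ["a", "c"], ["a"]]))
    ```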

  12. The specific heat of the 2201 BISCO high-Tc superconductor

    NASA Astrophysics Data System (ADS)

    Yu, M. K.; Franck, J. P.

    1994-04-01

    The specific heat of two samples of the single-plane 2201 bismuth superconductor was measured. No linear term in Cp was observed at low temperatures. The lattice molar specific heat below 14 K exceeds that of the 2221 and 2223 bismuth superconductors considerably. As a consequence no peak in Cp/T3 is observed in this superconductor, in contrast to other high-Tc cuprates. The specific-heat anomaly near Tc could not be resolved.

  13. Towards material-specific simulations of high-temperature superconducting cuprates

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas

    2006-03-01

    Simulations of high-temperature superconducting (HTSC) cuprates have typically fallen into two categories: (1) studies of generic models such as the two-dimensional (2D) Hubbard model, that are believed to capture the essential physics necessary to describe the superconducting state, and, (2) first principles electronic structure calculations that are based on the local density approximation (LDA) to density functional theory (DFT) and lead to materials specific models. With the advent of massively parallel vector supercomputers, such as the Cray X1E at ORNL, and cluster algorithms such as the Dynamical Cluster Approximation (DCA), it is now possible to systematically solve the 2D Hubbard model with Quantum Monte Carlo (QMC) simulations and to establish that the model indeed describes d-wave superconductivity [1]. Furthermore, studies of a multi-band model with input parameters generated from LDA calculations demonstrate that the existence of a superconducting transition is very sensitive to the underlying band structure [2]. Application of the LDA to transition metal oxides is, however, hampered by spurious self-interactions that particularly affect localized orbitals. Here we apply the self-interaction corrected local spin-density method (SIC-LSD) to describe the electronic structure of the cuprates. It was recently applied with success to generate input parameters for simple models of Mn doped III-V semiconductors [3] and is known to properly describe the antiferromagnetic insulating ground state of the parent compounds of the HTSC cuprates. We will discuss the models for HTSC cuprates derived from the SIC-LSD study and how the differences to the well-known LDA results impact the QMC-DCA simulations of the magnetic and superconducting properties. [1] T. A. Maier, M. Jarrell, T. C. Schulthess, P. R. C. Kent, and J. B. White, Phys. Rev. Lett. 95, 237001 (2005). [2] P. Kent, A. Macridin, M. Jarrell, T. Schulthess, O. Andersen, T. Dasgupta, and O. Jepsen, Bulletin of

  14. Patient-specific dose calculation methods for high-dose-rate iridium-192 brachytherapy

    NASA Astrophysics Data System (ADS)

    Poon, Emily S.

    In high-dose-rate 192Ir brachytherapy, the radiation dose received by the patient is calculated according to the AAPM Task Group 43 (TG-43) formalism. This table-based dose superposition method uses dosimetry parameters derived with the radioactive 192Ir source centered in a water phantom. It neglects the dose perturbations caused by inhomogeneities, such as the patient anatomy, applicators, shielding, and radiographic contrast solution. In this work, we evaluated the dosimetric characteristics of a shielded rectal applicator with an endocavitary balloon injected with contrast solution. The dose distributions around this applicator were calculated by the GEANT4 Monte Carlo (MC) code and measured by ionization chamber and GAFCHROMIC EBT film. A patient-specific dose calculation study was then carried out for 40 rectal treatment plans. The PTRAN_CT MC code was used to calculate the dose based on computed tomography (CT) images. This study involved the development of BrachyGUI, an integrated treatment planning tool that can process DICOM-RT data and create PTRAN_CT input initialization files. BrachyGUI also comes with dose calculation and evaluation capabilities. We proposed a novel scatter correction method to account for the reduction in backscatter radiation near tissue-air interfaces. The first step requires calculating the doses contributed by primary and scattered photons separately, assuming a full scatter environment. The scatter dose in the patient is subsequently adjusted using a factor derived by MC calculations, which depends on the distances between the point of interest, the 192Ir source, and the body contour. The method was validated for multicatheter breast brachytherapy, in which the target and skin doses for 18 patient plans agreed with PTRAN_CT calculations better than 1%. Finally, we developed a CT-based analytical dose calculation method. It corrects for the photon attenuation and scatter based upon the radiological paths determined by ray tracing

  15. High frequency, high temperature specific core loss and dynamic B-H hysteresis loop characteristics of soft magnetic alloys

    SciTech Connect

    Wieserman, W.R.; Schwarze, G.E.; Niedra, J.M.

    1994-09-01

    Limited experimental data exists for the specific core loss and dynamic B-H loops of soft magnetic materials under the combined conditions of high frequency and high temperature. This experimental study investigates the specific core loss and dynamic B-H loop characteristics of Supermalloy and Metglas 2605SC over the frequency range of 1-50 kHz and temperature range of 23-300 C under sinusoidal voltage excitation. The experimental setup used to conduct the investigation is described. The effects of the maximum magnetic flux density, frequency, and temperature on the specific core loss and on the size and shape of the B-H loops are examined.

  16. High frequency, high temperature specific core loss and dynamic B-H hysteresis loop characteristics of soft magnetic alloys

    NASA Technical Reports Server (NTRS)

    Wieserman, W. R.; Schwarze, G. E.; Niedra, J. M.

    1990-01-01

    Limited experimental data exists for the specific core loss and dynamic B-H loops for soft magnetic materials for the combined conditions of high frequency and high temperature. This experimental study investigates the specific core loss and dynamic B-H loop characteristics of Supermalloy and Metglas 2605SC over the frequency range of 1 to 50 kHz and temperature range of 23 to 300 C under sinusoidal voltage excitation. The experimental setup used to conduct the investigation is described. The effects of the maximum magnetic flux density, frequency, and temperature on the specific core loss and on the size and shape of the B-H loops are examined.

  17. High frequency, high temperature specific core loss and dynamic B-H hysteresis loop characteristics of soft magnetic alloys

    NASA Technical Reports Server (NTRS)

    Wieserman, W. R.; Schwarze, G. E.; Niedra, J. M.

    1990-01-01

    Limited experimental data exists for the specific core loss and dynamic B-H loop for soft magnetic materials for the combined conditions of high frequency and high temperature. This experimental study investigates the specific core loss and dynamic B-H loop characteristics of Supermalloy and Metglas 2605SC over the frequency range of 1 to 50 kHz and temperature range of 23 to 300 C under sinusoidal voltage excitation. The experimental setup used to conduct the investigation is described. The effects of the maximum magnetic flux density, frequency, and temperature on the specific core loss and on the size and shape of the B-H loops are examined.

  18. A new algorithm for a high-modulation frequency and high-speed digital lock-in amplifier

    NASA Astrophysics Data System (ADS)

    Jiang, G. L.; Yang, H.; Li, R.; Kong, P.

    2016-01-01

    To increase the maximum modulation frequency of a digital lock-in amplifier in an online system, we propose a new algorithm using a square-wave reference whose frequency is an odd submultiple of the modulation frequency, based on the odd harmonic components of the square-wave reference. The sampling frequency is four times the modulation frequency to ensure the orthogonality of the reference sequences. Only additions and subtractions are used to implement phase-sensitive detection, which speeds up the lock-in computation. Furthermore, the maximum modulation frequency of the lock-in is enhanced considerably. The feasibility of this new algorithm is verified by simulation and experiments.
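    The additions-and-subtractions form of phase-sensitive detection at four samples per modulation period can be sketched as follows. This is the generic 4-point digital lock-in; the paper's odd-submultiple square-wave reference and its harmonic bookkeeping are not reproduced here.

    ```python
    import numpy as np

    def four_point_lockin(x):
        """Phase-sensitive detection with fs = 4*f_mod: per modulation period the
        in-phase/quadrature sums reduce to additions and subtractions of samples."""
        x = x[: 4 * (len(x) // 4)].reshape(-1, 4)
        i_sum = (x[:, 0] - x[:, 2]).mean()     # in-phase component, 2*A*sin(phi)
        q_sum = (x[:, 1] - x[:, 3]).mean()     # quadrature component, 2*A*cos(phi)
        return 0.5 * np.hypot(i_sum, q_sum), np.arctan2(i_sum, q_sum)

    # synthetic check: 1 kHz signal sampled at 4 kHz, buried in noise
    fs, fm, n = 4000.0, 1000.0, 40000
    t = np.arange(n) / fs
    sig = 0.2 * np.sin(2 * np.pi * fm * t + 0.3)
    sig += np.random.default_rng(0).normal(0.0, 1.0, n)
    print(four_point_lockin(sig))              # approximately (0.2, 0.3)
    ```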

  19. Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.

    2015-07-01

    The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique, called Craziness based Particle Swarm Optimization (CRPSO), is proposed. CRPSO is very simple in concept, easy to implement, and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses nearly robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems, the PSO algorithm has been modified into CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, this sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO-based design results are also compared with PSPICE-based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
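    A generic sketch of a craziness-augmented velocity update is given below. The inertia and acceleration constants, the reversal probability, and the craziness magnitude are illustrative assumptions, and the exact CRPSO update of the cited paper differs in detail.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def crpso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
                       p_crazy=0.3, v_crazy=0.1):
        """Standard PSO velocity terms plus a random direction reversal and a
        sparse 'craziness' kick that keeps the swarm diverse."""
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        reversal = np.where(rng.random(v.shape) < 0.5, 1.0, -1.0)   # direction reversal
        kick = (rng.random(v.shape) < p_crazy) * reversal * v_crazy  # craziness velocity
        return v_new + kick
    ```

    In a full optimizer this update would be followed by the usual position update and by re-evaluation of the transistor-sizing objective (delay, power, area) for each particle.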

  20. GPU-based ray tracing algorithm for high-speed propagation prediction in typical indoor environments

    NASA Astrophysics Data System (ADS)

    Guo, Lixin; Guan, Xiaowei; Liu, Zhongyu

    2015-10-01

    A fast 3-D ray tracing propagation prediction model based on a virtual source tree is presented in this paper, whose theoretical foundations are geometrical optics (GO) and the uniform theory of diffraction (UTD). For a typical single-room indoor scene, taking the geometrical and electromagnetic information into account, several acceleration techniques are adopted to raise the efficiency of the ray tracing algorithm. The simulation results indicate that the runtime of the ray tracing algorithm increases sharply when the number of objects in the room becomes large. Therefore, GPU acceleration technology is used to solve that problem. GPUs are better suited to arithmetic operations than to logical branching, and tens of thousands of threads in CUDA programs can compute simultaneously, achieving massively parallel acceleration. Finally, a typical single room with several objects is simulated using the serial ray tracing algorithm and the parallel one, respectively. The results show that, compared with the serial algorithm, the GPU-based one achieves much greater efficiency.

  1. An Evaluation of SEBAL Algorithm Using High Resolution Aircraft Data Acquired During BEAREX07

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade, SEBAL has been tested over various...

  2. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net . PMID:20426693
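    The spectrum-membership data structure can be illustrated with a plain-Python Bloom filter. The CUDA texture-memory binding, hash choices, and sizes used by the tool are not shown; blake2b with an index prefix is my stand-in for the hash family.

    ```python
    import hashlib

    class BloomFilter:
        """Space-efficient membership test for the set of trusted ('solid') k-mers;
        false positives are possible, false negatives are not."""
        def __init__(self, n_bits=1 << 24, n_hashes=4):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, kmer):
            for i in range(self.n_hashes):
                h = hashlib.blake2b(f"{i}:{kmer}".encode()).digest()
                yield int.from_bytes(h[:8], "little") % self.n_bits

        def add(self, kmer):
            for p in self._positions(kmer):
                self.bits[p >> 3] |= 1 << (p & 7)

        def __contains__(self, kmer):
            return all(self.bits[p >> 3] & (1 << (p & 7)) for p in self._positions(kmer))

    # build the spectrum from trusted k-mers, then query a suspicious one
    spectrum = BloomFilter()
    for kmer in ("ACGTACGTAC", "CGTACGTACG"):
        spectrum.add(kmer)
    print("ACGTACGTAC" in spectrum, "TTTTTTTTTT" in spectrum)   # True False (w.h.p.)
    ```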

  3. Multiple expression of molecular information: enforced generation of different supramolecular inorganic architectures by processing of the same ligand information through specific coordination algorithms

    PubMed

    Funeriu; Lehn; Fromm; Fenske

    2000-06-16

    The multisubunit ligand 2 combines two complexation substructures known to undergo, with specific metal ions, distinct self-assembly processes to form a double-helical and a grid-type structure, respectively. The binding information contained in this molecular strand may be expected to generate, in a strictly predetermined and univocal fashion, two different, well-defined output inorganic architectures depending on the set of metal ions, that is, on the coordination algorithm used. Indeed, as predicted, the self-assembly of 2 with eight CuII and four CuI yields the intertwined structure D1. It results from a crossover of the two assembly subprograms and has been fully characterized by crystal structure determination. On the other hand, when the instructions of strand 2 are read out with a set of eight CuI and four MII (M = Fe, Co, Ni, Cu) ions, the architectures C1-C4, resulting from a linear combination of the two subprograms, are obtained, as indicated by the available physico-chemical and spectral data. Redox interconversion of D1 and C4 has been achieved. These results indicate that the same molecular information may yield different output structures depending on how it is processed, that is, depending on the interactional (coordination) algorithm used to read it. They have wide implications for the design and implementation of programmed chemical systems, pointing towards multiprocessing capacity, in a one code/ several outputs scheme, of potential significance for molecular computation processes and possibly even with respect to information processing in biology. PMID:10926214

  4. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team of the US Army Corps of Engineers (Omaha District and Engineering and Support Center, Huntsville), Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, by using SRI airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by SRI SAR.

  5. Advanced algorithms and high-performance testbed for large-scale site characterization and subsurface target detection using airborne ground-penetrating SAR

    NASA Astrophysics Data System (ADS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1999-08-01

A team of the US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, JPL, Stanford Research Institute (SRI), and Montgomery Watson is currently planning and conducting the largest-ever survey at the Former Buckley Field, in Colorado, using SRI airborne, ground-penetrating SAR. The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization to identify contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data. In this paper, we discuss these algorithms and their successful application to the detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by SRI SAR.

  6. Vision Algorithm for the Solar Aspect System of the High Energy Replicated Optics to Explore the Sun Mission

    NASA Technical Reports Server (NTRS)

    Cramer, Alexander Krishnan

    2014-01-01

This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high-altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine the position of the balloon payload relative to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.
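
    As an illustration of the matched-filter stage for fiducial detection described above, the following minimal Python sketch correlates an image with a cross-shaped template and keeps local response maxima; the kernel size, threshold, and function names are illustrative assumptions, not taken from the record.

      import numpy as np
      from scipy.ndimage import correlate, maximum_filter

      def detect_fiducials(image, arm=5, thresh=0.6):
          # Cross-shaped, zero-mean, unit-norm template (horizontal + vertical bar).
          k = np.zeros((2 * arm + 1, 2 * arm + 1))
          k[arm, :] = 1.0
          k[:, arm] = 1.0
          k -= k.mean()
          k /= np.linalg.norm(k)
          # Matched filter = correlation of the zero-mean image with the template.
          response = correlate(image - image.mean(), k, mode="constant")
          # Keep local maxima of the response above a relative threshold.
          peaks = response == maximum_filter(response, size=2 * arm + 1)
          peaks &= response > thresh * response.max()
          return np.argwhere(peaks)  # (row, col) fiducial candidates

      # Toy image with a single cross-shaped fiducial centred at (40, 60).
      img = np.zeros((100, 100))
      img[40, 55:66] = 1.0
      img[35:46, 60] = 1.0
      print(detect_fiducials(img))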

  7. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    NASA Astrophysics Data System (ADS)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

  8. Applicability of data mining algorithms in the identification of beach features/patterns on high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.

    2015-01-01

The available beach classification algorithms and sediment budget models are mainly based on in situ parameters, usually unavailable for several coastal areas. A morphological analysis using remotely sensed data is a valid alternative. This study focuses on the application of data mining techniques, particularly decision trees (DTs) and artificial neural networks (ANNs), to an IKONOS-2 image in order to identify beach features/patterns in a stretch of the northwest coast of Portugal. Based on knowledge of the coastal features, five classes were defined. In the identification of beach features/patterns, the ANN algorithm presented an overall accuracy of 98.6% and a kappa coefficient of 0.97. The best DTs algorithm (with pruning) presented an overall accuracy of 98.2% and a kappa coefficient of 0.97. The results obtained through the ANN and DTs were in agreement. However, the ANN presented a classification that was more sensitive to rip currents. The use of ANNs and DTs for beach classification from remotely sensed data resulted in an increased classification accuracy when compared with traditional classification methods. The association of remotely sensed high-spatial resolution data and data mining algorithms is an effective methodology with which to identify beach features/patterns.
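
    A hedged sketch of the decision-tree step is shown below using scikit-learn on synthetic per-pixel features standing in for the IKONOS-2 bands; the class structure, depth limit (a stand-in for pruning), and accuracy/kappa reporting are illustrative, not the study's actual data or settings.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      # Synthetic stand-in: 4 spectral-band features per pixel, 5 beach classes.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(5000, 4)) + np.repeat(np.arange(5), 1000)[:, None]
      y = np.repeat(np.arange(5), 1000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      dt = DecisionTreeClassifier(max_depth=6).fit(X_tr, y_tr)  # depth limit stands in for pruning
      pred = dt.predict(X_te)
      print("accuracy:", accuracy_score(y_te, pred), "kappa:", cohen_kappa_score(y_te, pred))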

  9. The impact of low-Z and high-Z metal implants in IMRT: A Monte Carlo study of dose inaccuracies in commercial dose algorithms

    SciTech Connect

    Spadea, Maria Francesca; Verburg, Joost Mathias; Seco, Joao; Baroni, Guido

    2014-01-15

Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of Titanium (low-Z), Platinum, and Gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index (Pγ<1), setting 2 mm and 2% as the distance-to-agreement and dose-difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for low-Z material. High-Z materials caused under-dosage of 20%–25% in the region surrounding the metal and over-dosage of 10%–15% downstream of the hardware. The gamma index test yielded Pγ<1 > 99% for all low-Z cases, while for high-Z cases it returned 91% < Pγ<1 < 99%. Analysis of the depth dose curve of a single beam for low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased up to 10%–12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is relevant for high-Z implants. In this case, dose distribution should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal. In addition, the knowledge of the composition of metal inserts improves the accuracy of

  10. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs handover White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles. The ACS calculates the desired gimbal motion for tracking the ground station or for slewing

  11. A low-jitter and high-throughput scheduling based on genetic algorithm in slotted WDM networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Jin, Yaohui; Su, Yikai; Xu, Buwei; Zhang, Chunlei; Zhu, Yi; Hu, Weisheng

    2005-02-01

Slotted WDM, which achieves higher capacity than conventional WDM and SDH networks, has attracted considerable attention recently. A ring network based on this architecture has been demonstrated experimentally. In a slotted WDM ring network, each node is equipped with a wavelength-tunable transmitter and a fixed receiver and is assigned a specific wavelength. A node can send data to every other node by tuning its wavelength accordingly in a time slot. One of the important issues for such a network is scheduling. Once synchronization and propagation are handled, the scheduling problem reduces to that of an input-queued switch, and many schemes have been proposed to solve these two issues. However, it has been proven that scheduling such a network with both jitter and throughput taken into consideration is NP-hard. A greedy algorithm has previously been proposed to solve it. The main contribution of this paper lies in a novel genetic algorithm to obtain optimal or near-optimal solutions to this specific NP-hard problem. We devise problem-specific chromosome codes, a fitness function, and crossover and mutation operations. Experimental results show that our GA provides better performance in terms of throughput and jitter than a greedy heuristic.
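
    A generic genetic-algorithm skeleton of the kind described (permutation chromosome, order crossover, swap mutation, fitness-ranked selection) is sketched below in Python; the toy jitter-style fitness function is a placeholder, not the paper's throughput/jitter objective.

      import random

      def fitness(perm):
          # Toy objective: treat slot value modulo 4 as a "flow" and reward evenly
          # spaced departures, a crude stand-in for a low-jitter, high-throughput score.
          gaps = [abs((b - a) % 4 - 2) for a, b in zip(perm, perm[1:])]
          return -sum(gaps)

      def order_crossover(p1, p2):
          a, b = sorted(random.sample(range(len(p1)), 2))
          child = [None] * len(p1)
          child[a:b] = p1[a:b]
          rest = [g for g in p2 if g not in child[a:b]]
          child[:a], child[b:] = rest[:a], rest[a:]
          return child

      def mutate(perm, rate=0.1):
          if random.random() < rate:
              i, j = random.sample(range(len(perm)), 2)
              perm[i], perm[j] = perm[j], perm[i]
          return perm

      def ga(n_slots=16, pop_size=40, generations=200):
          pop = [random.sample(range(n_slots), n_slots) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              elite = pop[: pop_size // 2]
              children = [mutate(order_crossover(*random.sample(elite, 2))) for _ in elite]
              pop = elite + children
          return max(pop, key=fitness)

      print(ga())  # a permutation-coded schedule with the best toy fitness found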

  12. Influence of measuring algorithm on shape accuracy in the compensating turning of high gradient thin-wall parts

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi

    2015-02-01

To meet aerodynamic requirements, infrared domes and windows with conformal, thin-wall structures are becoming the development trend for future high-speed aircraft. However, these parts usually have low stiffness, the cutting force changes along the axial position, and it is very difficult to meet the shape accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of the cutting force and the change in stiffness. In this paper, on the basis of an ultra-precision diamond lathe, a contact measuring system with five DOFs is developed to achieve high-accuracy on-machine measurement of conformal thin-wall parts. For high-gradient surfaces, the distribution of measuring points is optimized using a data screening method. The influence of sampling frequency on measuring errors is analyzed, the best sampling frequency is found using a planning algorithm, the effects of environmental factors and fitting errors are kept within a low range, and the measuring accuracy of the conformal dome is greatly improved in the process of on-machine measurement. For an MgF2 conformal dome with a high-gradient surface, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than PV 0.8 μm, greatly improved compared with PV 3 μm before compensating turning, which verifies the correctness of the measuring algorithm.

  13. Highly Specific, Bi-substrate-Competitive Src Inhibitors from DNA-Templated Macrocycles

    PubMed Central

    Georghiou, George; Kleiner, Ralph E.; Pulkoski-Gross, Michael

    2011-01-01

    Protein kinases are attractive therapeutic targets, but their high sequence and structural conservation complicates the development of specific inhibitors. We recently discovered from a DNA-templated macrocycle library inhibitors with unusually high selectivity among Src-family kinases. Starting from these compounds, we developed and characterized in molecular detail potent macrocyclic inhibitors of Src kinase and its cancer-associated gatekeeper mutant. We solved two co-crystal structures of macrocycles bound to Src kinase. These structures reveal the molecular basis of the combined ATP- and substrate peptide-competitive inhibitory mechanism and the remarkable kinase specificity of the compounds. The most potent compounds inhibit Src activity in cultured mammalian cells. Our work establishes that macrocycles can inhibit protein kinases through a bi-substrate competitive mechanism with high potency and exceptional specificity, reveals the precise molecular basis for their desirable properties, and provides new insights into the development of Src-specific inhibitors with potential therapeutic relevance. PMID:22344177

  14. Engineering of bacterial exotoxins for highly efficient and receptor-specific intracellular delivery of diverse cargos.

    PubMed

    Ryou, Jeong-Hyun; Sohn, Yoo-Kyoung; Hwang, Da-Eun; Park, Woo-Yong; Kim, Nury; Heo, Won-Do; Kim, Mi-Young; Kim, Hak-Sung

    2016-08-01

    The intracellular delivery of proteins with high efficiency in a receptor-specific manner is of great significance in molecular medicine and biotechnology, but remains a challenge. Herein, we present the development of a highly efficient and receptor-specific delivery platform for protein cargos by combining the receptor binding domain of Escherichia coli Shiga-like toxin and the translocation domain of Pseudomonas aeruginosa exotoxin A. We demonstrated the utility and efficiency of the delivery platform by showing a cytosolic delivery of diverse proteins both in vitro and in vivo in a receptor-specific manner. In particular, the delivery system was shown to be effective for targeting an intracellular protein and consequently suppressing the tumor growth in xenograft mice. The present platform can be widely used for intracellular delivery of diverse functional macromolecules with high efficiency in a receptor-specific manner. Biotechnol. Bioeng. 2016;113: 1639-1646. © 2016 Wiley Periodicals, Inc. PMID:26773973

  15. Establishment of an Algorithm Using prM/E- and NS1-Specific IgM Antibody-Capture Enzyme-Linked Immunosorbent Assays in Diagnosis of Japanese Encephalitis Virus and West Nile Virus Infections in Humans.

    PubMed

    Galula, Jedhan U; Chang, Gwong-Jen J; Chuang, Shih-Te; Chao, Day-Yu

    2016-02-01

    The front-line assay for the presumptive serodiagnosis of acute Japanese encephalitis virus (JEV) and West Nile virus (WNV) infections is the premembrane/envelope (prM/E)-specific IgM antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Due to antibody cross-reactivity, MAC-ELISA-positive samples may be confirmed with a time-consuming plaque reduction neutralization test (PRNT). In the present study, we applied a previously developed anti-nonstructural protein 1 (NS1)-specific MAC-ELISA (NS1-MAC-ELISA) on archived acute-phase serum specimens from patients with confirmed JEV and WNV infections and compared the results with prM/E containing virus-like particle-specific MAC-ELISA (VLP-MAC-ELISA). Paired-receiver operating characteristic (ROC) curve analyses revealed no statistical differences in the overall assay performances of the VLP- and NS1-MAC-ELISAs. The two methods had high sensitivities of 100% but slightly lower specificities that ranged between 80% and 100%. When the NS1-MAC-ELISA was used to confirm positive results in the VLP-MAC-ELISA, the specificity of serodiagnosis, especially for JEV infection, was increased to 90% when applied in areas where JEV cocirculates with WNV, or to 100% when applied in areas that were endemic for JEV. The results also showed that using multiple antigens could resolve the cross-reactivity in the assays. Significantly higher positive-to-negative (P/N) values were consistently obtained with the homologous antigens than those with the heterologous antigens. JEV or WNV was reliably identified as the currently infecting flavivirus by a higher ratio of JEV-to-WNV P/N values or vice versa. In summary of the above-described results, the diagnostic algorithm combining the use of multiantigen VLP- and NS1-MAC-ELISAs was developed and can be practically applied to obtain a more specific and reliable result for the serodiagnosis of JEV and WNV infections without the need for PRNT. The developed algorithm should provide great

  16. Establishment of an Algorithm Using prM/E- and NS1-Specific IgM Antibody-Capture Enzyme-Linked Immunosorbent Assays in Diagnosis of Japanese Encephalitis Virus and West Nile Virus Infections in Humans

    PubMed Central

    Galula, Jedhan U.; Chang, Gwong-Jen J.

    2015-01-01

    The front-line assay for the presumptive serodiagnosis of acute Japanese encephalitis virus (JEV) and West Nile virus (WNV) infections is the premembrane/envelope (prM/E)-specific IgM antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Due to antibody cross-reactivity, MAC-ELISA-positive samples may be confirmed with a time-consuming plaque reduction neutralization test (PRNT). In the present study, we applied a previously developed anti-nonstructural protein 1 (NS1)-specific MAC-ELISA (NS1-MAC-ELISA) on archived acute-phase serum specimens from patients with confirmed JEV and WNV infections and compared the results with prM/E containing virus-like particle-specific MAC-ELISA (VLP-MAC-ELISA). Paired-receiver operating characteristic (ROC) curve analyses revealed no statistical differences in the overall assay performances of the VLP- and NS1-MAC-ELISAs. The two methods had high sensitivities of 100% but slightly lower specificities that ranged between 80% and 100%. When the NS1-MAC-ELISA was used to confirm positive results in the VLP-MAC-ELISA, the specificity of serodiagnosis, especially for JEV infection, was increased to 90% when applied in areas where JEV cocirculates with WNV, or to 100% when applied in areas that were endemic for JEV. The results also showed that using multiple antigens could resolve the cross-reactivity in the assays. Significantly higher positive-to-negative (P/N) values were consistently obtained with the homologous antigens than those with the heterologous antigens. JEV or WNV was reliably identified as the currently infecting flavivirus by a higher ratio of JEV-to-WNV P/N values or vice versa. In summary of the above-described results, the diagnostic algorithm combining the use of multiantigen VLP- and NS1-MAC-ELISAs was developed and can be practically applied to obtain a more specific and reliable result for the serodiagnosis of JEV and WNV infections without the need for PRNT. The developed algorithm should provide great

  17. Study of high speed complex number algorithms. [for determining antenna for field radiation patterns

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1981-01-01

A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shapes. Numerical results were computed for both gain and phase and are compared with other published work.

  18. New Design Methods And Algorithms For High Energy-Efficient And Low-cost Distillation Processes

    SciTech Connect

    Agrawal, Rakesh

    2013-11-21

    This project sought and successfully answered two big challenges facing the creation of low-energy, cost-effective, zeotropic multi-component distillation processes: first, identification of an efficient search space that includes all the useful distillation configurations and no undesired configurations; second, development of an algorithm to search the space efficiently and generate an array of low-energy options for industrial multi-component mixtures. Such mixtures are found in large-scale chemical and petroleum plants. Commercialization of our results was addressed by building a user interface allowing practical application of our methods for industrial problems by anyone with basic knowledge of distillation for a given problem. We also provided our algorithm to a major U.S. Chemical Company for use by the practitioners. The successful execution of this program has provided methods and algorithms at the disposal of process engineers to readily generate low-energy solutions for a large class of multicomponent distillation problems in a typical chemical and petrochemical plant. In a petrochemical complex, the distillation trains within crude oil processing, hydrotreating units containing alkylation, isomerization, reformer, LPG (liquefied petroleum gas) and NGL (natural gas liquids) processing units can benefit from our results. Effluents from naphtha crackers and ethane-propane crackers typically contain mixtures of methane, ethylene, ethane, propylene, propane, butane and heavier hydrocarbons. We have shown that our systematic search method with a more complete search space, along with the optimization algorithm, has a potential to yield low-energy distillation configurations for all such applications with energy savings up to 50%.

  19. New Detection Systems of Bacteria Using Highly Selective Media Designed by SMART: Selective Medium-Design Algorithm Restricted by Two Constraints

    PubMed Central

    Kawanishi, Takeshi; Shiraishi, Takuya; Okano, Yukari; Sugawara, Kyoko; Hashimoto, Masayoshi; Maejima, Kensaku; Komatsu, Ken; Kakizawa, Shigeyuki; Yamaji, Yasuyuki; Hamamoto, Hiroshi; Oshima, Kenro; Namba, Shigetou

    2011-01-01

    Culturing is an indispensable technique in microbiological research, and culturing with selective media has played a crucial role in the detection of pathogenic microorganisms and the isolation of commercially useful microorganisms from environmental samples. Although numerous selective media have been developed in empirical studies, unintended microorganisms often grow on such media probably due to the enormous numbers of microorganisms in the environment. Here, we present a novel strategy for designing highly selective media based on two selective agents, a carbon source and antimicrobials. We named our strategy SMART for highly Selective Medium-design Algorithm Restricted by Two constraints. To test whether the SMART method is applicable to a wide range of microorganisms, we developed selective media for Burkholderia glumae, Acidovorax avenae, Pectobacterium carotovorum, Ralstonia solanacearum, and Xanthomonas campestris. The series of media developed by SMART specifically allowed growth of the targeted bacteria. Because these selective media exhibited high specificity for growth of the target bacteria compared to established selective media, we applied three notable detection technologies: paper-based, flow cytometry-based, and color change-based detection systems for target bacteria species. SMART facilitates not only the development of novel techniques for detecting specific bacteria, but also our understanding of the ecology and epidemiology of the targeted bacteria. PMID:21304596

  20. An Autonomous Navigation Algorithm for High Orbit Satellite Using Star Sensor and Ultraviolet Earth Sensor

    PubMed Central

    Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu

    2013-01-01

An autonomous navigation algorithm using a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2) is presented. The star images are sampled by FOV1, and the ultraviolet earth images are sampled by FOV2. The star identification and star tracking algorithms are executed on FOV1, and the optical-axis direction of FOV1 in the J2000.0 coordinate system is then calculated. From the ultraviolet image of the earth sampled by FOV2, the center vector of the earth in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth image. The autonomous navigation data of the satellite are computed by the integrated sensor from the optical-axis direction of FOV1 and the center vector of the earth from FOV2. The position accuracy of the autonomous navigation for the satellite is improved from 1000 meters to 300 meters, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sine errors of the autonomous navigation are eliminated. The autonomous navigation for a satellite with a sensor that integrates an ultraviolet earth sensor and a star sensor is highly robust. PMID:24250261

  1. [The High Precision Analysis Research of Multichannel BOTDR Scattering Spectral Information Based on the TTDF and CNS Algorithm].

    PubMed

    Zhang, Yan-jun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong

    2015-07-01

A traditional BOTDR optical fiber sensing system uses a single-channel sensing fiber to measure information features. Uncontrolled factors such as cross-sensitivity can lower the fitting precision of the scattering spectrum and worsen the deviation of the information analysis. Therefore, a BOTDR system that detects multichannel sensor information simultaneously is proposed, together with a scattering spectrum analysis method for the multichannel Brillouin optical time-domain reflection (BOTDR) sensing system in order to extract high-precision spectral features. The method combines three-times data fusion (TTDF) and the cuckoo Newton search (CNS) algorithm. First, according to the Dixon and Grubbs criteria, the method uses the data-fusion capability of the TTDF algorithm to eliminate the influence of abnormal values and reduce the error signal. Second, it uses the cuckoo Newton search algorithm to improve the spectrum fitting and enhance the accuracy of the Brillouin scattering spectrum analysis. The global optimal solution is obtained by the cuckoo search; using this solution as the initial value of the Newton algorithm for local optimization ensures the spectrum fitting precision. Information extraction at different linewidths is analyzed for the temperature scattering spectrum under a linear weight ratio of 1:9. The variance of the multichannel data fusion is about 0.0030, the center frequency of the scattering spectrum is 11.213 GHz, and the temperature error is less than 0.15 K. Theoretical analysis and simulation results show that the algorithm can be used in multichannel distributed optical fiber sensing systems based on Brillouin optical time-domain reflection, and that it can effectively improve the accuracy of multichannel sensing signals and the precision of Brillouin scattering spectrum analysis. PMID:26717729
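
    The two-stage idea (a global stochastic search providing the starting point for a Newton-type local refinement of a Brillouin gain spectrum fit) can be sketched as follows; plain random sampling stands in for the cuckoo search, BFGS (quasi-Newton) stands in for the Newton step, and the Lorentzian spectrum and noise level are synthetic.

      import numpy as np
      from scipy.optimize import minimize

      # Synthetic Brillouin gain spectrum: Lorentzian centred near 11.213 GHz plus noise.
      f = np.linspace(11.0, 11.4, 400)
      def lorentz(p, f):
          c, w, a = p                       # centre (GHz), half-width, amplitude
          return a * w**2 / ((f - c)**2 + w**2)
      meas = lorentz((11.213, 0.05, 1.0), f)
      meas += 0.02 * np.random.default_rng(0).standard_normal(f.size)

      def sse(p):
          return float(np.sum((lorentz(p, f) - meas) ** 2))

      # Stage 1: crude global sampling (stand-in for the cuckoo search).
      rng = np.random.default_rng(1)
      cands = np.column_stack([rng.uniform(11.0, 11.4, 200),
                               rng.uniform(0.01, 0.2, 200),
                               rng.uniform(0.5, 1.5, 200)])
      p0 = cands[np.argmin([sse(p) for p in cands])]

      # Stage 2: quasi-Newton local refinement from the best candidate.
      fit = minimize(sse, p0, method="BFGS")
      print("fitted centre frequency (GHz):", round(fit.x[0], 4))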

  2. Prospective validation of a 1-hour algorithm to rule-out and rule-in acute myocardial infarction using a high-sensitivity cardiac troponin T assay

    PubMed Central

    Reichlin, Tobias; Twerenbold, Raphael; Wildi, Karin; Gimenez, Maria Rubini; Bergsma, Nathalie; Haaf, Philip; Druey, Sophie; Puelacher, Christian; Moehring, Berit; Freese, Michael; Stelzig, Claudia; Krivoshei, Lian; Hillinger, Petra; Jäger, Cedric; Herrmann, Thomas; Kreutzinger, Philip; Radosavac, Milos; Weidmann, Zoraida Moreno; Pershyna, Kateryna; Honegger, Ursina; Wagener, Max; Vuillomenet, Thierry; Campodarve, Isabel; Bingisser, Roland; Miró, Òscar; Rentsch, Katharina; Bassetti, Stefano; Osswald, Stefan; Mueller, Christian

    2015-01-01

Background: We aimed to prospectively validate a novel 1-hour algorithm using high-sensitivity cardiac troponin T measurement for early rule-out and rule-in of acute myocardial infarction (MI). Methods: In a multicentre study, we enrolled 1320 patients presenting to the emergency department with suspected acute MI. The high-sensitivity cardiac troponin T 1-hour algorithm, incorporating baseline values as well as absolute changes within the first hour, was validated against the final diagnosis. The final diagnosis was adjudicated by 2 independent cardiologists using all available information, including coronary angiography, echocardiography, follow-up data and serial measurements of high-sensitivity cardiac troponin T levels. Results: Acute MI was the final diagnosis in 17.3% of patients. With application of the high-sensitivity cardiac troponin T 1-hour algorithm, 786 (59.5%) patients were classified as "rule-out," 216 (16.4%) were classified as "rule-in" and 318 (24.1%) were classified into the "observational zone." The sensitivity and the negative predictive value for acute MI in the rule-out zone were 99.6% (95% confidence interval [CI] 97.6%–99.9%) and 99.9% (95% CI 99.3%–100%), respectively. The specificity and the positive predictive value for acute MI in the rule-in zone were 95.7% (95% CI 94.3%–96.8%) and 78.2% (95% CI 72.1%–83.6%), respectively. The 1-hour algorithm provided higher negative and positive predictive values than the standard interpretation of high-sensitivity cardiac troponin T using a single cut-off level (both p < 0.05). Cumulative 30-day mortality was 0.0%, 1.6% and 1.9% in patients classified in the rule-out, observational and rule-in groups, respectively (p = 0.001). Interpretation: This rapid strategy incorporating high-sensitivity cardiac troponin T baseline values and absolute changes within the first hour substantially accelerated the management of suspected acute MI by allowing safe rule-out as well as accurate

  3. A snowfall detection algorithm over land utilizing high-frequency passive microwave measurements—Application to ATMS

    NASA Astrophysics Data System (ADS)

    Kongoli, Cezar; Meng, Huan; Dong, Jun; Ferraro, Ralph

    2015-03-01

This paper presents a snowfall detection algorithm over land from high-frequency passive microwave measurements. The algorithm computes the probability of snowfall using logistic regression and the principal components of the seven high-frequency brightness temperature measurements at Advanced Technology Microwave Sounder (ATMS) channel frequencies of 89 GHz and above. The oxygen absorption channel 6 (53.6 GHz) is utilized as a temperature proxy to define the snowfall retrieval domain. Ground truth surface meteorological data, including snowfall occurrence, were collected over the conterminous U.S. and Alaska during two winter seasons in 2012-2013 and 2013-2014. Statistical analysis of the in situ data matched with ATMS measurements showed that in relatively warmer weather, snowfall tends to be associated with lower high-frequency brightness temperatures than no snowfall, and the brightness temperatures are negatively correlated with measured snowfall rate. In colder weather conditions, however, snowfall tends to occur at higher microwave brightness temperatures than no snowfall, and the brightness temperatures are positively correlated with snowfall rate. The brightness temperature decrease and the negative correlations with snowfall rate in warmer weather are attributed to the scattering effect. It is hypothesized that the scattering effect is insignificant in colder weather due to the predominance of lighter snowfall and emission. Based on these results, a two-step algorithm is developed that optimizes snowfall detection over these two distinct temperature regimes. Evaluation of the algorithm shows skill in capturing snowfall in variable weather conditions as well as the remaining challenges in the retrieval of lighter and colder snowfall.
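
    The statistical core of the detection step (principal components of the high-frequency brightness temperatures fed into a logistic regression that outputs a snowfall probability) can be sketched as follows; the synthetic brightness temperatures and labels stand in for the matched ATMS/station observations.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression

      # Synthetic brightness temperatures for 7 channels at/above 89 GHz, plus toy truth.
      rng = np.random.default_rng(1)
      tb = rng.normal(250.0, 15.0, size=(2000, 7))
      snow = (tb.mean(axis=1) + rng.normal(0, 5, 2000) < 245).astype(int)

      model = make_pipeline(StandardScaler(), PCA(n_components=3), LogisticRegression())
      model.fit(tb, snow)
      prob_snow = model.predict_proba(tb)[:, 1]   # probability of snowfall per scene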

  4. Tuneable ultra high specific surface area Mg/Al-CO3 layered double hydroxides.

    PubMed

    Chen, Chunping; Wangriya, Aunchana; Buffet, Jean-Charles; O'Hare, Dermot

    2015-10-01

We report the synthesis of tuneable ultra-high specific surface area Aqueous Miscible Organic solvent-Layered Double Hydroxides (AMO-LDHs). We have investigated the effects of different solvent dispersion volumes, dispersion times, and the number of re-dispersion cycles on the specific surface area of AMO-LDHs. In particular, the effect of acetone dispersion on two AMO-LDHs of different morphology (Mg3Al-CO3 AMO-LDH flowers and Mg3Al-CO3 AMO-LDH plates) was investigated. It was found that the amount of acetone used in the dispersion step can significantly affect the specific surface area of Mg3Al-CO3 AMO-LDH flowers, while the dispersion time in acetone is the critical factor in obtaining high specific surface area Mg3Al-CO3 AMO-LDH plates. Optimisation of the acetone washing steps enables Mg3Al-CO3 AMO-LDH to reach specific surface areas of up to 365 m² g⁻¹ for LDH flowers and 263 m² g⁻¹ for LDH plates. In addition, spray drying was found to be an effective and practical drying method, increasing the specific surface area by a factor of 1.75. Our findings now form the basis of an effective general strategy for obtaining ultra-high specific surface area LDHs. PMID:26308729

  5. Algorithm-based high-speed video analysis yields new insights into Strombolian eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Taddeucci, Jacopo; Moroni, Monica; Scarlato, Piergiorgio

    2014-05-01

Strombolian eruptions are characterized by mild, frequent explosions that eject gas and ash- to bomb-sized pyroclasts into the atmosphere. Observation of the products of the explosion is crucial, both for direct hazard assessment and for understanding eruption dynamics. Conventional thermal and optical imaging allows a first characterization of several eruptive processes, but the use of high-speed cameras, with frame rates of 500 Hz or more, allows particles to be followed over multiple frames and their trajectories to be reconstructed. However, manual processing of the images is time consuming. Consequently, it allows neither routine monitoring nor averaged statistics, since only relatively few, selected particles (usually the fastest) can be taken into account. In addition, manual processing is quite inefficient for computing the total ejected mass, since it requires counting each individual particle. In this presentation, we discuss the advantages of using numerical methods for tracking the particles and describing the explosion. A toolbox called "Pyroclast Tracking Velocimetry" is used to compute the size and trajectory of each individual particle. A large variety of parameters can be derived and statistically compared: ejection velocity, ejection angle, deceleration, size, mass, etc. At the scale of the explosion, the total mass, the mean velocity of the particles, and the number and frequency of ejection pulses can be estimated. The study of high-speed videos from 2 vents of Yasur volcano (Vanuatu) and 4 of Stromboli volcano (Italy) reveals that these parameters are positively correlated. As a consequence, the intensity of an explosion can be quantitatively, and operator-independently, described by the total kinetic energy of the bombs, taking into account both the mass and the velocity of the particles. For each vent, a specific range of total kinetic energy can be defined, demonstrating the strong influence of the conduit in

  6. Modified Omega-k Algorithm for High-Speed Platform Highly-Squint Staggered SAR Based on Azimuth Non-Uniform Interpolation

    PubMed Central

    Zeng, Hong-Cheng; Chen, Jie; Liu, Wei; Yang, Wei

    2015-01-01

    In this work, the staggered SAR technique is employed for high-speed platform highly-squint SAR by varying the pulse repetition interval (PRI) as a linear function of range-walk. To focus the staggered SAR data more efficiently, a low-complexity modified Omega-k algorithm is proposed based on a novel method for optimal azimuth non-uniform interpolation, avoiding zero padding in range direction for recovering range cell migration (RCM) and saving in both data storage and computational load. An approximate model on continuous PRI variation with respect to sliding receive-window is employed in the proposed algorithm, leaving a residual phase error only due to the effect of a time-varying Doppler phase caused by staggered SAR. Then, azimuth non-uniform interpolation (ANI) at baseband is carried out to compensate the azimuth non-uniform sampling (ANS) effect resulting from continuous PRI variation, which is further followed by the modified Omega-k algorithm. The proposed algorithm has a significantly lower computational complexity, but with an equally effective imaging performance, as shown in our simulation results. PMID:25664433

  7. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
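
    For context, a textbook single-step explicit central-difference stencil (fourth order, first derivative) is sketched below on a periodic grid with eight or more points per wavelength; it illustrates the class of scheme discussed but is not the paper's specific algorithm family.

      import numpy as np

      def d1_fourth_order(u, dx):
          # (-u[i+2] + 8 u[i+1] - 8 u[i-1] + u[i-2]) / (12 dx), periodic boundaries.
          return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                  - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)

      x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)   # 8+ points per wavelength
      u = np.sin(x)
      err = np.max(np.abs(d1_fourth_order(u, x[1] - x[0]) - np.cos(x)))
      print(f"max derivative error: {err:.2e}")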

  8. AxonQuant: A Microfluidic Chamber Culture-Coupled Algorithm That Allows High-Throughput Quantification of Axonal Damage

    PubMed Central

    Li, Yang; Yang, Mengxue; Huang, Zhuo; Chen, Xiaoping; Maloney, Michael T.; Zhu, Li; Liu, Jianghong; Yang, Yanmin; Du, Sidan; Jiang, Xingyu; Wu, Jane Y.

    2014-01-01

    Published methods for imaging and quantitatively analyzing morphological changes in neuronal axons have serious limitations because of their small sample sizes, and their time-consuming and nonobjective nature. Here we present an improved microfluidic chamber design suitable for fast and high-throughput imaging of neuronal axons. We developed the Axon-Quant algorithm, which is suitable for automatic processing of axonal imaging data. This microfluidic chamber-coupled algorithm allows calculation of an ‘axonal continuity index’ that quantitatively measures axonal health status in a manner independent of neuronal or axonal density. This method allows quantitative analysis of axonal morphology in an automatic and nonbiased manner. Our method will facilitate large-scale high-throughput screening for genes or therapeutic compounds for neurodegenerative diseases involving axonal damage. When combined with imaging technologies utilizing different gene markers, this method will provide new insights into the mechanistic basis for axon degeneration. Our microfluidic chamber culture-coupled AxonQuant algorithm will be widely useful for studying axonal biology and neurodegenerative disorders. PMID:24603552

  9. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging.

    PubMed

    Afik, Eldad

    2015-01-01

Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows particles to be resolved even when they are near each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections. PMID:26329642
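
    The underlying transform can be illustrated with scikit-image's circle Hough routines on a synthetic ring; the radii range, peak count, and image are illustrative only, since the record's algorithm is a specialized offspring of this transform rather than the plain version shown here.

      import numpy as np
      from skimage.draw import circle_perimeter
      from skimage.transform import hough_circle, hough_circle_peaks

      # Synthetic image containing one diffraction-like ring of radius 30.
      img = np.zeros((128, 128), dtype=float)
      rr, cc = circle_perimeter(64, 64, 30)
      img[rr, cc] = 1.0

      radii = np.arange(20, 41)
      accumulator = hough_circle(img, radii)
      _, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
      print("ring centre:", cy[0], cx[0], "radius:", r[0])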

  10. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows particles to be resolved even when they are near each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections. PMID:26329642

  11. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows particles to be resolved even when they are near each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections.

  12. Output-only modal dynamic identification of frames by a refined FDD algorithm at seismic input and high damping

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio

    2016-02-01

The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been carried out successfully for given seismic input, taken as base excitation, including both strong-motion data and single and multiple input ground motions. Rather than investigating the role of seismic response signals in the Time Domain, as in other attempts, this paper considers the identification analysis in the Frequency Domain. Results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including for the damping ratios, which range from values on the order of 1% to 10%. Seismic excitation and high values of damping, which are critical even in the case of well-spaced modes, do not fulfill traditional FDD assumptions; this shows the robustness of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames under seismic input is feasible, also at concomitantly high damping.
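
    The classical FDD core on which the refined algorithm builds (singular value decomposition of the output cross-spectral density matrix, followed by peak picking on the first singular value) can be sketched as follows; the two-channel sinusoid-plus-noise signals are a synthetic stand-in for the frame's seismic response channels, and the peak-picking thresholds are illustrative.

      import numpy as np
      from scipy.signal import csd, find_peaks

      # Two synthetic "measured" channels with modal content near 2 Hz and 5 Hz.
      fs, n = 256, 16384
      t = np.arange(n) / fs
      rng = np.random.default_rng(0)
      y = np.vstack([np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 5 * t),
                     0.7 * np.sin(2 * np.pi * 2 * t) - np.sin(2 * np.pi * 5 * t)])
      y += 0.2 * rng.standard_normal(y.shape)

      # Output cross-spectral density matrix G(f), then first singular value per frequency.
      nchan, nperseg = y.shape[0], 1024
      f, _ = csd(y[0], y[0], fs=fs, nperseg=nperseg)
      G = np.zeros((len(f), nchan, nchan), dtype=complex)
      for i in range(nchan):
          for j in range(nchan):
              _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nperseg)

      s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
      peaks, _ = find_peaks(s1, height=0.2 * s1.max(), distance=5)
      print("picked modal frequencies (Hz):", f[peaks])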

  13. A Clinical Algorithm to Identify HIV Patients at High Risk for Incident Active Tuberculosis: A Prospective 5-Year Cohort Study

    PubMed Central

    Lee, Susan Shin-Jung; Lin, Hsi-Hsun; Tsai, Hung-Chin; Su, Ih-Jen; Yang, Chin-Hui; Sun, Hsin-Yun; Hung, Chien-Chin; Sy, Cheng-Len; Wu, Kuan-Sheng; Chen, Jui-Kuang; Chen, Yao-Shen; Fang, Chi-Tai

    2015-01-01

    Background Predicting the risk of tuberculosis (TB) in people living with HIV (PLHIV) using a single test is currently not possible. We aimed to develop and validate a clinical algorithm, using baseline CD4 cell counts, HIV viral load (pVL), and interferon-gamma release assay (IGRA), to identify PLHIV who are at high risk for incident active TB in low-to-moderate TB burden settings where highly active antiretroviral therapy (HAART) is routinely provided. Materials and Methods A prospective, 5-year, cohort study of adult PLHIV was conducted from 2006 to 2012 in two hospitals in Taiwan. HAART was initiated based on contemporary guidelines (CD4 count < = 350/μL). Cox regression was used to identify the predictors of active TB and to construct the algorithm. The validation cohorts included 1455 HIV-infected individuals from previous published studies. Area under the receiver operating characteristic (ROC) curve was calculated. Results Seventeen of 772 participants developed active TB during a median follow-up period of 5.21 years. Baseline CD4 < 350/μL or pVL ≥ 100,000/mL was a predictor of active TB (adjusted HR 4.87, 95% CI 1.49–15.90, P = 0.009). A positive baseline IGRA predicted TB in patients with baseline CD4 ≥ 350/μL and pVL < 100,000/mL (adjusted HR 6.09, 95% CI 1.52–24.40, P = 0.01). Compared with an IGRA-alone strategy, the algorithm improved the sensitivity from 37.5% to 76.5%, the negative predictive value from 98.5% to 99.2%. Compared with an untargeted strategy, the algorithm spared 468 (60.6%) from unnecessary TB preventive treatment. Area under the ROC curve was 0.692 (95% CI: 0.587–0.798) for the study cohort and 0.792 (95% CI: 0.776–0.808) and 0.766 in the 2 validation cohorts. Conclusions A validated algorithm incorporating the baseline CD4 cell count, HIV viral load, and IGRA status can be used to guide targeted TB preventive treatment in PLHIV in low-to-moderate TB burden settings where HAART is routinely provided to all PLHIV. The
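
    The two-step rule stated in the abstract translates directly into a small decision helper, sketched below; the thresholds come from the text, while the function name and return labels are illustrative.

      def tb_risk(cd4, pvl, igra_positive):
          """Two-step risk rule from the abstract; labels are illustrative."""
          if cd4 < 350 or pvl >= 100_000:
              return "high risk: consider TB preventive treatment"
          if igra_positive:
              return "high risk: consider TB preventive treatment"
          return "low risk: routine follow-up"

      print(tb_risk(cd4=420, pvl=50_000, igra_positive=True))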

  14. Extension of wavelet compression algorithms to 3D and 4D image data: exploitation of data coherence in higher dimensions allows very high compression ratios

    NASA Astrophysics Data System (ADS)

    Zeng, Li; Jansen, Christian; Unser, Michael A.; Hunziker, Patrick

    2001-12-01

    High resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited for data of arbitrary dimensions, and assessed its ability for compression of 4D medical images. Basically, separable wavelet transforms are done in each dimension, followed by quantization and standard coding. Results were compared with conventional 2D wavelet. We found that in 4D heart images, this algorithm allowed high compression ratios, preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible, and by exploitation of data coherence in higher image dimensions allows much higher compression than comparable 2D approaches. The proven applicability of this approach to multidimensional medical imaging has important implications especially for the fields of image storage and transmission and, specifically, for the emerging field of telemedicine.
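
    A minimal sketch of n-dimensional separable wavelet compression follows, using PyWavelets: transform, hard-threshold the detail coefficients, reconstruct. The random 4D volume, wavelet choice, and threshold are placeholders for the medical data and for the actual quantizer and coder.

      import numpy as np
      import pywt

      data = np.random.rand(16, 16, 32, 32)                  # toy 4D volume (t, z, y, x)
      coeffs = pywt.wavedecn(data, wavelet="db2", level=2)   # separable transform per axis

      arr, slices = pywt.coeffs_to_array(coeffs)
      arr[np.abs(arr) < 0.5 * np.std(arr)] = 0.0             # crude stand-in for quantization
      kept = np.count_nonzero(arr) / arr.size

      recon = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"),
                            wavelet="db2")
      recon = recon[tuple(slice(s) for s in data.shape)]     # trim possible 1-sample padding
      print(f"coefficients kept: {kept:.1%}, max reconstruction error: {np.max(np.abs(recon - data)):.3f}")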

  15. Electrolytes with Improved Safety Characteristics for High Voltage, High Specific Energy Li-ion Cells

    NASA Technical Reports Server (NTRS)

    Smart, M. C.; Krause, F. C.; Hwang, C.; West, W. C.; Soler, J.; Whitcanack, L. W.; Prakash, G. K. S.; Ratnakumar, B. V.

    2012-01-01

    (1) NASA is actively pursuing the development of advanced electrochemical energy storage and conversion devices for future lunar and Mars missions; (2) The Exploration Technology Development Program, Energy Storage Project is sponsoring the development of advanced Li-ion batteries and PEM fuel cell and regenerative fuel cell systems for the Altair Lunar Lander, Extravehicular Activities (EVA), and rovers and as the primary energy storage system for Lunar Surface Systems; (3) At JPL, in collaboration with NASA-GRC, NASA-JSC and industry, we are actively developing advanced Li-ion batteries with improved specific energy, energy density and safety. One effort is focused upon developing Li-ion battery electrolyte with enhanced safety characteristics (i.e., low flammability); and (4) A number of commercial applications also require Li-ion batteries with enhanced safety, especially for automotive applications.

  16. Comparative Analysis of CNV Calling Algorithms: Literature Survey and a Case Study Using Bovine High-Density SNP Data

    PubMed Central

    Xu, Lingyang; Hou, Yali; Bickhart, Derek M.; Song, Jiuzhou; Liu, George E.

    2013-01-01

    Copy number variations (CNVs) are gains and losses of genomic sequence between two individuals of a species when compared to a reference genome. The data from single nucleotide polymorphism (SNP) microarrays are now routinely used for genotyping, but they also can be utilized for copy number detection. Substantial progress has been made in array design and CNV calling algorithms and at least 10 comparison studies in humans have been published to assess them. In this review, we first survey the literature on existing microarray platforms and CNV calling algorithms. We then examine a number of CNV calling tools to evaluate their impacts using bovine high-density SNP data. Large incongruities in the results from different CNV calling tools highlight the need for standardizing array data collection, quality assessment and experimental validation. Only after careful experimental design and rigorous data filtering can the impacts of CNVs on both normal phenotypic variability and disease susceptibility be fully revealed.

  17. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  18. Overall plant design specification Modular High Temperature Gas-cooled Reactor. Revision 9

    SciTech Connect

    1990-05-01

Revision 9 of the "Overall Plant Design Specification Modular High Temperature Gas-Cooled Reactor," DOE-HTGR-86004 (OPDS), has been completed and is hereby distributed for use by the HTGR Program team members. This document, Revision 9 of the "Overall Plant Design Specification" (OPDS), reflects those changes in the MHTGR design requirements and configuration resulting from approved Design Change Proposals DCP BNI-003 and DCP BNI-004, involving the Nuclear Island Cooling and Spent Fuel Cooling Systems, respectively.

  19. A New Chest Compression Depth Feedback Algorithm for High-Quality CPR Based on Smartphone

    PubMed Central

    Song, Yeongtak; Oh, Jaehoon

    2015-01-01

Abstract Background: Although many smartphone application (app) programs provide education and guidance for basic life support, they do not commonly provide feedback on the chest compression depth (CCD) and rate, and validation of their accuracy has not been reported to date. This study was a feasibility assessment of the use of a smartphone as a CCD feedback device. In this study, we proposed the concept of a new real-time CCD estimation algorithm using a smartphone and evaluated the accuracy of the algorithm. Materials and Methods: Using the double integration of the acceleration signal, which was obtained from the accelerometer in the smartphone, we estimated the CCD in real time. Based on its periodicity, we removed the bias error from the accelerometer. To evaluate this instrument's accuracy, we used a potentiometer as the reference depth measurement. The evaluation experiments included three levels of CCD (insufficient, adequate, and excessive) and four types of grasping orientations with various compression directions. We used the difference between the reference measurement and the estimated depth as the error, calculated for each compression. Results: When chest compressions were performed with adequate depth for a patient lying on a flat floor, the mean (standard deviation) of the errors was 1.43 (1.00) mm. When the patient was lying on an oblique floor, the mean (standard deviation) of the errors was 3.13 (1.88) mm. Conclusions: The error of the CCD estimation was tolerable for the algorithm to be used in the smartphone-based CCD feedback app to compress more than 51 mm, which is the 2010 American Heart Association guideline. PMID:25402865
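
    Depth estimation by double integration of the acceleration signal can be sketched conceptually as follows; a simple linear detrend per cycle stands in for the periodicity-based bias removal described in the record, and the 100 Hz signal and bias value are synthetic assumptions.

      import numpy as np
      from scipy.integrate import cumulative_trapezoid
      from scipy.signal import detrend

      fs = 100.0                                   # assumed sample rate (Hz)
      t = np.arange(0.0, 0.5, 1.0 / fs)            # one ~2 Hz compression cycle
      true_depth = 0.025 * (1 - np.cos(2 * np.pi * 2 * t))        # 0-50 mm, in metres
      accel = np.gradient(np.gradient(true_depth, t), t) + 0.05   # add an accelerometer bias

      vel = detrend(cumulative_trapezoid(accel, t, initial=0.0))  # integrate, remove drift
      depth = detrend(cumulative_trapezoid(vel, t, initial=0.0))  # integrate again
      print(f"estimated peak-to-peak depth: {1e3 * (depth.max() - depth.min()):.1f} mm")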

  20. Centroiding algorithms for high speed crossed strip readout of microchannel plate detectors.

    PubMed

    Vallerga, John; Tremsin, Anton; Raffanti, Rick; Siegmund, Oswald

    2011-05-01

    Imaging microchannel plate (MCP) detectors with cross strip (XS) readout anodes require centroiding algorithms to determine the location of the amplified charge cloud from the incident radiation, be it photon or particle. We have developed a massively parallel XS readout electronic system that employs an amplifier and ADC for each strip and uses this digital data to calculate the centroid of each event in real time using a field programmable gate array (FPGA). Doing the calculations in real time in the front end electronics using an FPGA enables a much higher input event rate, nearly two orders of magnitude faster, by avoiding the bandwidth limitations of the raw data transfer to a computer. We report on our detailed efforts to optimize the algorithms used on both an 18 mm and 40 mm diameter XS MCP detector with strip pitch of 640 microns and read out with multiple 32 channel "Preshape32" ASIC amplifiers (developed at Rutherford Appleton Laboratory). Each strip electrode is continuously digitized to 12 bits at 50 MHz with all 64 digital channels (128 for the 40 mm detector) transferred to a Xilinx Virtex 5 FPGA. We describe how events are detected in the continuous data stream and then multiplexed into firmware modules that spatially and temporally filter and weight the input after applying offset and gain corrections. We will contrast a windowed "center of gravity" algorithm to a convolution with a special centroiding kernel in terms of resolution and distortion and show results with < 20 microns FWHM resolution at input rates > 1 MHz. PMID:21918588
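
    The windowed "center of gravity" estimator contrasted in the abstract can be illustrated with a short Python/NumPy sketch; the window size, the baseline handling, and the function name are assumptions for illustration, not the FPGA firmware itself:

      import numpy as np

      def cog_centroid(strip_charges, window=5):
          """Windowed center-of-gravity centroid for one cross-strip event.

          strip_charges: 1-D array of baseline-corrected charge per strip.
          window: odd number of strips, centred on the peak strip, used in the sum.
          Returns the event position in units of the strip pitch.
          """
          q = np.asarray(strip_charges, dtype=float)
          peak = int(np.argmax(q))
          half = window // 2
          lo, hi = max(0, peak - half), min(q.size, peak + half + 1)
          idx = np.arange(lo, hi)
          w = np.clip(q[lo:hi], 0.0, None)   # ignore negative noise excursions
          # Charge-weighted mean strip index within the window.
          return float((idx * w).sum() / w.sum())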

  1. An oscillograms processing algorithm of a high power transformer on the basis of experimental data

    NASA Astrophysics Data System (ADS)

    Vasileva, O. V.; Budko, A. A.; Lavrinovich, A. V.

    2016-04-01

    The paper presents studies on digital processing of oscillograms of power transformer operation that allow the state of the windings to be determined for different types and degrees of damage. The study was carried out according to the authors' own methods using Fourier analysis and a program developed with the following application software packages: MathCAD and LabVIEW. The efficiency of the algorithm was demonstrated using the waveforms of non-defective and defective transformers obtained by the method of nanosecond pulses.
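
    The Fourier-analysis step can be illustrated with a brief Python/NumPy sketch that compares the amplitude spectrum of a test oscillogram against a reference (healthy-winding) waveform; the windowing and the deviation metric are assumptions for illustration, not the authors' MathCAD/LabVIEW program:

      import numpy as np

      def amplitude_spectrum(waveform, fs):
          """Return the frequency axis and amplitude spectrum of an oscillogram."""
          w = np.asarray(waveform, dtype=float)
          spec = np.abs(np.fft.rfft(w * np.hanning(w.size)))   # windowed FFT
          freq = np.fft.rfftfreq(w.size, d=1.0 / fs)
          return freq, spec

      def spectrum_deviation(reference, test, fs):
          """Scalar measure of how much a test oscillogram's spectrum deviates
          from the reference (non-defective) spectrum; larger = more damage."""
          _, ref = amplitude_spectrum(reference, fs)
          _, tst = amplitude_spectrum(test, fs)
          ref = ref / ref.max()
          tst = tst / tst.max()
          return float(np.linalg.norm(tst - ref) / np.linalg.norm(ref))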

  2. Cargo identification algorithms facilitating unmanned/unattended inspection at high throughput portals

    NASA Astrophysics Data System (ADS)

    Chalmers, Alex

    2007-10-01

    A simple model is presented of a possible inspection regimen applied to each leg of a cargo container's journey between its point of origin and destination. Several candidate modalities are proposed to be used at multiple remote locations to act as a pre-screen inspection as the target approaches a perimeter and as the primary inspection modality at the portal. Information from multiple data sets is fused to optimize the costs and performance of a network of such inspection systems. A series of image processing algorithms is presented that automatically process X-ray images of containerized cargo. The goal of this processing is to locate the container in a real-time stream of traffic traversing a portal without impeding the flow of commerce. Such processing may facilitate the inclusion of unmanned/unattended inspection systems in such a network. Several samples of the processing applied to data collected from deployed systems are included. Simulated data from a notional cargo inspection system with multiple sensor modalities and advanced data fusion algorithms are also included to show the potential increased detection and throughput performance of such a configuration.

  3. Wide Operating Temperature Range Electrolytes for High Voltage and High Specific Energy Li-Ion Cells

    NASA Technical Reports Server (NTRS)

    Smart, M. C.; Hwang, C.; Krause, F. C.; Soler, J.; West, W. C.; Ratnakumar, B. V.; Amine, K.

    2012-01-01

    A number of electrolyte formulations that have been designed to operate over a wide temperature range have been investigated in conjunction with layered-layered metal oxide cathode materials developed at Argonne. In this study, we have evaluated a number of electrolytes in Li-ion cells consisting of Conoco Phillips A12 graphite anodes and Toda HE5050 Li(1.2)Ni(0.15)Co(0.10)Mn(0.55)O2 cathodes. The electrolytes studied consisted of LiPF6 in carbonate-based electrolytes that contain ester co-solvents with various solid electrolyte interphase (SEI) promoting additives, many of which have been demonstrated to perform well in 4V systems. More specifically, we have investigated the performance of a number of methyl butyrate (MB) containing electrolytes (i.e., LiPF6 in ethylene carbonate (EC) + ethyl methyl carbonate (EMC) + MB (20:20:60 v/v %)) that contain various additives, including vinylene carbonate, lithium oxalate, and lithium bis(oxalato)borate (LiBOB). When these systems were evaluated at various rates at low temperatures, the methyl butyrate-based electrolytes resulted in improved rate capability compared to cells with all-carbonate-based formulations. It was also ascertained that, in contrast to the traditionally used LiNi(0.80)Co(0.15)Al(0.05)O2-based systems, the generally poor rate capability at low temperature is governed by slow cathode kinetics rather than being strongly influenced by the electrolyte type.

  4. Brief Report: Exploratory Analysis of the ADOS Revised Algorithm--Specificity and Predictive Value with Hispanic Children Referred for Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-01-01

    This study compared Autism diagnostic observation schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module…

  5. Hybrid-PIC Modeling of a High-Voltage, High-Specific-Impulse Hall Thruster

    NASA Technical Reports Server (NTRS)

    Smith, Brandon D.; Boyd, Iain D.; Kamhawi, Hani; Huang, Wensheng

    2013-01-01

    The primary life-limiting mechanism of Hall thrusters is the sputter erosion of the discharge channel walls by high-energy propellant ions. Because of the difficulty involved in characterizing this erosion experimentally, many past efforts have focused on numerical modeling to predict erosion rates and thruster lifespan, but those analyses were limited to Hall thrusters operating in the 200-400V discharge voltage range. Thrusters operating at higher discharge voltages (V(sub d) >= 500 V) present an erosion environment that may differ greatly from that of the lower-voltage thrusters modeled in the past. In this work, HPHall, a well-established hybrid-PIC code, is used to simulate NASA's High-Voltage Hall Accelerator (HiVHAc) at discharge voltages of 300, 400, and 500V as a first step towards modeling the discharge channel erosion. It is found that the model accurately predicts the thruster performance at all operating conditions to within 6%. The model predicts a normalized plasma potential profile that is consistent between all three operating points, with the acceleration zone appearing in the same approximate location. The expected trend of increasing electron temperature with increasing discharge voltage is observed. An analysis of the discharge current oscillations shows that the model predicts oscillations that are much greater in amplitude than those measured experimentally at all operating points, suggesting that the differences in oscillation amplitude are not strongly associated with discharge voltage.

  6. Application of wavelet neural network model based on genetic algorithm in the prediction of high-speed railway settlement

    NASA Astrophysics Data System (ADS)

    Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing

    2015-12-01

    With the advantages of high speed, large transport capacity, low energy consumption, good economic benefits and so on, high-speed railways are becoming more and more popular all over the world. Operating speeds can reach 350 kilometers per hour, which demands high safety performance. Research on the prediction of high-speed railway settlement, one of the important factors affecting the safety of high-speed railways, therefore becomes particularly important. This paper uses a genetic algorithm to search the parameter space for the best solution and combines it with the strong learning ability and high accuracy of the wavelet neural network to build a genetic wavelet neural network model for the prediction of high-speed railway settlement. Experiments with a back-propagation neural network, a wavelet neural network and the genetic wavelet neural network show that the absolute values of the residual errors in the prediction of high-speed railway settlement are smallest for the genetic algorithm based model, which indicates that the genetic wavelet neural network is better than the other two methods. The correlation coefficient of predicted and observed values is 99.9%. Furthermore, the maximum absolute value of the residual error, the minimum absolute value of the residual error, the mean relative error and the root mean squared error (RMSE) predicted by the genetic wavelet neural network are all smaller than those of the other two methods. The genetic wavelet neural network is therefore both more stable and more accurate in the prediction of high-speed railway settlement.
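
    As an illustration of the genetic-algorithm component, the following Python/NumPy sketch shows a generic real-valued GA loop that could be used to tune model parameters (for example, wavelet-network weights); the population size, crossover, mutation scale, and the example fitness are assumptions, not the authors' implementation:

      import numpy as np

      rng = np.random.default_rng(0)

      def genetic_search(fitness, dim, pop_size=40, generations=100,
                         mutation_scale=0.1, elite=4):
          """Minimize `fitness` over a real-valued parameter vector of length
          `dim` with a simple elitist genetic algorithm."""
          pop = rng.normal(size=(pop_size, dim))
          for _ in range(generations):
              scores = np.array([fitness(ind) for ind in pop])
              order = np.argsort(scores)                # lower fitness is better
              parents = pop[order[:pop_size // 2]]
              children = []
              while len(children) < pop_size - elite:
                  a, b = parents[rng.integers(len(parents), size=2)]
                  mask = rng.random(dim) < 0.5          # uniform crossover
                  child = np.where(mask, a, b)
                  child = child + rng.normal(scale=mutation_scale, size=dim)  # mutation
                  children.append(child)
              pop = np.vstack([pop[order[:elite]], children])
          scores = np.array([fitness(ind) for ind in pop])
          return pop[np.argmin(scores)]

      # Hypothetical usage: tune the weights w of a linear settlement predictor
      # best_w = genetic_search(lambda w: ((X @ w - y) ** 2).mean(), dim=X.shape[1])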

  7. Hollow carbon nanofiber-encapsulated sulfur cathodes for high specific capacity rechargeable lithium batteries.

    PubMed

    Zheng, Guangyuan; Yang, Yuan; Cha, Judy J; Hong, Seung Sae; Cui, Yi

    2011-10-12

    Sulfur has a high specific capacity of 1673 mAh/g as lithium battery cathodes, but its rapid capacity fading due to polysulfides dissolution presents a significant challenge for practical applications. Here we report a hollow carbon nanofiber-encapsulated sulfur cathode for effective trapping of polysulfides and demonstrate experimentally high specific capacity and excellent electrochemical cycling of the cells. The hollow carbon nanofiber arrays were fabricated using anodic aluminum oxide (AAO) templates, through thermal carbonization of polystyrene. The AAO template also facilitates sulfur infusion into the hollow fibers and prevents sulfur from coating onto the exterior carbon wall. The high aspect ratio of the carbon nanofibers provides an ideal structure for trapping polysulfides, and the thin carbon wall allows rapid transport of lithium ions. The small dimension of these nanofibers provides a large surface area per unit mass for Li(2)S deposition during cycling and reduces pulverization of electrode materials due to volumetric expansion. A high specific capacity of about 730 mAh/g was observed at C/5 rate after 150 cycles of charge/discharge. The introduction of LiNO(3) additive to the electrolyte was shown to improve the Coulombic efficiency to over 99% at C/5. The results show that the hollow carbon nanofiber-encapsulated sulfur structure could be a promising cathode design for rechargeable Li/S batteries with high specific energy. PMID:21916442

  8. Evaluation of an algorithm for semiautomated segmentation of thin tissue layers in high-frequency ultrasound images.

    PubMed

    Qiu, Qiang; Dunmore-Buyze, Joy; Boughner, Derek R; Lacefield, James C

    2006-02-01

    An algorithm consisting of speckle reduction by median filtering, contrast enhancement using top- and bottom-hat morphological filters, and segmentation with a discrete dynamic contour (DDC) model was implemented for nondestructive measurements of soft tissue layer thickness. Algorithm performance was evaluated by segmenting simulated images of three-layer phantoms and high-frequency (40 MHz) ultrasound images of porcine aortic valve cusps in vitro. The simulations demonstrated the necessity of the median and morphological filtering steps and enabled testing of user-specified parameters of the morphological filters and DDC model. In the experiments, six cusps were imaged in coronary perfusion solution (CPS) then in distilled water to test the algorithm's sensitivity to changes in the dimensions of thin tissue layers. Significant increases in the thickness of the fibrosa, spongiosa, and ventricularis layers, by 53.5% (p < 0.001), 88.5% (p < 0.001), and 35.1% (p = 0.033), respectively, were observed when the specimens were submerged in water. The intraobserver coefficient of variation of repeated thickness estimates ranged from 0.044 for the fibrosa in water to 0.164 for the spongiosa in CPS. Segmentation accuracy and variability depended on the thickness and contrast of the layers, but the modest variability provides confidence in the thickness measurements. PMID:16529107
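
    The preprocessing pipeline (median filtering followed by top- and bottom-hat contrast enhancement) can be sketched in a few lines of Python using SciPy and scikit-image; the filter sizes and the way the two morphological residues are combined are illustrative assumptions, not the authors' exact settings:

      import numpy as np
      from scipy.ndimage import median_filter
      from skimage.morphology import disk, white_tophat, black_tophat

      def preprocess_ultrasound(image, median_size=3, se_radius=7):
          """Speckle reduction and contrast enhancement sketch for a B-mode image.

          Median filtering suppresses speckle; adding the top-hat (bright details)
          and subtracting the bottom-hat (dark details) sharpens thin-layer
          boundaries before contour-based segmentation (e.g., a DDC model).
          """
          img = np.asarray(image, dtype=float)
          smoothed = median_filter(img, size=median_size)
          se = disk(se_radius)                      # structuring element
          enhanced = smoothed + white_tophat(smoothed, se) - black_tophat(smoothed, se)
          return np.clip(enhanced, img.min(), img.max())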

  9. High-Resolution Functional Connectivity Density: Hub Locations, Sensitivity, Specificity, Reproducibility, and Reliability.

    PubMed

    Tomasi, Dardo; Shokri-Kojori, Ehsan; Volkow, Nora D

    2016-07-01

    Brain regions with high connectivity have high metabolic cost and their disruption is associated with neuropsychiatric disorders. Prior neuroimaging studies have identified, at the group level, local functional connectivity density (lFCD) hubs, network nodes with a high degree of connectivity with neighboring regions, in occipito-parietal cortices. However, the individual patterns and the precision of the location of the hubs were limited by the restricted spatiotemporal resolution of the magnetic resonance imaging (MRI) measures collected at rest. In this work, we show that MRI datasets with higher spatiotemporal resolution (2-mm isotropic; 0.72 s), collected under the Human Connectome Project (HCP), provide a significantly higher precision for hub localization and for the first time reveal lFCD patterns with gray matter (GM) specificity >96% and sensitivity >75%. High temporal resolution allowed effective 0.01-0.08 Hz band-pass filtering, significantly reducing spurious lFCD effects in white matter. These high spatiotemporal resolution lFCD measures had high reliability [intraclass correlation, ICC(3,1) > 0.6] but lower reproducibility (>67%) than the low spatiotemporal resolution equivalents. GM sensitivity and specificity benchmarks showed the robustness of lFCD to changes in model parameters and preprocessing steps. Mapping individuals' brain hubs with high sensitivity, specificity, and reproducibility supports the use of lFCD as a biomarker for clinical applications in neuropsychiatric disorders. PMID:26223259

  10. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive

  11. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, i.e., the low- and high-frequency matrices, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
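
    A simplified sketch of the transform front-end (step 1, plus a DCT applied to the coarse approximation) using PyWavelets and SciPy is shown below; the wavelet choice and the single DCT stage on the LL band are assumptions made for brevity and do not reproduce the full Minimize-Matrix-Size/FMS codec:

      import numpy as np
      import pywt
      from scipy.fft import dctn, idctn

      def two_level_dwt_dct(image, wavelet="db4"):
          """Transform stage sketch: a two-level 2-D DWT, then a DCT of the
          coarsest approximation (playing the role of the "DC-Matrix"); the
          detail subbands stand in for the high-frequency "AC" data."""
          img = np.asarray(image, dtype=float)
          coeffs = pywt.wavedec2(img, wavelet=wavelet, level=2)
          approx, details_lvl2, details_lvl1 = coeffs[0], coeffs[1], coeffs[2]
          dc_matrix = dctn(approx, norm="ortho")      # second transform on the LL band
          return dc_matrix, details_lvl2, details_lvl1

      def inverse_two_level(dc_matrix, details_lvl2, details_lvl1, wavelet="db4"):
          """Invert the sketch above: inverse DCT, then inverse two-level DWT."""
          approx = idctn(dc_matrix, norm="ortho")
          return pywt.waverec2([approx, details_lvl2, details_lvl1], wavelet=wavelet)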

  12. Novel method for the high-throughput production of phosphorylation site-specific monoclonal antibodies.

    PubMed

    Kurosawa, Nobuyuki; Wakata, Yuka; Inobe, Tomonao; Kitamura, Haruki; Yoshioka, Megumi; Matsuzawa, Shun; Kishi, Yoshihiro; Isobe, Masaharu

    2016-01-01

    Threonine phosphorylation accounts for 10% of all phosphorylation sites compared with 0.05% for tyrosine and 90% for serine. Although monoclonal antibody generation for phospho-serine and -tyrosine proteins is progressing, there has been limited success regarding the production of monoclonal antibodies against phospho-threonine proteins. We developed a novel strategy for generating phosphorylation site-specific monoclonal antibodies by cloning immunoglobulin genes from single plasma cells that were fixed, intracellularly stained with fluorescently labeled peptides and sorted without causing RNA degradation. Our high-throughput fluorescence activated cell sorting-based strategy, which targets abundant intracellular immunoglobulin as a tag for fluorescently labeled antigens, greatly increases the sensitivity and specificity of antigen-specific plasma cell isolation, enabling the high-efficiency production of monoclonal antibodies with desired antigen specificity. This approach yielded yet-undescribed guinea pig monoclonal antibodies against threonine 18-phosphorylated p53 and threonine 68-phosphorylated CHK2 with high affinity and specificity. Our method has the potential to allow the generation of monoclonal antibodies against a variety of phosphorylated proteins. PMID:27125496

  13. Novel method for the high-throughput production of phosphorylation site-specific monoclonal antibodies

    PubMed Central

    Kurosawa, Nobuyuki; Wakata, Yuka; Inobe, Tomonao; Kitamura, Haruki; Yoshioka, Megumi; Matsuzawa, Shun; Kishi, Yoshihiro; Isobe, Masaharu

    2016-01-01

    Threonine phosphorylation accounts for 10% of all phosphorylation sites compared with 0.05% for tyrosine and 90% for serine. Although monoclonal antibody generation for phospho-serine and -tyrosine proteins is progressing, there has been limited success regarding the production of monoclonal antibodies against phospho-threonine proteins. We developed a novel strategy for generating phosphorylation site-specific monoclonal antibodies by cloning immunoglobulin genes from single plasma cells that were fixed, intracellularly stained with fluorescently labeled peptides and sorted without causing RNA degradation. Our high-throughput fluorescence activated cell sorting-based strategy, which targets abundant intracellular immunoglobulin as a tag for fluorescently labeled antigens, greatly increases the sensitivity and specificity of antigen-specific plasma cell isolation, enabling the high-efficiency production of monoclonal antibodies with desired antigen specificity. This approach yielded yet-undescribed guinea pig monoclonal antibodies against threonine 18-phosphorylated p53 and threonine 68-phosphorylated CHK2 with high affinity and specificity. Our method has the potential to allow the generation of monoclonal antibodies against a variety of phosphorylated proteins. PMID:27125496

  14. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    SciTech Connect

    Moura, Eduardo S.; Rostelato, Maria Elisa C. M.; Zeituni, Carlos A.

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR 192Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% were found from the homogeneous setup when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The

  15. Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin

    2012-06-01

    This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. Comparisons of Distributed Arithmetic (DA), Common Sub-Expression (CSE) sharing and n-dimensional Reduced Adder Graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product are provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis design data are used for a realistic comparison. Superior results of a genetic algorithm based optimization of pipeline registers and non-output fundamental coefficients are shown. FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.

  16. a Pressure-Based Algorithm for High-Speed Turbomachinery Flows

    NASA Astrophysics Data System (ADS)

    Politis, E. S.; Giannakoglou, K. C.

    1997-07-01

    The steady-state Navier-Stokes equations are solved in transonic flows using an elliptic formulation. A segregated solution algorithm is established in which the pressure correction equation is utilized to enforce the divergence-free mass flux constraint. The momentum equations are solved in terms of the primitive variables, while the pressure correction field is used to update both the convecting mass flux components and the pressure itself. The velocity components are deduced from the corrected mass fluxes on the basis of an upwind-biased density, which is a mechanism capable of overcoming the ellipticity of the system of equations in the transonic flow regime. An incomplete LU decomposition is used for the solution of the transport-type equations and a globally minimized residual method resolves the pressure correction equation. Turbulence is resolved through the k-ε model. Dealing with turbomachinery applications, results are presented for two-dimensional compressor and turbine cascades under design and off-design conditions.

  17. Highly Specific Detection of Five Exotic Quarantine Plant Viruses using RT-PCR.

    PubMed

    Choi, Hoseong; Cho, Won Kyong; Yu, Jisuk; Lee, Jong-Seung; Kim, Kook-Hyung

    2013-03-01

    To detect five plant viruses (Beet black scorch virus, Beet necrotic yellow vein virus, Eggplant mottled dwarf virus, Pelargonium zonate spot virus, and Rice yellow mottle virus) for quarantine purposes, we designed 15 RT-PCR primer sets. Primer design was based on the nucleotide sequence of the coat protein gene, which is highly conserved within species. All but one primer set successfully amplified the targets, and gradient PCRs indicated that the optimal temperature for the 14 useful primer sets was 51.9°C. Some primer sets worked well regardless of annealing temperature while others required a very specific annealing temperature. A primer specificity test using plant total RNAs and cDNAs of other plant virus-infected samples demonstrated that the designed primer sets were highly specific and generated reproducible results. The newly developed RT-PCR primer sets would be useful for quarantine inspections aimed at preventing the entry of exotic plant viruses into Korea. PMID:25288934

  18. Structure-based Design of Peptides with High Affinity and Specificity to HER2 Positive Tumors

    PubMed Central

    Geng, Lingling; Wang, Zihua; Yang, Xiaoliang; Li, Dan; Lian, Wenxi; Xiang, Zhichu; Wang, Weizhi; Bu, Xiangli; Lai, Wenjia; Hu, Zhiyuan; Fang, Qiaojun

    2015-01-01

    To identify peptides with high affinity and specificity against human epidermal growth factor receptor 2 (HER2), a series of peptides were designed based on the structure of HER2 and its Z(HER2:342) affibody. By using a combination protocol of molecular dynamics modeling, MM/GBSA binding free energy calculations, and binding free energy decomposition analysis, two novel peptides with 27 residues, pep27 and pep27-24M, were successfully obtained. Immunocytochemistry and flow cytometry analysis verified that both peptides can specifically bind to the extracellular domain of HER2 protein at cellular level. The Surface Plasmon Resonance imaging (SPRi) analysis showed that dissociation constants (KD) of these two peptides were around 300 nmol/L. Furthermore, fluorescence imaging of peptides against nude mice xenografted with SKBR3 cells indicated that both peptides have strong affinity and high specificity to HER2 positive tumors. PMID:26284145

  19. Porous silicon structures with high surface area/specific pore size

    DOEpatents

    Northrup, M. Allen; Yu, Conrad M.; Raley, Norman F.

    1999-01-01

    Fabrication and use of porous silicon structures to increase surface area of heated reaction chambers, electrophoresis devices, and thermopneumatic sensor-actuators, chemical preconcentrates, and filtering or control flow devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters.

  20. Porous silicon structures with high surface area/specific pore size

    DOEpatents

    Northrup, M.A.; Yu, C.M.; Raley, N.F.

    1999-03-16

    Fabrication and use of porous silicon structures to increase surface area of heated reaction chambers, electrophoresis devices, and thermopneumatic sensor-actuators, chemical preconcentrates, and filtering or control flow devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters. 9 figs.

  1. Highly specific detection of genetic modification events using an enzyme-linked probe hybridization chip.

    PubMed

    Zhang, M Z; Zhang, X F; Chen, X M; Chen, X; Wu, S; Xu, L L

    2015-01-01

    The enzyme-linked probe hybridization chip utilizes a method based on ligase-hybridizing probe chip technology, with the principle of using thio-primers for protection against enzyme digestion, and using lambda DNA exonuclease to cut multiple PCR products obtained from the sample being tested into single-strand chains for hybridization. The 5'-end amino-labeled probe was fixed onto the aldehyde chip, and hybridized with the single-stranded PCR product, followed by addition of a fluorescent-modified probe that was then enzymatically linked with the adjacent, substrate-bound probe in order to achieve highly specific, parallel, and high-throughput detection. Specificity and sensitivity testing demonstrated that enzyme-linked probe hybridization technology could be applied to the specific detection of eight genetic modification events at the same time, with a sensitivity reaching 0.1% and the achievement of accurate, efficient, and stable results. PMID:26345863

  2. Structure-based Design of Peptides with High Affinity and Specificity to HER2 Positive Tumors.

    PubMed

    Geng, Lingling; Wang, Zihua; Yang, Xiaoliang; Li, Dan; Lian, Wenxi; Xiang, Zhichu; Wang, Weizhi; Bu, Xiangli; Lai, Wenjia; Hu, Zhiyuan; Fang, Qiaojun

    2015-01-01

    To identify peptides with high affinity and specificity against human epidermal growth factor receptor 2 (HER2), a series of peptides were designed based on the structure of HER2 and its Z(HER2:342) affibody. By using a combination protocol of molecular dynamics modeling, MM/GBSA binding free energy calculations, and binding free energy decomposition analysis, two novel peptides with 27 residues, pep27 and pep27-24M, were successfully obtained. Immunocytochemistry and flow cytometry analysis verified that both peptides can specifically bind to the extracellular domain of HER2 protein at cellular level. The Surface Plasmon Resonance imaging (SPRi) analysis showed that dissociation constants (KD) of these two peptides were around 300 nmol/L. Furthermore, fluorescence imaging of peptides against nude mice xenografted with SKBR3 cells indicated that both peptides have strong affinity and high specificity to HER2 positive tumors. PMID:26284145

  3. A potent and highly specific FN3 monobody inhibitor of the Abl SH2 domain

    SciTech Connect

    Wojcik, John; Hantschel, Oliver; Grebien, Florian; Kaupe, Ines; Bennett, Keiryn L.; Barkinge, John; Jones, Richard B.; Koide, Akiko; Superti-Furga, Giulio; Koide, Shohei

    2010-09-02

    Interactions between Src homology 2 (SH2) domains and phosphotyrosine sites regulate tyrosine kinase signaling networks. Selective perturbation of these interactions is challenging due to the high homology among the 120 human SH2 domains. Using an improved phage-display selection system, we generated a small antibody mimic (or 'monobody'), termed HA4, that bound to the Abelson (Abl) kinase SH2 domain with low nanomolar affinity. SH2 protein microarray analysis and MS of intracellular HA4 interactors showed HA4's specificity, and a crystal structure revealed how this specificity is achieved. HA4 disrupted intramolecular interactions of Abl involving the SH2 domain and potently activated the kinase in vitro. Within cells, HA4 inhibited processive phosphorylation activity of Abl and also inhibited STAT5 activation. This work provides a design guideline for highly specific and potent inhibitors of a protein interaction domain and shows their utility in mechanistic and cellular investigations.

  4. Chemical synthesis of nucleoside-gamma-[32P]triphosphates of high specific activity.

    PubMed

    Janecka, A; Panusz, H; Pankowski, J; Koziołkiewicz, W

    1980-01-01

    A simple chemical procedure for the preparation of four common ribonucleoside 5'-[gamma-32P]triphosphates of high specific activity (up to 10 Ci/mmol), based on the condensation of orthophosphoric acid with the corresponding nucleoside 5'-diphosphate in the presence of ethyl chloroformate, as well as the methods of purification and identification of the products, are described. PMID:7375446

  5. Effects of Collaborative Preteaching on Science Performance of High School Students with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Thornton, Amanda; McKissick, Bethany R.; Spooner, Fred; Lo, Ya-yu; Anderson, Adrienne L.

    2015-01-01

    Investigating the effectiveness of inclusive practices in science instruction and determining how to best support high school students with specific learning disabilities (SLD) in the general education classroom is a topic of increasing research attention in the field. In this study, the researchers conducted a single-subject multiple probe across…

  6. EXPLOITATION OF THE HIGH AFFINITY AND SPECIFICITY OF PROTEINS IN WASTE STREAM TREATMENT

    EPA Science Inventory

    The purpose of the research was to test the feasibility of using immobilized proteins as highly specific adsorbers of pollutants in waste streams. The Escherichia coli periplasmic phosphate-binding protein served as both a model system for determining the feasibility of such an a...

  7. 75 FR 33731 - Atlantic Highly Migratory Species; 2010 Atlantic Bluefin Tuna Quota Specifications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration 50 CFR Part 635 RIN 0648-AY77 Atlantic Highly Migratory Species; 2010 Atlantic Bluefin Tuna Quota Specifications Correction In rule document 2010-13207...

  8. Using the SCR Specification Technique in a High School Programming Course.

    ERIC Educational Resources Information Center

    Rosen, Edward; McKim, James C., Jr.

    1992-01-01

    Presents the underlying ideas of the Software Cost Reduction (SCR) approach to requirements specifications. Results of applying this approach to the teaching of programming to high school students indicate that students perform better in writing programs. An appendix provides two examples of how the method is applied to problem solving. (MDH)

  9. PODOPHYLLUM PELTATUM POSSESSES A BETA-GLUCOSIDASE WITH HIGH SUBSTRATE SPECIFICITY FOR THE ARYLTETRALIN LIGNAN PODOPHYLLOTOXIN

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A beta-glucosidase with high specificity for podophyllotoxin-4-O-β-D-glucopyranoside was purified from the leaves of Podophyllum peltatum. The 65 kD polypeptide had optimum activity at pH 5.0 and was essentially inactive at physiological pH (6.5 or above). The maximum catalytic activity of this glu...

  10. Computations of two passing-by high-speed trains by a relaxation overset-grid algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jenn-Long

    2004-04-01

    This paper presents a relaxation algorithm, which is based on the overset grid technology, an unsteady three-dimensional Navier-Stokes flow solver, and an inner- and outer-relaxation method, for simulation of the unsteady flows of moving high-speed trains. The flow solutions on the overlapped grids can be accurately updated by introducing a grid tracking technique and the inner- and outer-relaxation method. To evaluate the capability and solution accuracy of the present algorithm, the computational static pressure distribution of a single stationary TGV high-speed train inside a long tunnel is investigated numerically, and is compared with the experimental data from low-speed wind tunnel test. Further, the unsteady flows of two TGV high-speed trains passing by each other inside a long tunnel and at the tunnel entrance are simulated. A series of time histories of pressure distributions and aerodynamic loads acting on the train and tunnel surfaces are depicted for detailed discussions.

  11. A high-speed, high-efficiency phase controller for coherent beam combining based on SPGD algorithm

    SciTech Connect

    Huang, Zh M; Liu, C L; Li, J F; Zhang, D Y

    2014-04-28

    A phase controller for coherent beam combining (CBC) of fibre lasers has been designed and manufactured based on a stochastic parallel gradient descent (SPGD) algorithm and a field programmable gate array (FPGA). The theoretical analysis shows that the iteration rate is higher than 1.9 MHz, and the average compensation bandwidth of CBC for 5 or 20 channels is 50 kHz or 12.5 kHz, respectively. The tests show that the phase controller ensures reliable phase locking of lasers: when the phases of five lasers are locked by the improved control strategy with a variable gain, the energy encircled in the target is increased 23-fold compared with that of a single output, the phase control accuracy is better than λ/20, and the combining efficiency is 92%. (control of laser radiation parameters)
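
    The SPGD iteration itself is compact enough to sketch in Python; the gain, the perturbation amplitude, and the two callables standing in for the photodetector metric and the phase modulators are assumptions for illustration (a real controller runs this loop in FPGA firmware):

      import numpy as np

      rng = np.random.default_rng(1)

      def spgd_lock(measure_metric, apply_phases, n_channels,
                    gain=0.5, perturbation=0.1, iterations=5000):
          """Minimal SPGD phase-locking loop.

          measure_metric(): returns the metric J (e.g., power in the far-field
          bucket) for the phases currently applied.
          apply_phases(phi): writes the phase vector to the modulators.
          """
          phi = np.zeros(n_channels)
          for _ in range(iterations):
              delta = perturbation * rng.choice([-1.0, 1.0], size=n_channels)
              apply_phases(phi + delta)
              j_plus = measure_metric()
              apply_phases(phi - delta)
              j_minus = measure_metric()
              # Stochastic gradient estimate: ascend to maximize the metric.
              phi += gain * (j_plus - j_minus) * delta
              apply_phases(phi)
          return phi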

  12. Study on very high speed Reed-Solomon decoders using modified euclidean algorithm for volume holographic storage

    NASA Astrophysics Data System (ADS)

    Wu, Fei; Xie, Changsheng; Liu, ZhaoBin

    2003-04-01

    Volume holography is currently the subject of widespread interest as a fast-readout-rate, high-capacity digital data-storage technology. However, the effects of cross-talk noise, scattering noise, and noise gratings formed during a multiple exposure schedule introduce many burst errors and random errors into the system. Reed-Solomon error-correction codes have been widely used to protect digital data against errors. This paper presents VLSI implementations of a 16-error-correcting (255, 223) Reed-Solomon decoder architecture for volume holographic storage. We describe Reed-Solomon decoders using the modified Euclidean algorithm, which is regular and simple, and naturally suitable for VLSI implementation. We design fast multiplication for GF(2^8) and a pipelined structure to manage hardware complexity and achieve a high data processing rate for the Reed-Solomon decoders. We adopt a high-speed FPGA and achieve a data processing rate of 200 Mbit/s.
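
    A basic building block of such decoders, multiplication in GF(2^8), can be sketched in a few lines of Python; the primitive polynomial 0x11d is a common choice assumed here for illustration and is not necessarily the one used by the authors:

      def gf256_mul(a, b, poly=0x11d):
          """Multiply two elements of GF(2^8) by shift-and-add (Russian peasant
          method), reducing modulo the field's primitive polynomial `poly`."""
          result = 0
          while b:
              if b & 1:
                  result ^= a          # addition in GF(2^m) is XOR
              b >>= 1
              a <<= 1
              if a & 0x100:            # degree reached 8: reduce modulo poly
                  a ^= poly
          return result

      # Example: gf256_mul(0x57, 0x83) == 0x31 with poly 0x11d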

  13. High-energy mode-locked fiber lasers using multiple transmission filters and a genetic algorithm.

    PubMed

    Fu, Xing; Kutz, J Nathan

    2013-03-11

    We theoretically demonstrate that in a laser cavity mode-locked by nonlinear polarization rotation (NPR) using sets of waveplates and passive polarizer, the energy performance can be significantly increased by incorporating multiple NPR filters. The NPR filters are engineered so as to mitigate the multi-pulsing instability in the laser cavity which is responsible for limiting the single pulse per round trip energy in a myriad of mode-locked cavities. Engineering of the NPR filters for performance is accomplished by implementing a genetic algorithm that is capable of systematically identifying viable and optimal NPR settings in a vast parameter space. Our study shows that five NPR filters can increase the cavity energy by approximately a factor of five, with additional NPRs contributing little or no enhancements beyond this. With the advent and demonstration of electronic controls for waveplates and polarizers, the analysis suggests a general design and engineering principle that can potentially close the order of magnitude energy gap between fiber based mode-locked lasers and their solid state counterparts. PMID:23482223

  14. Analysis of high resolution FTIR spectra from synchrotron sources using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    van Wijngaarden, Jennifer; Desmond, Durell; Leo Meerts, W.

    2015-09-01

    Room temperature Fourier transform infrared spectra of the four-membered heterocycle trimethylene sulfide were collected with a resolution of 0.00096 cm-1 using synchrotron radiation from the Canadian Light Source from 500 to 560 cm-1. The in-plane ring deformation mode (ν13) at ∼529 cm-1 exhibits dense rotational structure due to the presence of ring inversion tunneling and leads to a doubling of all transitions. Preliminary analysis of the experimental spectrum was pursued via traditional methods involving assignment of quantum numbers to individual transitions in order to conduct least squares fitting to determine the spectroscopic parameters. Following this approach, the assignment of 2358 transitions led to the experimental determination of an effective Hamiltonian. This model describes transitions in the P and R branches to J‧ = 60 and Ka‧ = 10 that connect the tunneling split ground and vibrationally excited states of the ν13 band although a small number of low intensity features remained unassigned. The use of evolutionary algorithms (EA) for automated assignment was explored in tandem and yielded a set of spectroscopic constants that re-create this complex experimental spectrum to a similar degree. The EA routine was also applied to the previously well-understood ring puckering vibration of another four-membered ring, azetidine (Zaporozan et al., 2010). This test provided further evidence of the robust nature of the EA method when applied to spectra for which the underlying physics is well understood.

  15. Highly efficient numerical algorithm based on random trees for accelerating parallel Vlasov-Poisson simulations

    NASA Astrophysics Data System (ADS)

    Acebrón, Juan A.; Rodríguez-Rozas, Ángel

    2013-10-01

    An efficient numerical method based on a probabilistic representation for the Vlasov-Poisson system of equations in Fourier space has been derived. This has been done theoretically for arbitrary-dimensional problems, and particularized to one-dimensional problems for numerical purposes. Such a representation has been validated theoretically in the linear regime by comparing the solution obtained with the classical results of linear Landau damping. The numerical strategy followed requires generating suitable random trees combined with a Padé approximant for accurately approximating a given divergent series. Such series are obtained by summing the partial contributions to the solution coming from trees with an arbitrary number of branches. These contributions, coming in general from multi-dimensional definite integrals, are efficiently computed by a quasi-Monte Carlo method. It is shown how the accuracy of the method can be effectively increased by considering more terms of the series. The new representation was used successfully to develop a Probabilistic Domain Decomposition method suited for massively parallel computers, which improves the scalability found in classical methods. Finally, a few numerical examples based on classical phenomena such as non-linear Landau damping and the two-stream instability are given, illustrating the remarkable performance of the algorithm when the results are compared with those obtained using a classical method.
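
    The series-resummation step via Padé approximants can be illustrated with a short SciPy sketch; the exponential series below is only a stand-in for the divergent series generated by the random trees:

      import math
      from scipy.interpolate import pade

      def pade_sum(taylor_coeffs, m, x):
          """Evaluate the [n/m] Pade approximant built from the Taylor
          coefficients of a (possibly divergent) power series at x."""
          p, q = pade(taylor_coeffs, m)   # numerator / denominator as poly1d
          return p(x) / q(x)

      # Stand-in example: resumming the series of exp(x) at x = 1
      coeffs = [1.0 / math.factorial(k) for k in range(6)]
      print(pade_sum(coeffs, m=2, x=1.0))  # ~2.718, close to e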

  16. Improving TCP throughput performance on high-speed networks with a receiver-side adaptive acknowledgment algorithm

    NASA Astrophysics Data System (ADS)

    Yeung, Wing-Keung; Chang, Rocky K. C.

    1998-12-01

    A drastic TCP performance degradation was reported when TCP is operated over ATM networks. This deadlock problem is 'caused' by the high speed provided by the ATM networks; it is therefore shared by any high-speed networking technology on which TCP is run. The problems are caused by the interaction of the sender-side and receiver-side Silly Window Syndrome (SWS) avoidance algorithms, because the network's Maximum Segment Size (MSS) is no longer small when compared with the sender and receiver socket buffer sizes. Here we propose a new receiver-side adaptive acknowledgment algorithm (RSA3) to eliminate the deadlock problems while maintaining the SWS avoidance mechanisms. Unlike the current delayed acknowledgment strategy, the RSA3 does not rely on the exact values of the MSS and the receiver's buffer size to determine the acknowledgment threshold. Instead, the RSA3 periodically probes the sender to estimate the maximum amount of data that can be sent without receiving an acknowledgment from the receiver. The acknowledgment threshold is computed as 35 percent of the estimate. In this way, deadlock-free TCP transmission is guaranteed. Simulation studies have shown that the RSA3 even improves the throughput performance in some non-deadlock regions. This is due to a quicker response taken by the RSA3 receiver. We have also evaluated different acknowledgment thresholds. It is found that a threshold of 35 percent gives the best performance when the sender and receiver buffer sizes are large.
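
    The receiver-side decision rule described above reduces to a small amount of logic; the following Python sketch is an illustration under assumed names (the probing machinery that produces the window estimate is omitted), not the authors' simulator:

      def rsa3_ack_threshold(estimated_window_bytes, fraction=0.35):
          """Adaptive acknowledgment threshold: a fixed fraction (35% per the
          abstract) of the receiver's estimate of how much data the sender can
          transmit without an acknowledgment."""
          return int(fraction * estimated_window_bytes)

      def should_ack(bytes_unacked, estimated_window_bytes, delayed_ack_timer_expired):
          """Acknowledge once the unacknowledged data crosses the adaptive
          threshold, or when the delayed-ACK timer fires as a fallback."""
          return (bytes_unacked >= rsa3_ack_threshold(estimated_window_bytes)
                  or delayed_ack_timer_expired)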

  17. Direct glass bonded high specific power silicon solar cells for space applications

    NASA Technical Reports Server (NTRS)

    Dinetta, L. C.; Rand, J. A.; Cummings, J. R.; Lampo, S. M.; Shreve, K. P.; Barnett, Allen M.

    1991-01-01

    A lightweight, radiation hard, high performance, ultra-thin silicon solar cell is described that incorporates light trapping and a cover glass as an integral part of the device. The manufacturing feasibility of high specific power, radiation insensitive, thin silicon solar cells was demonstrated experimentally and with a model. Ultra-thin, light trapping structures were fabricated and the light trapping demonstrated experimentally. The design uses a micro-machined, grooved back surface to increase the optical path length by a factor of 20. This silicon solar cell will be highly tolerant to radiation because the base width is less than 25 microns making it insensitive to reduction in minority carrier lifetime. Since the silicon is bonded without silicone adhesives, this solar cell will also be insensitive to UV degradation. These solar cells are designed as a form, fit, and function replacement for existing state of the art silicon solar cells with the effect of simultaneously increasing specific power, power/area, and power supply life. Using a 3-mil thick cover glass and a 0.3 g/sq cm supporting Al honeycomb, a specific power for the solar cell plus cover glass and honeycomb of 80.2 W/Kg is projected. The development of this technology can result in a revolutionary improvement in high survivability silicon solar cell products for space with the potential to displace all existing solar cell technologies for single junction space applications.

  18. Phthalonitrile-Based Carbon Foam with High Specific Mechanical Strength and Superior Electromagnetic Interference Shielding Performance.

    PubMed

    Zhang, Liying; Liu, Ming; Roy, Sunanda; Chu, Eng Kee; See, Kye Yak; Hu, Xiao

    2016-03-23

    Materials with high electromagnetic interference (EMI) shielding performance are urgently needed to relieve the increasing electromagnetic pollution arising from the growing demand for electronic and electrical devices. In this work, a novel ultralight (0.15 g/cm(3)) carbon foam was prepared by direct carbonization of phthalonitrile (PN)-based polymer foam, aiming to simultaneously achieve high EMI shielding effectiveness (SE) and deliver effective weight reduction without detrimental reduction of the mechanical properties. The carbon foam prepared by this method had a specific compressive strength of ∼6.0 MPa·cm(3)/g. A high EMI SE of ∼51.2 dB was achieved, contributed by its intrinsic nitrogen-containing structure (3.3 wt% of nitrogen atoms). The primary EMI shielding mechanism of the carbon foam was determined to be absorption. Moreover, the carbon foams showed an excellent specific EMI SE of 341.1 dB·cm(3)/g, which was at least 2 times higher than that of most reported materials. The remarkable EMI shielding performance combined with high specific compressive strength indicates that the carbon foam could be considered a low-density, high-performance EMI shielding material for use in areas where mechanical integrity is desired. PMID:26910405

  19. Characterization of specific high affinity receptors for human tumor necrosis factor on mouse fibroblasts

    SciTech Connect

    Hass, P.E.; Hotchkiss, A.; Mohler, M.; Aggarwal, B.B.

    1985-10-05

    Mouse L-929 fibroblasts, an established line of cells, are very sensitive to lysis by human lymphotoxin (hTNF-beta). Specific binding of a highly purified preparation of hTNF-beta to these cells was examined. Recombinant DNA-derived hTNF-beta was radiolabeled with (3H)propionyl succinimidate at the lysine residues of the molecule to a specific activity of 200 microCi/nmol of protein. (3H)hTNF-beta was purified by high performance gel permeation chromatography and the major fraction was found to be monomeric by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The labeled hTNF-beta was fully active in causing lysis of L-929 fibroblasts and bound specifically to high affinity binding sites on these cells. Scatchard analysis of the binding data revealed the presence of a single class of high affinity receptors with an apparent Kd of 6.7 X 10(-11) M and a capacity of 3200 binding sites/cell. Unlabeled recombinant DNA-derived hTNF-beta was found to be an approximately 5-fold more effective competitive inhibitor of binding than the natural hTNF-beta. The binding of hTNF-beta to these mouse fibroblasts was also correlated with the ultimate cell lysis. Neutralizing polyclonal antibodies to hTNF-beta efficiently inhibited the binding of (3H)hTNF-beta to the cells. The authors conclude that the specific high affinity binding site is the receptor for hTNF-beta and may be involved in lysis of cells.

  20. Cyclotron production of "very high specific activity" platinum radiotracers in No Carrier Added form

    NASA Astrophysics Data System (ADS)

    Birattari, C.; Bonardi, M.; Groppi, F.; Gini, L.; Gallorini, M.; Sabbioni, E.; Stroosnijder, M. F.

    2001-12-01

    At the "Radiochemistry Laboratory" of the Accelerators and Applied Superconductivity Laboratory, LASA, several production and quality assurance methods for short-lived and high specific activity radionuclides have been developed. Presently, the irradiations are carried out at the Scanditronix MC40 cyclotron (K=38; p, d, He-4 and He-3) of JRC-Ispra, Italy, of the European Community, while both chemical purity and specific activity determinations are carried out at the TRIGA MARK II research reactor of the University of Pavia and at LASA itself. In order to optimize the irradiation conditions for platinum radiotracer production, both thin- and thick-target excitation functions of natOs(α,xn) nuclear reactions were measured. A very selective radiochemical separation to obtain Pt radiotracers in No Carrier Added form has been developed. Both the real specific activity and the chemical purity of the radiotracers have been determined by neutron activation analysis and atomic absorption spectrometry. An Isotopic Dilution Factor (IDF) of the order of 50 is achieved.

  1. Synthesis of high specific activity (1-3H) farnesyl pyrophosphate

    SciTech Connect

    Saljoughian, M.; Morimoto, H.; Williams, P.G.

    1991-08-01

    The synthesis of tritiated farnesyl pyrophosphate with high specific activity is reported. trans-trans Farnesol was oxidized to the corresponding aldehyde followed by reduction with lithium aluminium tritide (5%-3H) to give trans-trans (1-3H)farnesol. The specific radioactivity of the alcohol was determined from its triphenylsilane derivative, prepared under very mild conditions. The tritiated alcohol was phosphorylated by initial conversion to an allylic halide, and subsequent treatment of the halide with tris-tetra-n-butylammonium hydrogen pyrophosphate. The hydride procedure followed in this work has advantages over existing methods for the synthesis of tritiated farnesyl pyrophosphate, with the possibility of higher specific activity and a much higher yield obtained. 10 refs., 3 figs.

  2. A Rapid In-Clinic Test Detects Acute Leptospirosis in Dogs with High Sensitivity and Specificity.

    PubMed

    Kodjo, Angeli; Calleja, Christophe; Loenser, Michael; Lin, Dan; Lizer, Joshua

    2016-01-01

    A rapid IgM-detection immunochromatographic test (WITNESS® Lepto, Zoetis) has recently become available to identify acute canine leptospirosis at the point of care. Diagnostic sensitivity and specificity of the test were evaluated by comparison with the microscopic agglutination assay (MAT), using a positive cut-off titer of ≥800. Banked serum samples from dogs exhibiting clinical signs and suspected leptospirosis were selected to form three groups based on MAT titer: (1) positive (n = 50); (2) borderline (n = 35); and (3) negative (n = 50). Using an analysis to weight group sizes to reflect French prevalence, the sensitivity and specificity were 98% and 93.5% (88.2% unweighted), respectively. This test rapidly identifies cases of acute canine leptospirosis with high levels of sensitivity and specificity with no interference from previous vaccination. PMID:27110562

  3. A Rapid In-Clinic Test Detects Acute Leptospirosis in Dogs with High Sensitivity and Specificity

    PubMed Central

    Kodjo, Angeli; Calleja, Christophe; Loenser, Michael; Lin, Dan; Lizer, Joshua

    2016-01-01

    A rapid IgM-detection immunochromatographic test (WITNESS® Lepto, Zoetis) has recently become available to identify acute canine leptospirosis at the point of care. Diagnostic sensitivity and specificity of the test were evaluated by comparison with the microscopic agglutination assay (MAT), using a positive cut-off titer of ≥800. Banked serum samples from dogs exhibiting clinical signs and suspected leptospirosis were selected to form three groups based on MAT titer: (1) positive (n = 50); (2) borderline (n = 35); and (3) negative (n = 50). Using an analysis to weight group sizes to reflect French prevalence, the sensitivity and specificity were 98% and 93.5% (88.2% unweighted), respectively. This test rapidly identifies cases of acute canine leptospirosis with high levels of sensitivity and specificity with no interference from previous vaccination. PMID:27110562

  4. Scattering rates and specific heat jumps in high-Tc cuprates

    NASA Astrophysics Data System (ADS)

    Storey, James

    Inspired by recent ARPES and tunneling studies on high-Tc cuprates, we examine the effect of a pair-breaking term in the self-energy on the shape of the electronic specific heat jump. It is found that the observed specific heat jump can be described in terms of a superconducting gap that persists above the observed Tc, in the presence of a strongly temperature dependent pair-breaking scattering rate. An increase in the scattering rate is found to explain the non-BCS-like suppression of the specific heat jump with magnetic field. A discussion of these results in the context of other properties such as the superfluid density and Raman spectra will also be presented. Supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand.
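
    One standard way to make the pair-breaking idea concrete (a sketch under an assumed Dynes-type broadening, not necessarily the formulation used in this work) is to broaden the BCS quasiparticle density of states with a temperature-dependent scattering rate Γ(T) and compute the electronic entropy and specific heat from it; a larger Γ rounds off and suppresses the jump at Tc:

```latex
% Sketch (assumed Dynes-type pair breaking, not necessarily this work's model):
% broadened quasiparticle density of states and the electronic entropy and
% specific heat computed from it; a larger Gamma(T) suppresses the jump at Tc.
\begin{align}
  N(\omega,T) &= N_0\,\mathrm{Re}\!\left[\frac{\omega + i\Gamma(T)}
      {\sqrt{\left(\omega + i\Gamma(T)\right)^{2} - \Delta^{2}(T)}}\right],\\[4pt]
  S_{\mathrm{el}}(T) &= -2k_{B}\int_{-\infty}^{\infty}\!\mathrm{d}\omega\,
      N(\omega,T)\,\bigl[f\ln f + (1-f)\ln(1-f)\bigr],
      \qquad f=\frac{1}{e^{\omega/k_{B}T}+1},\\[4pt]
  C_{\mathrm{el}}(T) &= T\,\frac{\partial S_{\mathrm{el}}}{\partial T}.
\end{align}
```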

  5. Hydrazide functionalized core-shell magnetic nanocomposites for highly specific enrichment of N-glycopeptides.

    PubMed

    Liu, Liting; Yu, Meng; Zhang, Ying; Wang, Changchun; Lu, Haojie

    2014-05-28

    In view of the biological significance of glycosylation for human health, profiling of the glycoproteome from complex biological samples is central to the discovery of disease biomarkers and to clinical diagnosis. Nevertheless, because glycopeptides exist at relatively low abundances compared with nonglycosylated peptides and exhibit glycan microheterogeneity, they need to be highly selectively enriched from complex biological samples for mass spectrometry analysis. Herein, a new type of hydrazide functionalized core-shell magnetic nanocomposite has been synthesized for highly specific enrichment of N-glycopeptides. The nanocomposites, with both a magnetic core and a polymer shell bearing a high density of hydrazide groups, were prepared by first functionalizing the magnetic core with poly(methacrylic acid) by reflux precipitation polymerization to obtain Fe3O4@poly(methacrylic acid) (Fe3O4@PMAA) and then modifying the surface of Fe3O4@PMAA with adipic acid dihydrazide (ADH) to obtain Fe3O4@poly(methacrylic hydrazide) (Fe3O4@PMAH). The abundant hydrazide groups for highly specific enrichment of glycopeptides and the magnetic core make it suitable for large-scale, high-throughput, and automated sample processing. In addition, the hydrophilic polymer surface can provide low nonspecific adsorption of other peptides. Compared to commercially available hydrazide resin, Fe3O4@PMAH improved the signal-to-noise ratio of standard glycopeptides by more than 5-fold. Finally, this nanocomposite was applied in the profiling of the N-glycoproteome from colorectal cancer patient serum. In total, 175 unique glycopeptides and 181 glycosylation sites corresponding to 63 unique glycoproteins were identified in three repeated experiments, with the specificities of the enriched glycopeptides and corresponding glycoproteins of 69.6% and 80.9%, respectively. Because of all these attractive features, we believe that this novel hydrazide functionalized

  6. Real-time algorithms for optimal CCD data reduction in high energy astronomy

    NASA Astrophysics Data System (ADS)

    Welch, S. J.

    2001-08-01

    This thesis presents novel and reusable algorithms and philosophies for the reduction of data produced by CCD detectors used for space astronomy. Some of the techniques described can be extended to other two-dimensional data sets, and all of them have relevance beyond the particular spacecraft on which they are currently being used. The author began the work described in this thesis in January 1995, looking at ways in which the data produced from a spectroscopic instrument on the XMM-Newton spacecraft could be reduced sufficiently to fit into the comparatively meagre telemetry bandwidth available to it. The work was also constrained by the use of a processor system with many fewer resources available than ideal, but chosen for its reliability and tolerance to radiation, both important factors in a ten-year mission. Chapter one introduces the need for spacecraft onboard data reduction, and the XMM-Newton spacecraft and its instruments. Chapter two focuses on the principles of operation of CCDs, briefly considering the sources of noise that affect them in use. Chapter three examines the mechanics of the onboard software designed by the author, and arguments are made for trading data quality against data quantity. Chapter four describes the construction of a standalone software instrument simulator used to quantify the quality of the existing onboard software, provide feedback on the settings used, and analyse the impact of future modifications. Chapter five presents results from the testing of the onboard software and early data from the commissioning phase of XMM-Newton. The thesis concludes with some suggestions for further improvements to the onboard software, and hints at possible applications to other observational scenarios involving large data-sets.

  7. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Quarteroni, Alfio

    2015-10-01

    In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of the "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version that effectively removes the stagnation phenomenon while automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDEs), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.
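
    The sketch below illustrates only the greedy reduced-basis ingredient on a toy parametric function, using the exact projection error as the error indicator; the weighted a posteriori estimator and the dimension-adaptive sparse grid of the paper are omitted, and the toy problem and all names are assumptions for illustration.

```python
import numpy as np

# Toy parametric "solution" u(mu): a vector depending on one parameter mu.
def solve_full(mu, n=200):
    x = np.linspace(0.0, 1.0, n)
    return np.exp(-mu * x) * np.sin(np.pi * x)

def greedy_reduced_basis(candidates, tol=1e-6, max_basis=20):
    """Greedy selection: repeatedly add the snapshot worst-approximated by the current basis."""
    basis, chosen = [], []
    while len(basis) < max_basis:
        worst_err, worst_mu = -1.0, None
        for mu in candidates:
            u = solve_full(mu)
            if basis:
                B = np.column_stack(basis)
                u_proj = B @ (B.T @ u)          # orthogonal projection onto span(basis)
            else:
                u_proj = np.zeros_like(u)
            err = np.linalg.norm(u - u_proj)    # exact error used as a stand-in estimator
            if err > worst_err:
                worst_err, worst_mu = err, mu
        if worst_err < tol:
            break
        u_new = solve_full(worst_mu)
        if basis:
            B = np.column_stack(basis)
            u_new = u_new - B @ (B.T @ u_new)   # Gram-Schmidt against current basis
        basis.append(u_new / np.linalg.norm(u_new))
        chosen.append(worst_mu)
    return np.column_stack(basis), chosen

B, selected = greedy_reduced_basis(np.linspace(0.1, 10.0, 50))
print(len(selected), "basis functions; first selected parameters:", selected[:5])
```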

  8. Polyaniline nanofibers with a high specific surface area and an improved pore structure for supercapacitors

    NASA Astrophysics Data System (ADS)

    Xu, Hailing; Li, Xingwei; Wang, Gengchao

    2015-10-01

    Polyaniline (PANI) with a high specific surface area and an improved pore structure (HSSA-PANI) has been prepared by using a facile method, treating PANI nanofibers with chloroform (CHCl3), and its structure, morphology and pore structure are investigated. The specific surface area and pore volume of HSSA-PANI are 817.3 m2 g-1 and 0.6 cm3 g-1, and those of PANI are 33.6 m2 g-1 and 0.2 cm3 g-1. For electrode materials, a large specific surface area and pore volume can provide more electroactive regions, accelerate the diffusion of ions, and mitigate the electrochemical degradation of active materials. As the current density increases from 5.0 to 30 A g-1, the capacity retention of HSSA-PANI is 90%, whereas that of PANI is 29%. At a current density of 30 A g-1, the specific capacitance of HSSA-PANI still reaches 278.3 F g-1, and that of PANI is 86.7 F g-1. At a current density of 5.0 A g-1, the capacitance retention of HSSA-PANI is 53.1% after 2000 cycles, and that of the PANI electrode is only 28.1%.

  9. High Transferability of Homoeolog-Specific Markers between Bread Wheat and Newly Synthesized Hexaploid Wheat Lines.

    PubMed

    Zeng, Deying; Luo, Jiangtao; Li, Zenglin; Chen, Gang; Zhang, Lianquan; Ning, Shunzong; Yuan, Zhongwei; Zheng, Youliang; Hao, Ming; Liu, Dengcai

    2016-01-01

    Bread wheat (Triticum aestivum, 2n = 6x = 42, AABBDD) has a complex allohexaploid genome, which makes it difficult to differentiate between the homoeologous sequences and assign them to the chromosome A, B, or D subgenomes. The chromosome-based draft genome sequence of the 'Chinese Spring' common wheat cultivar enables the large-scale development of polymerase chain reaction (PCR)-based markers specific for homoeologs. Based on high-confidence 'Chinese Spring' genes with known functions, we developed 183 putative homoeolog-specific markers for chromosomes 4B and 7B. These markers were used in PCR assays for the 4B and 7B nullisomes and their euploid synthetic hexaploid wheat (SHW) line that was newly generated from a hybridization between Triticum turgidum (AABB) and the wild diploid species Aegilops tauschii (DD). Up to 64% of the markers for chromosomes 4B or 7B in the SHW background were confirmed to be homoeolog-specific. Thus, these markers were highly transferable between the 'Chinese Spring' bread wheat and SHW lines. Homoeolog-specific markers designed using genes with known functions may be useful for genetic investigations involving homoeologous chromosome tracking and homoeolog expression and interaction analyses. PMID:27611704

  10. A novel and highly specific phage endolysin cell wall binding domain for detection of Bacillus cereus.

    PubMed

    Kong, Minsuk; Sim, Jieun; Kang, Taejoon; Nguyen, Hoang Hiep; Park, Hyun Kyu; Chung, Bong Hyun; Ryu, Sangryeol

    2015-09-01

    Rapid, specific and sensitive detection of pathogenic bacteria is crucial for public health and safety. Bacillus cereus is harmful as it causes foodborne illness and a number of systemic and local infections. We report a novel phage endolysin cell wall-binding domain (CBD) for B. cereus and the development of a highly specific and sensitive surface plasmon resonance (SPR)-based B. cereus detection method using the CBD. The newly discovered CBD from endolysin of PBC1, a B. cereus-specific bacteriophage, provides high specificity and binding capacity to B. cereus. By using the CBD-modified SPR chips, B. cereus can be detected at the range of 10(5)-10(8) CFU/ml. More importantly, the detection limit can be improved to 10(2) CFU/ml by using a subtractive inhibition assay based on the pre-incubation of B. cereus and CBDs, removal of CBD-bound B. cereus, and SPR detection of the unbound CBDs. The present study suggests that the small and genetically engineered CBDs can be promising biological probes for B. cereus. We anticipate that the CBD-based SPR-sensing methods will be useful for the sensitive, selective, and rapid detection of B. cereus. PMID:26043681

  11. Highly sensitive and specific colorimetric detection of cancer cells via dual-aptamer target binding strategy.

    PubMed

    Wang, Kun; Fan, Daoqing; Liu, Yaqing; Wang, Erkang

    2015-11-15

    Simple, rapid, sensitive and specific detection of cancer cells is of great importance for early and accurate cancer diagnostics and therapy. By coupling nanotechnology and dual-aptamer target binding strategies, we developed a colorimetric assay for visually detecting cancer cells with high sensitivity and specificity. The nanotechnology, including the high catalytic activity of PtAuNP and magnetic separation and concentration, plays a vital role in signal amplification and improves the detection sensitivity. The color change caused by a small amount of target cancer cells (10 cells/mL) can be clearly distinguished by the naked eye. The dual-aptamer target binding strategy guarantees the detection specificity, in that large amounts of non-cancer cells and different cancer cells (10(4) cells/mL) cannot cause an obvious color change. A detection limit as low as 10 cells/mL with a linear detection range from 10 to 10(5) cells/mL was reached in experimental detections in phosphate buffer solution as well as in a serum sample. The developed enzyme-free and cost-effective colorimetric assay is simple, requires no instrumentation, and still provides excellent sensitivity, specificity and repeatability, with potential application in point-of-care cancer diagnosis. PMID:26042871

  12. Precise detection of L. monocytogenes hitting its highly conserved region possessing several specific antibody binding sites.

    PubMed

    Jahangiri, Abolfazl; Rasooli, Iraj; Reza Rahbar, Mohammad; Khalili, Saeed; Amani, Jafar; Ahmadi Zanoos, Kobra

    2012-07-21

    Listeria monocytogenes, a facultative intracellular fast-growing Gram-positive food-borne pathogen, can infect immunocompromised individuals leading to meningitis, meningoencephalitis and septicaemias. From the pool of virulence factors of the organism, ActA, a membrane protein, has a critical role in the life cycle of L. monocytogenes. The high mortality rate of listeriosis necessitates a sensitive and rapid diagnostic test for precise identification of L. monocytogenes. We used bioinformatic tools to locate a specific conserved region of ActA for designing and developing an antibody-antigen based diagnostic test for the detection of L. monocytogenes. A number of databases were searched for ActA-related sequences. Sequences were analyzed with several online software tools to find an appropriate region for our purpose. The ActA protein was found to be specific to Listeria species with no homologs in other organisms. We finally introduced a highly conserved region within the ActA sequence that possesses several antibody binding sites specific to L. monocytogenes. This protein sequence can serve as an antigen for designing a relatively cheap, sensitive, and specific diagnostic test for detection of L. monocytogenes. PMID:22575546
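
    A minimal sketch of the kind of conservation analysis described: score each alignment column by the fraction of sequences sharing the majority residue and report windows whose average conservation exceeds a threshold. The aligned fragments, window length, and threshold below are placeholders, not the study's data or tools.

```python
from collections import Counter

def column_conservation(aligned_seqs):
    """Fraction of sequences sharing the most common residue at each alignment column."""
    n = len(aligned_seqs)
    scores = []
    for i in range(len(aligned_seqs[0])):
        col = [s[i] for s in aligned_seqs if s[i] != "-"]
        scores.append(Counter(col).most_common(1)[0][1] / n if col else 0.0)
    return scores

def conserved_windows(scores, win=20, min_score=0.95):
    """Windows whose average conservation is at least min_score."""
    hits = []
    for start in range(len(scores) - win + 1):
        avg = sum(scores[start:start + win]) / win
        if avg >= min_score:
            hits.append((start, start + win, avg))
    return hits

# Placeholder pre-aligned fragments of equal length (a real run would use an ActA alignment).
alignment = [
    "MGLNRFMRAMMVVFITANCITINPDIIFA",
    "MGLNRFMRAMMVVFITANCITINPDIIFA",
    "MGLNRFMRAMMVVFITANCITINPDIIFA",
    "MGLNRFMRAMMLVFITANCITINPDIIFA",
]
scores = column_conservation(alignment)
print(conserved_windows(scores, win=10, min_score=0.9)[:3])
```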

  13. Selective culling of high avidity antigen-specific CD4+ T cells after virulent Salmonella infection

    PubMed Central

    Ertelt, James M; Johanns, Tanner M; Mysz, Margaret A; Nanton, Minelva R; Rowe, Jared H; Aguilera, Marijo N; Way, Sing Sing

    2011-01-01

    Typhoid fever is a persistent infection caused by host-adapted Salmonella strains adept at circumventing immune-mediated host defences. Given the importance of T cells in protection, the culling of activated CD4+ T cells after primary infection has been proposed as a potential immune evasion strategy used by this pathogen. We demonstrate that the purging of activated antigen-specific CD4+ T cells after virulent Salmonella infection requires SPI-2 encoded virulence determinants, and is not restricted only to cells with specificity to Salmonella-expressed antigens, but extends to CD4+ T cells primed to expand by co-infection with recombinant Listeria monocytogenes. Unexpectedly, however, the loss of activated CD4+ T cells during Salmonella infection demonstrated using a monoclonal population of adoptively transferred CD4+ T cells was not reproduced among the endogenous repertoire of antigen-specific CD4+ T cells identified with MHC class II tetramer. Analysis of T-cell receptor variable segment usage revealed the selective loss and reciprocal enrichment of defined CD4+ T-cell subsets after Salmonella co-infection that is associated with the purging of antigen-specific cells with the highest intensity of tetramer staining. Hence, virulent Salmonella triggers the selective culling of high avidity activated CD4+ T-cell subsets, which re-shapes the repertoire of antigen-specific T cells that persist later after infection. PMID:22044420

  14. Specific binding of eukaryotic ORC to DNA replication origins depends on highly conserved basic residues.

    PubMed

    Kawakami, Hironori; Ohashi, Eiji; Kanamoto, Shota; Tsurimoto, Toshiki; Katayama, Tsutomu

    2015-01-01

    In eukaryotes, the origin recognition complex (ORC) heterohexamer preferentially binds replication origins to trigger initiation of DNA replication. Crystallographic studies using eubacterial and archaeal ORC orthologs suggested that eukaryotic ORC may bind to origin DNA via putative winged-helix DNA-binding domains and AAA+ ATPase domains. However, the mechanisms by which eukaryotic ORC recognizes origin DNA remain elusive. Here, we show in budding yeast that Lys-362 and Arg-367 residues of the largest subunit (Orc1), both outside the aforementioned domains, are crucial for specific binding of ORC to origin DNA. These basic residues, which reside in a putative disordered domain, were dispensable for interaction with ATP and non-specific DNA sequences, suggesting a specific role in recognition. Consistent with this, both residues were required for origin binding of Orc1 in vivo. A truncated Orc1 polypeptide containing these residues recognizes the ARS sequence with low affinity, and the Arg-367 residue stimulates the sequence-specific binding mode of the polypeptide. Lys-362 and Arg-367 residues of Orc1 are highly conserved among eukaryotic ORCs, but not in eubacterial and archaeal orthologs, suggesting a eukaryote-specific mechanism underlying recognition of replication origins by ORC. PMID:26456755

  15. Specific binding of eukaryotic ORC to DNA replication origins depends on highly conserved basic residues

    PubMed Central

    Kawakami, Hironori; Ohashi, Eiji; Kanamoto, Shota; Tsurimoto, Toshiki; Katayama, Tsutomu

    2015-01-01

    In eukaryotes, the origin recognition complex (ORC) heterohexamer preferentially binds replication origins to trigger initiation of DNA replication. Crystallographic studies using eubacterial and archaeal ORC orthologs suggested that eukaryotic ORC may bind to origin DNA via putative winged-helix DNA-binding domains and AAA+ ATPase domains. However, the mechanisms by which eukaryotic ORC recognizes origin DNA remain elusive. Here, we show in budding yeast that Lys-362 and Arg-367 residues of the largest subunit (Orc1), both outside the aforementioned domains, are crucial for specific binding of ORC to origin DNA. These basic residues, which reside in a putative disordered domain, were dispensable for interaction with ATP and non-specific DNA sequences, suggesting a specific role in recognition. Consistent with this, both residues were required for origin binding of Orc1 in vivo. A truncated Orc1 polypeptide containing these residues recognizes the ARS sequence with low affinity, and the Arg-367 residue stimulates the sequence-specific binding mode of the polypeptide. Lys-362 and Arg-367 residues of Orc1 are highly conserved among eukaryotic ORCs, but not in eubacterial and archaeal orthologs, suggesting a eukaryote-specific mechanism underlying recognition of replication origins by ORC. PMID:26456755

  16. Highly specific olfactory receptor neurons for types of amino acids in the channel catfish.

    PubMed

    Nikonov, Alexander A; Caprio, John

    2007-10-01

    Odorant specificity to l-alpha-amino acids was determined electrophysiologically for 93 single catfish olfactory receptor neurons (ORNs) selected for their narrow excitatory molecular response range (EMRR) to only one type of amino acid (i.e., Group I units). These units were excited by either a basic amino acid, a neutral amino acid with a long side chain, or a neutral amino acid with a short side chain when tested at 10(-7) to 10(-5) M. Stimulus-induced inhibition, likely for contrast enhancement, was primarily observed in response to the types of amino acid stimuli different from that which activated a specific ORN. The high specificity of single Group I ORNs to type of amino acid was also previously observed for single Group I neurons in both the olfactory bulb and forebrain of the same species. These results indicate that for Group I neurons olfactory information concerning specific types of amino acids is processed from receptor neurons through mitral cells of the olfactory bulb to higher forebrain neurons without significant alteration in unit odorant specificity. PMID:17686913

  17. Facile synthesis of boronic acid-functionalized magnetic carbon nanotubes for highly specific enrichment of glycopeptides

    NASA Astrophysics Data System (ADS)

    Ma, Rongna; Hu, Junjie; Cai, Zongwei; Ju, Huangxian

    2014-02-01

    A stepwise strategy was developed to synthesize boronic acid functionalized magnetic carbon nanotubes (MCNTs) for highly specific enrichment of glycopeptides. The MCNTs were synthesized by a solvothermal reaction of Fe3+ loaded on the acid-treated CNTs and modified with 1-pyrenebutanoic acid N-hydroxysuccinimidyl ester (PASE) to bind aminophenylboronic acid (APBA) via an amide reaction. The introduction of PASE could bridge the MCNT and APBA, suppress the nonspecific adsorption and reduce the steric hindrance among the bound molecules. Due to the excellent structure of the MCNTs, the functionalization of PASE and then APBA on MCNTs was quite simple, specific and effective. The glycopeptides enrichment and separation with a magnetic field could be achieved by their reversible covalent binding with the boronic group of APBA-MCNTs. The exceptionally large specific surface area and the high density of boronic acid groups of APBA-MCNTs resulted in rapid and highly efficient enrichment of glycopeptides, even in the presence of large amounts of interfering nonglycopeptides. The functional MCNTs possessed high selectivity for enrichment of 21 glycopeptides from the digest of horseradish peroxidase demonstrated by MALDI-TOF mass spectrometric analysis showing more glycopeptides detected than the usual 9 glycopeptides with commercially available APBA-agarose. The proposed system showed better specificity for glycopeptides even in the presence of non-glycopeptides with 50 times higher concentration. The boronic acid functionalized MCNTs provide a promising selective enrichment platform for precise glycoproteomic analysis.

  18. Development of a highly sensitive and specific immunoassay for enrofloxacin based on heterologous coating haptens.

    PubMed

    Wang, Zhanhui; Zhang, Huiyan; Ni, Hengjia; Zhang, Suxia; Shen, Jianzhong

    2014-04-11

    In this paper, an enzyme-linked immunosorbent assay (ELISA) for the detection of enrofloxacin is described using a new derivative of enrofloxacin as the coating hapten, resulting in surprisingly high sensitivity and specificity. Incorporation of aminobutyric acid (AA) in the new derivative of enrofloxacin decreased the IC50 of the ELISA for enrofloxacin from 1.3 μg L(-1) to as low as 0.07 μg L(-1). The assay showed negligible cross-reactivity for other fluoroquinolones except ofloxacin (8.23%), marbofloxacin (8.97%) and pefloxacin (7.29%). Analysis of enrofloxacin-fortified chicken muscle showed average recoveries from 81 to 115%. The high sensitivity and specificity of the assay make it a suitable screening method for the determination of low levels of enrofloxacin in chicken muscle without a clean-up step. PMID:24745749
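
    The abstract reports IC50 values; one common way to obtain them from competitive ELISA readings is a four-parameter logistic fit of the standard curve. The sketch below uses synthetic calibration data and scipy's curve_fit; the study's actual fitting procedure is not specified here.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, hill):
    """Four-parameter logistic curve for a competitive ELISA standard curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Synthetic calibration data: absorbance vs enrofloxacin concentration (ug/L).
conc = np.array([0.001, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
absorbance = four_pl(conc, top=1.8, bottom=0.1, ic50=0.07, hill=1.0)
absorbance += np.random.default_rng(1).normal(0.0, 0.02, conc.size)   # measurement noise

params, _ = curve_fit(four_pl, conc, absorbance, p0=(2.0, 0.0, 0.1, 1.0))
print(f"estimated IC50 ~ {params[2]:.3f} ug/L")
```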

  19. Development of a high specific 1.5 to 5 kW thermal arcjet

    NASA Technical Reports Server (NTRS)

    Riehle, M.; Glocker, B.; Auweter-Kurtz, M.; Kurtz, H.

    1993-01-01

    A research and development project on the experimental study of a 1.5-5 kW thermal arcjet thruster was started in 1992 at the IRS. Two radiation cooled thrusters were designed, constructed, and adapted to the test facilities, one at each end of the intended power range. These thrusters are currently subjected to an intensive test program with main emphasis on the exploration of thruster performance and thruster behavior at high specific enthalpy and thus high specific impulse. Propelled by simulated hydrazine and ammonia, the thrusters' electrode configuration, such as constrictor diameter and cathode gap, was varied in order to investigate its influence and to optimize these parameters. In addition, test runs with pure hydrogen were performed for both thrusters.

  20. High specificity of a novel Zika virus ELISA in European patients after exposure to different flaviviruses.

    PubMed

    Huzly, Daniela; Hanselmann, Ingeborg; Schmidt-Chanasit, Jonas; Panning, Marcus

    2016-04-21

    The current Zika virus (ZIKV) epidemic in the Americas caused an increase in diagnostic requests in European countries. Here we demonstrate high specificity of the Euroimmun anti-ZIKV IgG and IgM ELISA tests using putative cross-reacting sera of European patients with antibodies against tick-borne encephalitis virus, dengue virus, yellow fever virus and hepatitis C virus. This test may aid in counselling European travellers returning from regions where ZIKV is endemic. PMID:27126052

  1. Record-high specific conductance and temperature in San Francisco Bay during water year 2014

    USGS Publications Warehouse

    Downing-Kunz, Maureen; Work, Paul; Shellenbarger, Gregory

    2015-01-01

    In water year (WY) 2014 (October 1, 2013, through September 30, 2014), our network measured record-high values of specific conductance and water temperature at several stations during a period of very little freshwater inflow from the Sacramento–San Joaquin Delta and other tributaries because of severe drought conditions in California. This report summarizes our observations for WY2014 and compares them to previous years that had different levels of freshwater inflow.

  2. An Epigenetic Mechanism of High Gdnf Transcription in Glioma Cells Revealed by Specific Sequence Methylation.

    PubMed

    Zhang, Bao-Le; Liu, Jie; Lei, Yu; Xiong, Ye; Li, Heng; Lin, Xiaoqian; Yao, Rui-Qin; Gao, Dian-Shuai

    2016-09-01

    Glioma cells express high levels of GDNF. When investigating its transcriptional regulation mechanism, we observed increased or decreased methylation of different cis-acting elements in the gdnf promoter II. However, it is difficult to determine the contributions of methylation changes of each cis-acting element to the abnormally high transcription of gdnf gene. To elucidate the contributions of methylation changes of specific cis-acting elements to the regulation of gdnf transcription, we combined gene site-directed mutation, molecular cloning, and dual luciferase assay to develop the "specific sequence methylation followed by plasmid recircularization" method to alter methylation levels of specific cis-acting elements in the gdnf promoter in living cells and assess gene transcriptional activity. This method successfully introduced artificial changes in the methylation of different cis-acting elements in the gdnf promoter II. Moreover, compared with unmethylated gdnf promoter II, both silencer II hypermethylation plus enhancer II unmethylation and hypermethylation of the entire promoter II (containing enhancer II and silencer II) significantly enhanced gdnf transcriptional activity (P < 0.05), and no significant difference was noted between these two hypermethylation patterns (P > 0.05). Enhancer II hypermethylation plus silencer II unmethylation did not significantly affect gene transcription (P > 0.05). Furthermore, we found significantly increased DNA methylation in the silencer II of the gdnf gene in high-grade astroglioma cells with abnormally high gdnf gene expression (P < 0.01). The absence of silencer II significantly increased gdnf promoter II activity in U251 cells (P < 0.01). In conclusion, our specific sequence methylation followed by plasmid recircularization method successfully altered the methylation levels of a specific cis-acting element in a gene promoter in living cells. This method allows in-depth investigation of the impact

  3. Specification of optical components for a high average-power laser environment

    SciTech Connect

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  4. Production of 191Pt radiotracer with high specific activity for the development of preconcentration procedures

    NASA Astrophysics Data System (ADS)

    Parent, M.; Strijckmans, K.; Cornelis, R.; Dewaele, J.; Dams, R.

    1994-04-01

    A radiotracer of Pt with suitable nuclear characteristics and high specific activity (i.e. activity to mass ratio) is a powerful tool when developing preconcentration methods for the determination of base-line levels of Pt in e.g. environmental and biological samples. Two methods were developed for the production of 191Pt with high specific activity and radionuclidic purity: (1) via the 190Pt(n, γ) 191Pt reaction by neutron irradiation of enriched Pt in a nuclear reactor at high neutron fluence rate and (2) via the 191Ir(p, n) 191Pt reaction by proton irradiation of natural Ir with a cyclotron, at an experimentally optimized proton energy. For the latter method it was necessary to separate Pt from the Ir matrix. For that reason, either liquid-liquid extraction with dithizone or adsorption chromatography was used. The yields, the specific activities and the radionuclidic purities were experimentally determined as a function of the proton energy and compared to the former method. The half-life of 191Pt was accurately determined to be 2.802 ± 0.025 d.
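
    For context on "high specific activity," the theoretical carrier-free specific activity of 191Pt follows directly from the reported half-life and the molar mass; a real preparation is lower by the measured isotopic dilution. A minimal calculation:

```python
import math

N_A = 6.02214076e23          # Avogadro's number, atoms/mol
T_HALF_D = 2.802             # 191Pt half-life in days (from the abstract)
M = 191.0                    # approximate atomic mass of 191Pt, g/mol

lam = math.log(2) / (T_HALF_D * 86400.0)       # decay constant, 1/s
sa_bq_per_g = lam * N_A / M                    # carrier-free specific activity, Bq/g
sa_ci_per_g = sa_bq_per_g / 3.7e10             # convert to Ci/g

print(f"carrier-free specific activity of 191Pt: {sa_bq_per_g:.2e} Bq/g "
      f"({sa_ci_per_g:.0f} Ci/g)")

# Decay correction between end of bombardment and a later measurement:
t_days = 5.0
print("fraction remaining after", t_days, "d:", math.exp(-lam * t_days * 86400.0))
```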

  5. High-Resolution CRISPR Screens Reveal Fitness Genes and Genotype-Specific Cancer Liabilities.

    PubMed

    Hart, Traver; Chandrashekhar, Megha; Aregger, Michael; Steinhart, Zachary; Brown, Kevin R; MacLeod, Graham; Mis, Monika; Zimmermann, Michal; Fradet-Turcotte, Amelie; Sun, Song; Mero, Patricia; Dirks, Peter; Sidhu, Sachdev; Roth, Frederick P; Rissland, Olivia S; Durocher, Daniel; Angers, Stephane; Moffat, Jason

    2015-12-01

    The ability to perturb genes in human cells is crucial for elucidating gene function and holds great potential for finding therapeutic targets for diseases such as cancer. To extend the catalog of human core and context-dependent fitness genes, we have developed a high-complexity second-generation genome-scale CRISPR-Cas9 gRNA library and applied it to fitness screens in five human cell lines. Using an improved Bayesian analytical approach, we consistently discover 5-fold more fitness genes than were previously observed. We present a list of 1,580 human core fitness genes and describe their general properties. Moreover, we demonstrate that context-dependent fitness genes accurately recapitulate pathway-specific genetic vulnerabilities induced by known oncogenes and reveal cell-type-specific dependencies for specific receptor tyrosine kinases, even in oncogenic KRAS backgrounds. Thus, rigorous identification of human cell line fitness genes using a high-complexity CRISPR-Cas9 library affords a high-resolution view of the genetic vulnerabilities of a cell. PMID:26627737

  6. Highly specific SNP detection using 2D graphene electronics and DNA strand displacement

    PubMed Central

    Hwang, Michael T.; Landon, Preston B.; Lee, Joon; Choi, Duyoung; Mo, Alexander H.; Glinsky, Gennadi; Lal, Ratnesh

    2016-01-01

    Single-nucleotide polymorphisms (SNPs) in a gene sequence are markers for a variety of human diseases. Detection of SNPs with high specificity and sensitivity is essential for effective practical implementation of personalized medicine. Current DNA sequencing, including SNP detection, primarily uses enzyme-based methods or fluorophore-labeled assays that are time-consuming, need laboratory-scale settings, and are expensive. Previously reported electrical charge-based SNP detectors have insufficient specificity and accuracy, limiting their effectiveness. Here, we demonstrate the use of a DNA strand displacement-based probe on a graphene field effect transistor (FET) for high-specificity, single-nucleotide mismatch detection. The single mismatch was detected by measuring strand displacement-induced resistance (and hence current) change and Dirac point shift in a graphene FET. SNP detection in large double-helix DNA strands (e.g., 47 nt) minimizes false-positive results. Our electrical sensor-based SNP detection technology, without labeling and without apparent cross-hybridization artifacts, would allow fast, sensitive, and portable SNP detection with single-nucleotide resolution. The technology will have a wide range of applications in digital and implantable biosensors and high-throughput DNA genotyping, with transformative implications for personalized medicine. PMID:27298347

  7. Highly specific SNP detection using 2D graphene electronics and DNA strand displacement.

    PubMed

    Hwang, Michael T; Landon, Preston B; Lee, Joon; Choi, Duyoung; Mo, Alexander H; Glinsky, Gennadi; Lal, Ratnesh

    2016-06-28

    Single-nucleotide polymorphisms (SNPs) in a gene sequence are markers for a variety of human diseases. Detection of SNPs with high specificity and sensitivity is essential for effective practical implementation of personalized medicine. Current DNA sequencing, including SNP detection, primarily uses enzyme-based methods or fluorophore-labeled assays that are time-consuming, need laboratory-scale settings, and are expensive. Previously reported electrical charge-based SNP detectors have insufficient specificity and accuracy, limiting their effectiveness. Here, we demonstrate the use of a DNA strand displacement-based probe on a graphene field effect transistor (FET) for high-specificity, single-nucleotide mismatch detection. The single mismatch was detected by measuring strand displacement-induced resistance (and hence current) change and Dirac point shift in a graphene FET. SNP detection in large double-helix DNA strands (e.g., 47 nt) minimizes false-positive results. Our electrical sensor-based SNP detection technology, without labeling and without apparent cross-hybridization artifacts, would allow fast, sensitive, and portable SNP detection with single-nucleotide resolution. The technology will have a wide range of applications in digital and implantable biosensors and high-throughput DNA genotyping, with transformative implications for personalized medicine. PMID:27298347

  8. Fabrication of high specificity hollow mesoporous silica nanoparticles assisted by Eudragit for targeted drug delivery.

    PubMed

    She, Xiaodong; Chen, Lijue; Velleman, Leonora; Li, Chengpeng; Zhu, Haijin; He, Canzhong; Wang, Tao; Shigdar, Sarah; Duan, Wei; Kong, Lingxue

    2015-05-01

    Hollow mesoporous silica nanoparticles (HMSNs) are one of the most promising carriers for effective drug delivery due to their large surface area, high volume for drug loading and excellent biocompatibility. However, the non-ionic surfactant templated HMSNs often have a broad size distribution and a defective mesoporous structure because of the difficulties involved in controlling the formation and organization of micelles for the growth of silica framework. In this paper, a novel "Eudragit assisted" strategy has been developed to fabricate HMSNs by utilising the Eudragit nanoparticles as cores and to assist in the self-assembly of micelle organisation. Highly dispersed mesoporous silica spheres with intact hollow interiors and through pores on the shell were fabricated. The HMSNs have a high surface area (670 m(2)/g), small diameter (120 nm) and uniform pore size (2.5 nm) that facilitated the effective encapsulation of 5-fluorouracil within HMSNs, achieving a high loading capacity of 194.5 mg(5-FU)/g(HMSNs). The HMSNs were non-cytotoxic to colorectal cancer cells SW480 and can be bioconjugated with Epidermal Growth Factor (EGF) for efficient and specific cell internalization. The high specificity and excellent targeting performance of EGF grafted HMSNs have demonstrated that they can become potential intracellular drug delivery vehicles for colorectal cancers via EGF-EGFR interaction. PMID:25617610

  9. Estimation of sediment transport with an in-situ acoustic retrieval algorithm in the high-turbidity Changjiang Estuary, China

    NASA Astrophysics Data System (ADS)

    Ge, Jian-zhong; Ding, Ping-xing; Li, Cheng; Fan, Zhong-ya; Shen, Fang; Kong, Ya-zhen

    2015-12-01

    A comprehensive acoustic retrieval algorithm to investigate suspended sediment is presented with the combined validations of Acoustic Doppler Current Profiler (ADCP) and Optical Backscattering Sensor (OBS) monitoring along seven cross-channel sections in the high-turbidity North Passage of the Changjiang Estuary, China. The realistic water conditions, horizontal and vertical salinities, and grain size of the suspended sediment are considered in the retrieval algorithm. Relations between the net volume scattering of sound attenuation (Sv) due to sediments and the ADCP echo intensity (E) were obtained with reasonable accuracy after applying the linear regression method. In the river mouth, intense vertical stratification and horizontal inhomogeneity were found, with a higher concentration of sediment in the North Passage and a lower concentration in the North Channel and South Passage. Additionally, the North Passage is characterized by higher sediment concentration in the middle region and lower concentration in the entrance and outlet areas. The maximum sediment flux rate, which occurred in the middle region, could reach 6.3×10(5) and 1.5×10(5) t/h during the spring and neap tide, respectively. Retrieved sediment fluxes in the middle region are significantly larger than those in the upstream and downstream regions. This strong sediment imbalance along the main channel indicates a potential secondary sediment supply from the southern Jiuduansha Shoals.
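
    A minimal sketch of the calibration step described, i.e., a linear regression between ADCP echo intensity (E) and the sediment-induced volume scattering (Sv) derived from OBS-calibrated samples. The numbers, the log-linear back-conversion to concentration, and its coefficients are illustrative assumptions, and the full algorithm's corrections for beam spreading, absorption, and salinity are omitted.

```python
import numpy as np

# Synthetic paired observations: ADCP echo intensity (counts) and volume
# scattering strength Sv (dB) derived from OBS-calibrated sediment samples.
rng = np.random.default_rng(42)
E = np.linspace(60, 180, 40)                      # echo intensity, counts
Sv_true = 0.25 * E - 105.0                        # assumed linear relation
Sv_obs = Sv_true + rng.normal(0.0, 1.5, E.size)   # measurement scatter

# Linear regression Sv = a*E + b, as in the calibration step of the retrieval.
a, b = np.polyfit(E, Sv_obs, 1)
print(f"Sv ~ {a:.3f}*E {b:+.1f}  (dB)")

# Convert Sv back to suspended sediment concentration with an assumed
# log-linear relation log10(SSC) = (Sv - K2)/K1; K1, K2 are site-calibrated.
K1, K2 = 10.0, -95.0
SSC = 10.0 ** ((a * E + b - K2) / K1)             # illustrative units, mg/L
print(f"example SSC range: {SSC.min():.1f} to {SSC.max():.1f} mg/L")
```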

  10. Brachytherapy boost and cancer-specific mortality in favorable high-risk versus other high-risk prostate cancer

    PubMed Central

    Muralidhar, Vinayak; Xiang, Michael; Orio, Peter F.; Martin, Neil E.; Beard, Clair J.; Feng, Felix Y.; Hoffman, Karen E.

    2016-01-01

    Purpose Recent retrospective data suggest that brachytherapy (BT) boost may confer a cancer-specific survival benefit in radiation-managed high-risk prostate cancer. We sought to determine whether this survival benefit would extend to the recently defined favorable high-risk subgroup of prostate cancer patients (T1c, Gleason 4 + 4 = 8, PSA < 10 ng/ml or T1c, Gleason 6, PSA > 20 ng/ml). Material and methods We identified 45,078 patients in the Surveillance, Epidemiology, and End Results database with cT1c-T3aN0M0 intermediate- to high-risk prostate cancer diagnosed 2004-2011 treated with external beam radiation therapy (EBRT) only or EBRT plus BT. We used multivariable competing risks regression to determine differences in the rate of prostate cancer-specific mortality (PCSM) after EBRT + BT or EBRT alone in patients with intermediate-risk, favorable high-risk, or other high-risk disease after adjusting for demographic and clinical factors. Results EBRT + BT was not associated with an improvement in 5-year PCSM compared to EBRT alone among patients with favorable high-risk disease (1.6% vs. 1.8%; adjusted hazard ratio [AHR]: 0.56; 95% confidence interval [CI]: 0.21-1.52, p = 0.258), and intermediate-risk disease (0.8% vs. 1.0%, AHR: 0.83, 95% CI: 0.59-1.16, p = 0.270). Others with high-risk disease had significantly lower 5-year PCSM when treated with EBRT + BT compared with EBRT alone (3.9% vs. 5.3%; AHR: 0.73; 95% CI: 0.55-0.95; p = 0.022). Conclusions Brachytherapy boost is associated with a decreased rate of PCSM in some men with high-risk prostate cancer but not among patients with favorable high-risk disease. Our results suggest that the recently-defined “favorable high-risk” category may be used to personalize therapy for men with high-risk disease. PMID:26985191

  11. General Anthropometric and Specific Physical Fitness Profile of High-Level Junior Water Polo Players

    PubMed Central

    Kondrič, Miran; Uljević, Ognjen; Gabrilo, Goran; Kontić, Dean; Sekulić, Damir

    2012-01-01

    The aim of this study was to investigate the status and playing position differences in anthropometric measures and specific physical fitness in high-level junior water polo players. The sample of subjects comprised 110 water polo players (17 to 18 years of age), including one of the world’s best national junior teams for 2010. The subjects were divided according to their playing positions into: Centers (N = 16), Wings (N = 28), perimeter players (Drivers; N = 25), Points (N = 19), and Goalkeepers (N = 18). The variables included body height, body weight, body mass index, arm span, triceps- and subscapular-skinfold. Specific physical fitness tests comprised: four swimming tests, namely: 25m, 100m, 400m and a specific anaerobic 4x50m test (average result achieved in four 50m sprints with a 30 sec pause), vertical body jump (JUMP; maximal vertical jump from the water starting from a water polo defensive position) and a dynamometric power achieved in front crawl swimming (DYN). ANOVA with post-hoc comparison revealed significant differences between positions for most of the anthropometrics, noting that the Centers were the heaviest and had the highest BMI and subscapular skinfold. The Points achieved the best results in most of the swimming capacities and JUMP test. No significant group differences were found for the 100m and 4x50m tests. The Goalkeepers achieved the lowest results for DYN. Given the representativeness of the sample of subjects, the results of this study allow specific insights into the physical fitness and anthropometric features of high-level junior water polo players and allow coaches to design a specific training program aimed at achieving the physical fitness results presented for each playing position. PMID:23487152

  12. Identification of Fluorescent Compounds with Non-Specific Binding Property via High Throughput Live Cell Microscopy

    PubMed Central

    Nath, Sangeeta; Spencer, Virginia A.; Han, Ju; Chang, Hang; Zhang, Kai; Fontenay, Gerald V.; Anderson, Charles; Hyman, Joel M.; Nilsen-Hamilton, Marit; Chang, Young-Tae; Parvin, Bahram

    2012-01-01

    Introduction Compounds exhibiting low non-specific intracellular binding or non-stickiness are concomitant with rapid clearing and in high demand for live-cell imaging assays because they allow for intracellular receptor localization with a high signal/noise ratio. The non-stickiness property is particularly important for imaging intracellular receptors due to the equilibria involved. Method Three mammalian cell lines with diverse genetic backgrounds were used to screen a combinatorial fluorescence library via high throughput live cell microscopy for potential ligands with high in- and out-flux properties. The binding properties of ligands identified from the first screen were subsequently validated on plant root hair. A correlative analysis was then performed between each ligand and its corresponding physiochemical and structural properties. Results The non-stickiness property of each ligand was quantified as a function of the temporal uptake and retention on a cell-by-cell basis. Our data shows that (i) mammalian systems can serve as a pre-screening tool for complex plant species that are not amenable to high-throughput imaging; (ii) retention and spatial localization of chemical compounds vary within and between each cell line; and (iii) the structural similarities of compounds can infer their non-specific binding properties. Conclusion We have validated a protocol for identifying chemical compounds with non-specific binding properties that is testable across diverse species. Further analysis reveals an overlap between the non-stickiness property and the structural similarity of compounds. The net result is a more robust screening assay for identifying desirable ligands that can be used to monitor intracellular localization. Several new applications of the screening protocol and results are also presented. PMID:22242152

  13. Preliminary results from an airdata enhancement algorithm with application to high-angle-of-attack flight

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.; Whitmore, Stephen A.

    1991-01-01

    A technique was developed to improve the fidelity of airdata measurements during dynamic maneuvering. This technique is particularly useful for airdata measured during flight at high angular rates and high angles of attack. To support this research, flight tests using the F-18 high alpha research vehicle (HARV) were conducted at NASA Ames Research Center, Dryden Flight Research Facility. A Kalman filter was used to combine information from research airdata, linear accelerometers, angular rate gyros, and attitude gyros to determine better estimates of airdata quantities such as angle of attack, angle of sideslip, airspeed, and altitude. The state and observation equations used by the Kalman filter are briefly developed and it is shown how the state and measurement covariance matrices were determined from flight data. Flight data are used to show the results of the technique and these results are compared to an independent measurement source. This technique is applicable to both postflight and real-time processing of data.
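
    A minimal one-dimensional sketch of the sensor-blending idea (not the HARV implementation): a two-state Kalman filter whose process model propagates angle of attack and its rate, updated with a noisy vane-like measurement. The noise covariances, time step, and simulated signal are assumptions.

```python
import numpy as np

# State: [alpha, alpha_dot]; measurement: noisy alpha (vane-like sensor).
dt = 0.02
F = np.array([[1.0, dt], [0.0, 1.0]])     # process model
H = np.array([[1.0, 0.0]])                # we observe alpha only
Q = np.diag([1e-6, 1e-4])                 # process noise covariance (assumed)
R = np.array([[0.05 ** 2]])               # measurement noise covariance (assumed)

x = np.zeros(2)
P = np.eye(2)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated noisy vane readings around a slow alpha ramp.
rng = np.random.default_rng(0)
for k in range(500):
    true_alpha = 0.1 + 0.02 * k * dt
    z = np.array([true_alpha + rng.normal(0.0, 0.05)])
    x, P = kalman_step(x, P, z)
print("filtered alpha estimate:", x[0])
```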

  14. Understanding Strongly Correlated Materials thru Theory Algorithms and High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kotliar, Gabriel

    A long-standing challenge in condensed matter physics is the prediction of the physical properties of materials starting from first principles. In the past two decades, substantial advances have taken place in this area. The combination of modern implementations of electronic structure methods with Dynamical Mean Field Theory (DMFT), advanced impurity solvers, modern computer codes and massively parallel computers is giving new system-specific insights into the properties of strongly correlated electron systems and enables the calculation of experimentally measurable correlation functions. The predictions of this "theoretical spectroscopy" can be directly compared with experimental results. In this talk I will briefly outline the state of the art of the methodology and illustrate it with an example: the origin of the solid-state anomalies of elemental plutonium.

  15. Selective paired ion contrast analysis: a novel algorithm for analyzing postprocessed LC-MS metabolomics data possessing high experimental noise.

    PubMed

    Mak, Tytus D; Laiakis, Evagelia C; Goudarzi, Maryam; Fornace, Albert J

    2015-03-17

    One of the consequences in analyzing biological data from noisy sources, such as human subjects, is the sheer variability of experimentally irrelevant factors that cannot be controlled for. This holds true especially in metabolomics, the global study of small molecules in a particular system. While metabolomics can offer deep quantitative insight into the metabolome via easy-to-acquire biofluid samples such as urine and blood, the aforementioned confounding factors can easily overwhelm attempts to extract relevant information. This can mar potentially crucial applications such as biomarker discovery. As such, a new algorithm, called Selective Paired Ion Contrast (SPICA), has been developed with the intent of extracting potentially biologically relevant information from the noisiest of metabolomic data sets. The basic idea of SPICA is built upon redefining the fundamental unit of statistical analysis. Whereas the vast majority of algorithms analyze metabolomics data on a single-ion basis, SPICA relies on analyzing ion-pairs. A standard metabolomic data set is reinterpreted by exhaustively considering all possible ion-pair combinations. Statistical comparisons between sample groups are made only by analyzing the differences in these pairs, which may be crucial in situations where no single metabolite can be used for normalization. With SPICA, human urine data sets from patients undergoing total body irradiation (TBI) and from a colorectal cancer (CRC) relapse study were analyzed in a statistically rigorous manner not possible with conventional methods. In the TBI study, 3530 statistically significant ion-pairs were identified, from which numerous putative radiation specific metabolite-pair biomarkers that mapped to potentially perturbed metabolic pathways were elucidated. In the CRC study, SPICA identified 6461 statistically significant ion-pairs, several of which putatively mapped to folic acid biosynthesis, a key pathway in colorectal cancer. Utilizing support
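
    A minimal sketch of the ion-pair idea on synthetic data: every pair of ions, rather than a single ion, becomes the unit of analysis, here via per-sample log-ratios compared between groups with Welch's t-test. SPICA's actual pair statistic and multiple-testing handling may differ; the data below are placeholders.

```python
import itertools
import numpy as np
from scipy import stats

# Synthetic intensity matrices: rows = samples, columns = ions (illustrative only).
rng = np.random.default_rng(7)
n_ions = 6
control = rng.lognormal(mean=2.0, sigma=0.3, size=(20, n_ions))
treated = rng.lognormal(mean=2.0, sigma=0.3, size=(20, n_ions))
treated[:, 2] *= 1.8            # perturb one ion so pairs involving it differ

significant_pairs = []
for i, j in itertools.combinations(range(n_ions), 2):
    # The analysis unit is the pair: per-sample log-ratio of ion i to ion j,
    # which removes the need for a single normalizing metabolite.
    r_control = np.log(control[:, i] / control[:, j])
    r_treated = np.log(treated[:, i] / treated[:, j])
    t, p = stats.ttest_ind(r_control, r_treated, equal_var=False)
    if p < 0.05:
        significant_pairs.append((i, j, p))

print(f"{len(significant_pairs)} significant ion-pairs out of {n_ions * (n_ions - 1) // 2}")
```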

  16. A depth video processing algorithm for high encoding and rendering performance

    NASA Astrophysics Data System (ADS)

    Guo, Mingsong; Chen, Fen; Sheng, Chengkai; Peng, Zongju; Jiang, Gangyi

    2014-11-01

    In a free viewpoint video system, the color and the corresponding depth videos are utilized to synthesize virtual views by the depth image based rendering (DIBR) technique. Hence, high-quality depth videos are a prerequisite for high-quality virtual views. However, depth variation, caused by scene variance and limited depth capturing technologies, may increase the encoding bitrate of depth videos and decrease the quality of virtual views. To tackle these problems, a depth preprocessing method based on smoothing the texture and abrupt changes of depth videos is proposed in this paper to increase the accuracy of depth videos. Firstly, a bilateral filter is adopted to smooth the whole depth video while protecting its edges. Secondly, abrupt variation is detected by a threshold calculated from the camera parameters of each video sequence. Holes in virtual views occur when the depth values of the left view change abruptly from low to high in the horizontal direction or the depth values of the right view change abruptly from high to low. So for the left view, the depth value difference on the left side is gradually reduced where it is greater than the threshold, and the right side of the right view is then processed likewise. Experimental results show that the proposed method reduces the encoding bitrate by 25% on average while the quality of the synthesized virtual views is improved by 0.39 dB on average compared with using the original depth videos. A subjective quality improvement is also achieved.
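
    A minimal sketch of the two preprocessing steps described, i.e., bilateral smoothing of the depth map followed by tapering abrupt left-to-right depth rises that exceed a threshold. Here OpenCV is used, the threshold is fixed rather than derived from camera parameters, and the ramp width is an assumption.

```python
import cv2
import numpy as np

def preprocess_depth(depth, threshold=30, ramp=8, d=9, sigma_color=25, sigma_space=9):
    """Bilateral-smooth a (left-view) depth map, then replace sharp low-to-high
    horizontal jumps with a gradual ramp on the background side."""
    smoothed = cv2.bilateralFilter(depth.astype(np.uint8), d, sigma_color, sigma_space)
    out = smoothed.astype(np.float32)
    rows, cols = np.where(np.diff(out, axis=1) > threshold)   # abrupt rises
    for r, c in zip(rows, cols):
        start = max(0, c - ramp + 1)
        lo, hi = out[r, start], out[r, c + 1]
        out[r, start:c + 2] = np.linspace(lo, hi, c + 2 - start)  # taper the jump
    return out.clip(0, 255).astype(np.uint8)

# Toy left-view depth map: background (40) with a foreground object (180) on the right.
depth = np.full((120, 160), 40, np.uint8)
depth[:, 100:] = 180
processed = preprocess_depth(depth)
print("max horizontal jump before/after:",
      int(np.diff(depth.astype(int), axis=1).max()),
      int(np.diff(processed.astype(int), axis=1).max()))
```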

  17. ET mapping with METRIC algorithm using airborne high resolution multispectral remote sensing imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Routine and accurate estimates of spatially distributed evapotranspiration (ET) are essential for managing water resources particularly in irrigated regions such as the U.S. Southern High Plains. For instance, ET maps would assist in the improvement of the Ogallala Aquifer ground water management. M...

  18. Development of high-specificity antibodies against renal urate transporters using genetic immunization.

    PubMed

    Xu, Guoshuang; Chen, Xiangmei; Wu, Di; Shi, Suozhu; Wang, Jianzhong; Ding, Rui; Hong, Quan; Feng, Zhe; Lin, Shupeng; Lu, Yang

    2006-11-30

    Recently, three proteins playing central roles in the bidirectional transport of urate in renal proximal tubules were identified: two members of the organic anion transporter (OAT) family, OAT1 and OAT3, and a protein designated renal urate-anion exchanger (URAT1). Antibodies against these transporters are very important for investigating their expression and function. With a cytokine gene as a molecular adjuvant, genetic immunization-based antibody production offers several advantages compared with current methods, including high specificity and high recognition of the native protein. We fused high-antigenicity fragments of the three transporters to the plasmid pBQAP-TT, which contains T-cell epitopes and flanking regions from tetanus toxin. Gene gun immunization with these recombinant plasmids and two other adjuvant plasmids, which express granulocyte/macrophage colony-stimulating factor and FMS-like tyrosine kinase 3 ligand, induced high levels of immunoglobulin G antibodies. The corresponding native URAT1, OAT1 and OAT3 proteins in human kidney were recognized by their specific antibodies in Western blot analysis and immunohistochemistry. In addition, URAT1 expressed in Xenopus oocytes was also recognized by its corresponding antibody by immunofluorescence. The successful production of these antibodies provides an important tool for the study of urate transporters. PMID:17129404

  19. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as recurrence equations and solving them. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
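
    As a small illustration of the recurrence-equation view that such verification works over, the convolution computed by a systolic array can be written as the recurrence y_i^(k) = y_i^(k-1) + w_k * x_(i-k) and simulated one wavefront (one processing element's step) at a time. This is a plain sequential sketch, not a VLSI description or the dissertation's formalism.

```python
def systolic_convolution(x, w):
    """Compute the full convolution y[i] = sum_k w[k]*x[i-k] via the
    recurrence y_i^(k) = y_i^(k-1) + w[k]*x[i-k], one wavefront k at a time."""
    n, m = len(x), len(w)
    y = [0.0] * (n + m - 1)
    for k in range(m):                    # k plays the role of a processing element
        for i in range(n + m - 1):
            if 0 <= i - k < n:
                y[i] += w[k] * x[i - k]
    return y

# Usage: matches the direct definition of convolution.
print(systolic_convolution([1, 2, 3, 4], [1, 0, -1]))   # [1, 2, 2, 2, -3, -4]
```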

  20. Maltodextrin-based imaging probes detect bacteria in vivo with high sensitivity and specificity

    NASA Astrophysics Data System (ADS)

    Ning, Xinghai; Lee, Seungjun; Wang, Zhirui; Kim, Dongin; Stubblefield, Bryan; Gilbert, Eric; Murthy, Niren

    2011-08-01

    The diagnosis of bacterial infections remains a major challenge in medicine. Although numerous contrast agents have been developed to image bacteria, their clinical impact has been minimal because they are unable to detect small numbers of bacteria in vivo, and cannot distinguish infections from other pathologies such as cancer and inflammation. Here, we present a family of contrast agents, termed maltodextrin-based imaging probes (MDPs), which can detect bacteria in vivo with a sensitivity two orders of magnitude higher than previously reported, and can detect bacteria using a bacteria-specific mechanism that is independent of host response and secondary pathologies. MDPs are composed of a fluorescent dye conjugated to maltohexaose, and are rapidly internalized through the bacteria-specific maltodextrin transport pathway, endowing the MDPs with a unique combination of high sensitivity and specificity for bacteria. Here, we show that MDPs selectively accumulate within bacteria at millimolar concentrations, and are a thousand-fold more specific for bacteria than mammalian cells. Furthermore, we demonstrate that MDPs can image as few as 10(5) colony-forming units in vivo and can discriminate between active bacteria and inflammation induced by either lipopolysaccharides or metabolically inactive bacteria.

  1. Quantifying domain-ligand affinities and specificities by high-throughput holdup assay

    PubMed Central

    Vincentelli, Renaud; Luck, Katja; Poirson, Juline; Polanowska, Jolanta; Abdat, Julie; Blémont, Marilyne; Turchetto, Jeremy; Iv, François; Ricquier, Kevin; Straub, Marie-Laure; Forster, Anne; Cassonnet, Patricia; Borg, Jean-Paul; Jacob, Yves; Masson, Murielle; Nominé, Yves; Reboul, Jérôme; Wolff, Nicolas; Charbonnier, Sebastian; Travé, Gilles

    2015-01-01

    Many protein interactions are mediated by small linear motifs interacting specifically with defined families of globular domains. Quantifying the specificity of a motif requires measuring and comparing its binding affinities to all its putative target domains. To this aim, we developed the high-throughput holdup assay, a chromatographic approach that can measure up to a thousand domain-motif equilibrium binding affinities per day. Extracts of overexpressed domains are incubated with peptide-coated resins and subjected to filtration. Binding affinities are deduced from microfluidic capillary electrophoresis of flow-throughs. After benchmarking the approach on 210 PDZ-peptide pairs with known affinities, we determined the affinities of two viral PDZ-binding motifs derived from Human Papillomavirus E6 oncoproteins for 209 PDZ domains covering 79% of the human PDZome. We obtained exquisite sequence-dependent binding profiles, describing quantitatively the PDZome recognition specificity of each motif. This approach, applicable to many categories of domain-ligand interactions, has a wide potential for quantifying the specificities of interactomes. PMID:26053890
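
    As an illustration of how an equilibrium affinity can be read off such a depletion measurement (a generic 1:1 binding model with hypothetical numbers, not the authors' exact statistics), the binding intensity BI = 1 - [flow-through]/[total] relates to the dissociation constant through BI = L/(Kd + L) when the peptide concentration L is in large excess over the domain:

      # Sketch, not the authors' pipeline: estimate a dissociation constant from a
      # holdup-style depletion measurement, assuming a 1:1 equilibrium and a peptide
      # concentration in large excess over the domain.

      def binding_intensity(i_total, i_flowthrough):
          """Fraction of the domain retained on the peptide resin."""
          return 1.0 - i_flowthrough / i_total

      def kd_from_bi(bi, peptide_conc_uM):
          """Invert BI = L / (Kd + L) for Kd (same units as peptide_conc_uM)."""
          if not 0.0 < bi < 1.0:
              raise ValueError("BI must lie strictly between 0 and 1")
          return peptide_conc_uM * (1.0 - bi) / bi

      # Hypothetical numbers: 60% of the domain is depleted at 10 uM peptide.
      bi = binding_intensity(i_total=1.0, i_flowthrough=0.4)
      print(f"BI = {bi:.2f}, estimated Kd = {kd_from_bi(bi, 10.0):.1f} uM")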

  2. Global Analysis of Human Nonreceptor Tyrosine Kinase Specificity Using High-Density Peptide Microarrays

    PubMed Central

    2015-01-01

    Protein kinases phosphorylate substrates in the context of specific phosphorylation site sequence motifs. The knowledge of the specific sequences that are recognized by kinases is useful for mapping sites of phosphorylation in protein substrates and facilitates the generation of model substrates to monitor kinase activity. Here, we have adapted a positional scanning peptide library method to a microarray format that is suitable for the rapid determination of phosphorylation site motifs for tyrosine kinases. Peptide mixtures were immobilized on glass slides through a layer of a tyrosine-free Y33F mutant avidin to facilitate the analysis of phosphorylation by radiolabel assay. A microarray analysis provided qualitatively similar results in comparison with the solution phase peptide library “macroarray” method. However, much smaller quantities of kinases were required to phosphorylate peptides on the microarrays, which thus enabled a proteome scale analysis of kinase specificity. We illustrated this capability by microarray profiling more than 80% of the human nonreceptor tyrosine kinases (NRTKs). Microarray results were used to generate a universal NRTK substrate set of 11 consensus peptides for in vitro kinase assays. Several substrates were highly specific for their cognate kinases, which should facilitate their incorporation into kinase-selective biosensors. PMID:25164267

  3. Analytical evaluation of the impact of broad specification fuels on high bypass turbofan engine combustors

    NASA Technical Reports Server (NTRS)

    Taylor, J. R.

    1979-01-01

    Six conceptual combustor designs for the CF6-50 high bypass turbofan engine and six conceptual combustor designs for the NASA/GE E3 high bypass turbofan engine were analyzed to provide an assessment of the major problems anticipated in using broad-specification fuels in these aircraft engine combustion systems. Each of the conceptual combustor designs, which are representative of both state-of-the-art and advanced state-of-the-art combustion systems, was analyzed to estimate combustor performance, durability, and pollutant emissions when using commercial Jet A aviation fuel and when using experimental referee broad-specification fuel. Results indicate that lean-burning, low-emissions double annular combustor concepts can accommodate a wide range of fuel properties without a serious deterioration of performance or durability. However, rich-burning, single annular concepts would be less tolerant to a relaxation of fuel properties. As the fuel specifications are relaxed, the autoignition delay time becomes much smaller, which presents a serious design and development problem for premixing-prevaporizing combustion system concepts.

  4. High spectral specificity of local chemical components characterization with multichannel shift-excitation Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Wu, Tao; Wei, Haoyun; Wu, Xuejian; Li, Yan

    2015-09-01

    Raman spectroscopy has emerged as a promising tool for its noninvasive and nondestructive characterization of local chemical structures. However, spectrally overlapping components prevent the specific identification of hyperfine molecular information of different substances, because of limitations in the spectral resolving power. The challenge is to find a way of preserving scattered photons and retrieving hidden/buried Raman signatures to take full advantage of its chemical specificity. Here, we demonstrate a multichannel acquisition framework based on shift-excitation and slit-modulation, followed by mathematical post-processing, which enables a significant improvement in the spectral specificity of Raman characterization. The present technique, termed shift-excitation blind super-resolution Raman spectroscopy (SEBSR), uses multiple degraded spectra to beat the dispersion-loss trade-off and facilitate high-resolution applications. It overcomes a fundamental problem that has previously plagued high-resolution Raman spectroscopy: fine spectral resolution requires large dispersion, which is accompanied by extreme optical loss. Applicability is demonstrated by the perfect recovery of the fine structure of the C-Cl bending mode as well as the clear discrimination of different polymorphs of mannitol. Due to its enhanced discrimination capability, this method offers a feasible route toward a broader range of applications in analytical chemistry, materials and biomedicine.

  5. High-Resolution Specificity from DNA Sequencing Highlights Alternative Modes of Lac Repressor Binding

    PubMed Central

    Zuo, Zheng; Stormo, Gary D.

    2014-01-01

    Knowing the specificity of transcription factors is critical to understanding regulatory networks in cells. The lac repressor–operator system has been studied for many years, but not with high-throughput methods capable of determining specificity comprehensively. Details of its binding interaction and its selection of an asymmetric binding site have been controversial. We employed a new method to accurately determine relative binding affinities to thousands of sequences simultaneously, requiring only sequencing of bound and unbound fractions. An analysis of 2560 different DNA sequence variants, including both base changes and variations in operator length, provides a detailed view of lac repressor sequence specificity. We find that the protein can bind with nearly equal affinities to operators of three different lengths, but the sequence preference changes depending on the length, demonstrating alternative modes of interaction between the protein and DNA. The wild-type operator has an odd length, causing the two monomers to bind in alternative modes, making the asymmetric operator the preferred binding site. We tested two other members of the LacI/GalR protein family and find that neither can bind with high affinity to sites with alternative lengths or shows evidence of alternative binding modes. A further comparison with known and predicted motifs suggests that the lac repressor may be unique in this ability and that this may contribute to its selection. PMID:25209146
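
    A minimal sketch of the kind of bound/unbound scoring such sequencing data permits (an assumed enrichment-ratio score with made-up sequences and counts, not the authors' exact analysis): each variant's relative affinity is taken as its bound-to-unbound read-frequency ratio, normalized to a reference operator.

      # Assumed scoring with hypothetical sequences and read counts; not the
      # authors' exact statistical treatment.
      from collections import Counter

      bound = Counter({"AATTGTGAGCGG": 900, "AATTGTGAGCGC": 300, "AATTGTGAACGG": 50})
      unbound = Counter({"AATTGTGAGCGG": 100, "AATTGTGAGCGC": 250, "AATTGTGAACGG": 400})

      def relative_affinities(bound, unbound, reference, pseudo=1.0):
          nb, nu = sum(bound.values()), sum(unbound.values())
          def ratio(seq):
              fb = (bound[seq] + pseudo) / nb      # frequency in the bound fraction
              fu = (unbound[seq] + pseudo) / nu    # frequency in the unbound fraction
              return fb / fu
          ref = ratio(reference)
          return {seq: ratio(seq) / ref for seq in set(bound) | set(unbound)}

      affinities = relative_affinities(bound, unbound, reference="AATTGTGAGCGG")
      for seq, aff in sorted(affinities.items(), key=lambda kv: -kv[1]):
          print(seq, round(aff, 3))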

  6. High spectral specificity of local chemical components characterization with multichannel shift-excitation Raman spectroscopy

    PubMed Central

    Chen, Kun; Wu, Tao; Wei, Haoyun; Wu, Xuejian; Li, Yan

    2015-01-01

    Raman spectroscopy has emerged as a promising tool for its noninvasive and nondestructive characterization of local chemical structures. However, spectrally overlapping components prevent the specific identification of hyperfine molecular information of different substances, because of limitations in the spectral resolving power. The challenge is to find a way of preserving scattered photons and retrieving hidden/buried Raman signatures to take full advantage of its chemical specificity. Here, we demonstrate a multichannel acquisition framework based on shift-excitation and slit-modulation, followed by mathematical post-processing, which enables a significant improvement in the spectral specificity of Raman characterization. The present technique, termed shift-excitation blind super-resolution Raman spectroscopy (SEBSR), uses multiple degraded spectra to beat the dispersion-loss trade-off and facilitate high-resolution applications. It overcomes a fundamental problem that has previously plagued high-resolution Raman spectroscopy: fine spectral resolution requires large dispersion, which is accompanied by extreme optical loss. Applicability is demonstrated by the perfect recovery of the fine structure of the C-Cl bending mode as well as the clear discrimination of different polymorphs of mannitol. Due to its enhanced discrimination capability, this method offers a feasible route toward a broader range of applications in analytical chemistry, materials and biomedicine. PMID:26350355

  7. How to produce high specific activity tin-117m using alpha particle beam.

    PubMed

    Duchemin, C; Essayan, M; Guertin, A; Haddad, F; Michel, N; Métivier, V

    2016-09-01

    Tin-117m is an interesting radionuclide for both diagnosis and therapy, thanks to the gamma-ray and electron emissions, respectively, resulting from its decay to tin-117g. A high specific activity of tin-117m is required in many medical applications, and it can be obtained using a high-energy alpha particle beam and a cadmium target. The experiments performed at the ARRONAX cyclotron (Nantes, France) using an alpha particle beam delivered at 67.4 MeV provide a measurement of the excitation function of the Cd-nat(α,x)Sn-117m reaction and of the produced contaminants. The Cd-116(α,3n)Sn-117m production cross section has been deduced from these experimental results obtained with natural cadmium. Both the production yield and the specific activity as a function of the projectile energy have been calculated. This information helps to optimize the irradiation conditions to produce tin-117m with the required specific activity using α particles on a cadmium target. PMID:27344526
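
    The production yield quoted in such work is commonly obtained by integrating the measured cross section over the projectile energy loss in the target, Y per incident alpha = (N_A/A) ∫ σ(E)/S(E) dE, with S(E) the mass stopping power. A generic sketch with purely illustrative stand-ins for σ(E) and S(E), not the paper's measured data:

      # Generic thick-target yield estimate; the cross section and stopping power
      # below are smooth illustrative stand-ins, not the ARRONAX measurements.
      import numpy as np

      N_A = 6.022e23     # atoms/mol
      A_CD = 112.4       # g/mol, natural cadmium

      def thick_target_yield(E_in, E_out, sigma_mb, stopping_MeV_cm2_per_g, n=2000):
          """Atoms produced per incident alpha slowing from E_in down to E_out (MeV)."""
          E = np.linspace(E_out, E_in, n)
          integrand = sigma_mb(E) * 1e-27 / stopping_MeV_cm2_per_g(E)     # mb -> cm^2
          dE = E[1] - E[0]
          integral = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dE  # trapezoid rule
          return (N_A / A_CD) * integral

      sigma = lambda E: 40.0 * np.exp(-((E - 35.0) / 8.0) ** 2)   # mb (illustrative)
      stopping = lambda E: 700.0 / np.sqrt(E)                     # MeV cm^2/g (illustrative)

      print(f"{thick_target_yield(67.4, 20.0, sigma, stopping):.3e} atoms per incident alpha")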

  8. High-resolution specificity from DNA sequencing highlights alternative modes of Lac repressor binding.

    PubMed

    Zuo, Zheng; Stormo, Gary D

    2014-11-01

    Knowing the specificity of transcription factors is critical to understanding regulatory networks in cells. The lac repressor-operator system has been studied for many years, but not with high-throughput methods capable of determining specificity comprehensively. Details of its binding interaction and its selection of an asymmetric binding site have been controversial. We employed a new method to accurately determine relative binding affinities to thousands of sequences simultaneously, requiring only sequencing of bound and unbound fractions. An analysis of 2560 different DNA sequence variants, including both base changes and variations in operator length, provides a detailed view of lac repressor sequence specificity. We find that the protein can bind with nearly equal affinities to operators of three different lengths, but the sequence preference changes depending on the length, demonstrating alternative modes of interaction between the protein and DNA. The wild-type operator has an odd length, causing the two monomers to bind in alternative modes, making the asymmetric operator the preferred binding site. We tested two other members of the LacI/GalR protein family and find that neither can bind with high affinity to sites with alternative lengths or shows evidence of alternative binding modes. A further comparison with known and predicted motifs suggests that the lac repressor may be unique in this ability and that this may contribute to its selection. PMID:25209146

  9. Compartment-Specific Bioluminescence Imaging platform for the high-throughput evaluation of antitumor immune function.

    PubMed

    McMillin, Douglas W; Delmore, Jake; Negri, Joseph M; Vanneman, Matthew; Koyama, Shohei; Schlossman, Robert L; Munshi, Nikhil C; Laubach, Jacob; Richardson, Paul G; Dranoff, Glenn; Anderson, Kenneth C; Mitsiades, Constantine S

    2012-04-12

    Conventional assays evaluating antitumor activity of immune effector cells have limitations that preclude their high-throughput application. We adapted the recently developed Compartment-Specific Bioluminescence Imaging (CS-BLI) technique to perform high-throughput quantification of innate antitumor activity and to show how pharmacologic agents (eg, lenalidomide, pomalidomide, bortezomib, and dexamethasone) and autologous BM stromal cells modulate that activity. CS-BLI-based screening allowed us to identify agents that enhance or inhibit innate antitumor cytotoxicity. Specifically, we identified compounds that stimulate immune effector cells against some tumor targets but suppressed their activity against other tumor cells. CS-BLI offers rapid, simplified, and specific evaluation of multiple conditions, including drug treatments and/or cocultures with stromal cells and highlights that immunomodulatory pharmacologic responses can be heterogeneous across different types of tumor cells. This study provides a framework to identify novel immunomodulatory agents and to prioritize compounds for clinical development on the basis of their effect on antitumor immunity. PMID:22289890

  10. Neurogenic bladder: Highly selective rhizotomy of specific dorsal rootlets may be a better choice.

    PubMed

    Zhu, Genying; Zhou, Mouwang; Wang, Wenting; Zeng, Fanshuo

    2016-02-01

    Spinal cord injury results not only in motor and sensory dysfunction, but also in loss of normal urinary bladder function. A number of clinical studies have focused on strategies for improving bladder function. Complete dorsal root rhizotomy and selective S2-4 dorsal root rhizotomy both suppress autonomic hyperreflexia but share the same defects: they can cause detrusor and sphincter over-relaxation and loss of reflexive erection in males, so a more precise operation needs to be considered. We designed an experimental trial to test this possibility on the basis of previous studies. We found that the different dorsal rootlets that conduct impulses from the detrusor or the sphincter can be distinguished by electro-stimulation in SD rats, and that highly selective rhizotomy of specific dorsal rootlets can change the intravesical pressure and the urethral perfusion pressure, respectively. We hypothesize that, for neurogenic bladder following spinal cord injury, highly selective rhizotomy of specific dorsal rootlets may improve bladder capacity and detrusor-sphincter dyssynergia while maximally retaining the function of the other pelvic organs. PMID:26643667

  11. Tsunami Detection by High-Frequency Radar Beyond the Continental Shelf - I: Algorithms and Validation on Idealized Case Studies

    NASA Astrophysics Data System (ADS)

    Grilli, Stéphan T.; Grosdidier, Samuel; Guérin, Charles-Antoine

    2015-10-01

    Where coastal tsunami hazard is governed by near-field sources, such as submarine mass failures or meteo-tsunamis, tsunami propagation times may be too small for a detection based on deep or shallow water buoys. To offer sufficient warning time, it has been proposed to implement early warning systems relying on high-frequency (HF) radar remote sensing, which can provide dense spatial coverage as far offshore as 200-300 km (e.g., for Diginext Ltd.'s Stradivarius radar). Shore-based HF radars have been used to measure nearshore currents (e.g., the CODAR SeaSonde® system; http://www.codar.com/ ) by inverting the Doppler spectral shifts that these currents cause on ocean waves at the Bragg frequency. Both modeling work and an analysis of radar data following the Tohoku 2011 tsunami have shown that, given proper detection algorithms, such radars could be used to detect tsunami-induced currents and issue a warning. However, long wave physics is such that tsunami currents will only rise above noise and background currents (i.e., be at least 10-15 cm/s), and become detectable, in fairly shallow water, which would limit the direct detection of tsunami currents by HF radar to nearshore areas unless there is a very wide shallow shelf. Here, we use numerical simulations of both HF radar remote sensing and tsunami propagation to develop and validate a new type of tsunami detection algorithm that does not have these limitations. To simulate the radar backscattered signal, we develop a numerical model including second-order effects in both wind waves and radar signal, with the wave angular frequency being modulated by a time-varying surface current combining tsunami and background currents. In each "radar cell", the model represents wind waves with random phases and amplitudes extracted from a specified (wind-speed-dependent) energy density frequency spectrum, and includes effects
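
    The underlying measurement rests on standard HF-radar relations: for deep-water waves the first-order Bragg line sits at f_B = sqrt(g/(π·λ_radar)), and a radial surface current adds a residual Doppler shift Δf = 2·v_r/λ_radar. A simplified sketch of these relations only (not the authors' detection algorithm), with hypothetical radar parameters:

      # Standard first-order Bragg/Doppler relations for a shore-based HF radar;
      # the radar wavelength and residual shift below are hypothetical.
      import math

      G = 9.81  # m/s^2

      def bragg_frequency(radar_wavelength_m):
          """First-order Bragg frequency (Hz) for deep-water gravity waves."""
          return math.sqrt(G / (math.pi * radar_wavelength_m))

      def radial_current(observed_shift_hz, radar_wavelength_m):
          """Radial surface current (m/s) from the residual Doppler shift."""
          delta_f = observed_shift_hz - bragg_frequency(radar_wavelength_m)
          return 0.5 * radar_wavelength_m * delta_f

      lam = 60.0                              # m, roughly a 5 MHz HF radar
      f_obs = bragg_frequency(lam) + 0.004    # Hz, 4 mHz residual shift
      print(f"Bragg line: {bragg_frequency(lam):.3f} Hz, "
            f"radial current: {radial_current(f_obs, lam) * 100:.1f} cm/s")

    With a 4 mHz residual shift on a 60 m radar wavelength this gives about 12 cm/s, of the order of the 10-15 cm/s detectability threshold quoted above.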

  12. High throughput, detailed, cell-specific neuroanatomy of dendritic spines using microinjection and confocal microscopy

    PubMed Central

    Dumitriu, Dani; Rodriguez, Alfredo; Morrison, John H.

    2012-01-01

    Morphological features such as size, shape and density of dendritic spines have been shown to reflect important synaptic functional attributes and potential for plasticity. Here we describe in detail a protocol for obtaining detailed morphometric analysis of spines using microinjection of fluorescent dyes, high resolution confocal microscopy, deconvolution and image analysis using NeuronStudio. Recent technical advancements include better preservation of tissue resulting in prolonged ability to microinject, and algorithmic improvements that compensate for the residual Z-smear inherent in all optical imaging. Confocal imaging parameters were probed systematically for the identification of both optimal resolution as well as highest efficiency. When combined, our methods yield size and density measurements comparable to serial section transmission electron microscopy in a fraction of the time. An experiment containing 3 experimental groups with 8 subjects in each can take as little as one month if optimized for speed, or approximately 4 to 5 months if the highest resolution and morphometric detail is sought. PMID:21886104

  13. A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.

    1998-01-01

    Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.

  14. An accurate dynamical electron diffraction algorithm for reflection high-energy electron diffraction

    NASA Astrophysics Data System (ADS)

    Huang, J.; Cai, C. Y.; Lv, C. L.; Zhou, G. W.; Wang, Y. G.

    2015-12-01

    The conventional multislice (CMS) method, one of the most popular dynamical electron diffraction calculation procedures in transmission electron microscopy, was introduced to calculate reflection high-energy electron diffraction (RHEED) because it is well adapted to deal with deviations from periodicity in the direction parallel to the surface. However, in the present work, we show that the CMS method is no longer sufficiently accurate for simulating RHEED at accelerating voltages of 3-100 kV because of the high-energy approximation. An accurate multislice (AMS) method can be an alternative for more accurate RHEED calculations with reasonable computing time. A detailed comparison of the numerical calculations of the AMS and CMS methods is carried out with respect to different accelerating voltages, surface structure models, Debye-Waller factors and glancing angles.
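
    For reference, one slice update of the conventional multislice method advances the wave function through a transmission function followed by a Fresnel propagator applied in Fourier space, psi_(n+1) = IFFT[ P(k) · FFT( t(r) · psi_n ) ]. The sketch below shows this generic CMS step with toy numbers; it is not the paper's AMS formulation:

      # One generic conventional-multislice (CMS) slice update; not the accurate
      # multislice (AMS) scheme proposed in the paper.
      import numpy as np

      def multislice_step(psi, phase, slice_thickness, wavelength, dx):
          """psi_{n+1} = IFFT[ P(k) * FFT( t(r) * psi_n ) ] for one slice."""
          t = np.exp(1j * phase)                       # transmission function
          n = psi.shape[0]
          kx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
          k2 = kx[None, :] ** 2 + kx[:, None] ** 2
          propagator = np.exp(-1j * np.pi * wavelength * slice_thickness * k2)
          return np.fft.ifft2(propagator * np.fft.fft2(t * psi))

      # Toy numbers: flat incident wave, weak random slice potential (arbitrary units).
      n = 128
      rng = np.random.default_rng(0)
      psi = np.ones((n, n), dtype=complex)
      psi = multislice_step(psi, 0.05 * rng.standard_normal((n, n)),
                            slice_thickness=2.0, wavelength=0.037, dx=0.1)
      print("intensity conserved:", bool(np.isclose(np.mean(np.abs(psi) ** 2), 1.0)))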

  15. Test and evaluation of the HIDEC engine uptrim algorithm. [Highly Integrated Digital Electronic Control for aircraft

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing the engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of more than 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify the model predictions at the conditions tested.
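
    In outline, such uptrim logic raises the EPR command when the computed stall margin exceeds what the flight condition and inlet distortion require, subject to a bound. The following is an illustrative sketch only, with made-up gains and limits, and is not the HIDEC control law:

      # Illustrative stall-margin-based EPR uptrim; gains, limits and numbers are
      # hypothetical and do not represent the HIDEC implementation.

      def epr_uptrim(epr_nominal, stall_margin_pct, required_margin_pct,
                     gain=0.02, max_uptrim_fraction=0.10):
          """Return an uptrimmed EPR command, bounded by a maximum uptrim fraction."""
          excess = stall_margin_pct - required_margin_pct
          if excess <= 0.0:
              return epr_nominal                      # no margin to spare
          uptrim = min(gain * excess, max_uptrim_fraction * epr_nominal)
          return epr_nominal + uptrim

      # Hypothetical subsonic condition: 25% computed margin, 15% required.
      print(epr_uptrim(epr_nominal=3.0, stall_margin_pct=25.0, required_margin_pct=15.0))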

  16. A new algorithm for fluid simulation of high density plasma discharges

    NASA Astrophysics Data System (ADS)

    Oh, Seon-Geun; Lee, Young-Jun; Choe, Heehwan; Jeon, Jae-Hong; Seo, Jong-Hyun

    2013-10-01

    Low-temperature, high-density plasma sources are widely used for electronic device fabrication, such as semiconductors, flat panel displays, and solar cells. Inductively coupled plasma and capacitively coupled plasma reactors are typical ones in these processes. Fluid simulation is one of the methods for transport modeling of high-density discharges, because the profiles of the plasma quantities are easily obtained. The short shielding time scale of an electric field perturbation is a major restriction on the simulation time step; in most cases, the simulation time step in the explicit method is less than 10^-13 s. To overcome this limitation, a new method for steady-state fluid simulation of high-density plasma discharges is suggested. Following the physical origin of the restriction on the simulation time step, the new method is developed using both analytic and numerical techniques. A simple application comparing the new method with a previously known one is given to study its validity. This work was supported in part by the International collaborative R&D program (N0000678), and by the Industrial Strategic Technology Development Program (10041681) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).

  17. Algorithm for statistical simulation of electromagnetic compatibility characteristics of high-frequency transmission channels in radioelectronic facilities

    NASA Astrophysics Data System (ADS)

    Ilin, Y. M.; Mints, S. V.

    1985-03-01

    Considering that the electromagnetic compatibility of radioelectronic equipment, and its dependence on the compatibility of individual components, are important criteria for the design and optimization of high-frequency transmission channels as well as for selecting their mode of operation, a statistical mathematical model is proposed for describing the spectral characteristics of such channels. This model assumes linear interaction of incoming parasitic signal components with the useful signal component in a nonlinear input amplifier. It also assumes that the spectral characteristics of individual equipment components are uncorrelated. The algorithm of statistical simulation and subsequent optimization on this basis consists of four successive steps: (1) analysis of equipment performance requirements; (2) structural synthesis of the channel and particularization of its microwave components; (3) analysis of the channel structure; (4) analysis of the statistical model, followed by optimization of channel components including microwave devices.

  18. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique for obtaining ionospheric measurements, such as the estimation of virtual height as a function of the scanned frequency. It is performed with a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several kinds of targets and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder still has to be carried out. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder, and the target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared to the currently implemented algorithm, and tested on actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different case studies were selected according to typical ionospheric and detection conditions.
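
    A representative adaptive-threshold detector of the kind compared in such studies is the cell-averaging scheme, in which each range cell is tested against a multiple of the noise power estimated from surrounding training cells. A minimal sketch of that classical variant (not necessarily the one tested on the AIS-INGV data):

      # Cell-averaging adaptive threshold over a simulated power profile; the
      # profile and the echo are synthetic, not AIS-INGV data.
      import numpy as np

      def adaptive_threshold_detect(power, guard=2, train=8, scale=10.0):
          """Flag range cells whose power exceeds scale * local noise estimate."""
          n = len(power)
          hits = np.zeros(n, dtype=bool)
          for i in range(n):
              lo = max(0, i - guard - train)
              hi = min(n, i + guard + train + 1)
              window = np.concatenate([power[lo:max(0, i - guard)],
                                       power[min(n, i + guard + 1):hi]])
              if window.size and power[i] > scale * window.mean():
                  hits[i] = True
          return hits

      rng = np.random.default_rng(1)
      profile = rng.exponential(1.0, 400)      # noise-like power profile
      profile[250] += 40.0                     # synthetic ionospheric echo
      print("detections at cells:", np.flatnonzero(adaptive_threshold_detect(profile)))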

  19. Selection of DNA aptamers against epidermal growth factor receptor with high affinity and specificity

    SciTech Connect

    Wang, Deng-Liang; Song, Yan-Ling; Zhu, Zhi; Li, Xi-Lan; Zou, Yuan; Yang, Hai-Tao; Wang, Jiang-Jie; Yao, Pei-Sen; Pan, Ru-Jun; Yang, Chaoyong James; Kang, De-Zhi

    2014-10-31

    Highlights: • This is the first report of a DNA aptamer against EGFR in vitro. • Aptamers can bind targets with high affinity and selectivity. • DNA aptamers are more stable, cheaper and more efficient than RNA aptamers. • Our selected DNA aptamer against EGFR has high affinity, with a K_d of 56 ± 7.3 nM. • Our selected DNA aptamer against EGFR has high selectivity. - Abstract: Epidermal growth factor receptor (EGFR/HER1/c-ErbB1) is overexpressed in many solid cancers, such as epidermoid carcinomas, malignant gliomas, etc. EGFR plays roles in the proliferation, invasion, angiogenesis and metastasis of malignant cancer cells and is an ideal antigen for clinical applications in cancer detection, imaging and therapy. Aptamers, the output of the systematic evolution of ligands by exponential enrichment (SELEX), are DNA/RNA oligonucleotides which can bind proteins and other substances with specificity. RNA aptamers are undesirable due to their instability and high cost of production. Conversely, DNA aptamers have attracted researchers' attention because they are easily synthesized, stable, selective, have high binding affinity and are cost-effective to produce. In this study, we have successfully identified DNA aptamers with high binding affinity and selectivity to EGFR. The aptamer named TuTu22, with a K_d of 56 ± 7.3 nM, was chosen from the identified DNA aptamers for further study. Flow cytometry analysis indicated that the TuTu22 aptamer was able to specifically recognize a variety of cancer cells expressing EGFR but did not bind to EGFR-negative cells. With all of the aforementioned advantages, the DNA aptamers reported here against the cancer biomarker EGFR will facilitate the development of novel targeted cancer detection, imaging and therapy.

  20. High efficiency site-specific genetic engineering of the mosquito genome

    PubMed Central

    Nimmo, D. D.; Alphey, L.; Meredith, J. M.; Eggleston, P.

    2006-01-01

    Current techniques for the genetic engineering of insect genomes utilize transposable genetic elements, which are inefficient, have limited carrying capacity and give rise to position effects and insertional mutagenesis. As an alternative, we investigated two site-specific integration mechanisms in the yellow fever mosquito, Aedes aegypti. One was a modified CRE/lox system from phage P1 and the other a viral integrase system from Streptomyces phage phi C31. The modified CRE/lox system consistently failed to produce stable germ-line transformants but the phi C31 system was highly successful, increasing integration efficiency by up to 7.9-fold. The ability to efficiently target transgenes to specific chromosomal locations and the potential to integrate very large transgenes has broad applicability to research on many medically and economically important species. PMID:16640723

  1. A novel fluorescent reagent for recognition of triplex DNA with high specificity and selectivity.

    PubMed

    Chen, Zongbao; Zhang, Huimi; Ma, Xiaoming; Lin, Zhenyu; Zhang, Lan; Chen, Guonan

    2015-11-21

    A fluorescent agent (DMT), which embeds into the triplex DNA structure, was screened for recognizing triplex DNA in a specific and selective manner. The triplex DNA was first formed with a complementary target sequence through two distinct and sequential events. The conditions, including pH, hybridization time, fluorescent agent concentration and embedding time, were optimized in the experiment. Under the optimum conditions, the fluorescence signal was enhanced up to 9-fold in comparison with DMT embedded into ssDNA, dsDNA and G-quadruplexes. Under the same fluorescence conditions, the changes in the fluorescence signal were also investigated with several kinds of base-mismatched DNAs. The results showed that our biosensor provided excellent discrimination efficiency toward mismatched target DNAs, which do not form triplex DNA. We preliminarily deduced the mechanism by which the fluorescent reagent recognizes triplex DNA with high specificity and selectivity. PMID:26456316

  2. High Resolution X Chromosome-Specific Array-CGH Detects New CNVs in Infertile Males

    PubMed Central

    Krausz, Csilla; Giachini, Claudia; Lo Giacco, Deborah; Daguin, Fabrice; Chianese, Chiara; Ars, Elisabet; Ruiz-Castane, Eduard; Forti, Gianni; Rossi, Elena

    2012-01-01

    Context The role of CNVs in male infertility is poorly defined, and only those linked to the Y chromosome have been the object of extensive research. Although it has been predicted that the X chromosome is also enriched in spermatogenesis genes, no clinically relevant gene mutations have been identified so far. Objectives In order to advance our understanding of the role of X-linked genetic factors in male infertility, we applied high resolution X chromosome specific array-CGH in 199 men with different sperm count followed by the analysis of selected, patient-specific deletions in large groups of cases and normozoospermic controls. Results We identified 73 CNVs, among which 55 are novel, providing the largest collection of X-linked CNVs in relation to spermatogenesis. We found 12 patient-specific deletions with potential clinical implication. Cancer Testis Antigen gene family members were the most frequently affected genes, and represent new genetic targets in relationship with altered spermatogenesis. One of the most relevant findings of our study is the significantly higher global burden of deletions in patients compared to controls due to an excessive rate of deletions/person (0.57 versus 0.21, respectively; p = 8.785×10−6) and to a higher mean sequence loss/person (11.79 Kb and 8.13 Kb, respectively; p = 3.435×10−4). Conclusions By the analysis of the X chromosome at the highest resolution available to date, in a large group of subjects with known sperm count we observed a deletion burden in relation to spermatogenic impairment and the lack of highly recurrent deletions on the X chromosome. We identified a number of potentially important patient-specific CNVs and candidate spermatogenesis genes, which represent novel targets for future investigations. PMID:23056185

  3. ADONIS, high count-rate HP-Ge {gamma} spectrometry algorithm: Irradiated fuel assembly measurement

    SciTech Connect

    Pin, P.; Barat, E.; Dautremer, T.; Montagu, T.; Normand, S.

    2011-07-01

    ADONIS is a digital system for gamma-ray spectrometry developed by CEA. The system achieves high count-rate gamma-ray spectrometry with correct dynamic dead-time correction up to incoming count rates of at least 3×10^6 events per second. An application of such a system at AREVA NC's La Hague plant is the scanning of irradiated fuel assemblies before reprocessing. The ADONIS system is presented, followed by the measurement set-up and, last, the measurement results together with reference measurements. (authors)
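
    The abstract does not specify the ADONIS dead-time treatment itself, but the classical models against which high count-rate systems are usually discussed are the non-paralyzable form n = m/(1 - m·τ) and the paralyzable form m = n·exp(-n·τ). A generic illustration with hypothetical numbers:

      # Classical dead-time models, shown for illustration only; tau and the rates
      # are hypothetical and unrelated to the ADONIS hardware.
      import math

      def true_rate_nonparalyzable(measured_cps, dead_time_s):
          """Non-paralyzable model: n = m / (1 - m * tau)."""
          return measured_cps / (1.0 - measured_cps * dead_time_s)

      def measured_rate_paralyzable(true_cps, dead_time_s):
          """Paralyzable model: m = n * exp(-n * tau)."""
          return true_cps * math.exp(-true_cps * dead_time_s)

      tau = 2e-7     # s, hypothetical per-event processing time
      m = 1.0e6      # counts/s actually recorded (hypothetical)
      print(f"non-paralyzable estimate of the incoming rate: "
            f"{true_rate_nonparalyzable(m, tau):.3e} cps")
      print(f"paralyzable model at 3e6 cps incoming: "
            f"{measured_rate_paralyzable(3.0e6, tau):.3e} cps recorded")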

  4. Dimeric CRISPR RNA-guided FokI nucleases for highly specific genome editing.

    PubMed

    Tsai, Shengdar Q; Wyvekens, Nicolas; Khayter, Cyd; Foden, Jennifer A; Thapar, Vishal; Reyon, Deepak; Goodwin, Mathew J; Aryee, Martin J; Joung, J Keith

    2014-06-01

    Monomeric CRISPR-Cas9 nucleases are widely used for targeted genome editing but can induce unwanted off-target mutations with high frequencies. Here we describe dimeric RNA-guided FokI nucleases (RFNs) that can recognize extended sequences and edit endogenous genes with high efficiencies in human cells. RFN cleavage activity depends strictly on the binding of two guide RNAs (gRNAs) to DNA with a defined spacing and orientation, substantially reducing the likelihood that a suitable target site will occur more than once in the genome and therefore improving specificities relative to wild-type Cas9 monomers. RFNs guided by a single gRNA generally induce lower levels of unwanted mutations than matched monomeric Cas9 nickases. In addition, we describe a simple method for expressing multiple gRNAs bearing any 5' end nucleotide, which gives dimeric RFNs a broad targeting range. RFNs combine the ease of RNA-based targeting with the specificity enhancement inherent to dimerization and are likely to be useful in applications that require highly precise genome editing. PMID:24770325

  5. Inexpensive Designer Antigen for Anti-HIV Antibody Detection with High Sensitivity and Specificity

    PubMed Central

    Talha, Sheikh M.; Salminen, Teppo; Chugh, Deepti A.; Swaminathan, Sathyamangalam; Soukka, Tero; Pettersson, Kim; Khanna, Navin

    2010-01-01

    A novel recombinant multiepitope protein (MEP) has been designed that consists of four linear, immunodominant, and phylogenetically conserved epitopes, taken from human immunodeficiency virus (HIV)-encoded antigens that are used in many third-generation immunoassay kits. This HIV-MEP has been evaluated for its diagnostic potential in the detection of anti-HIV antibodies in human sera. A synthetic MEP gene encoding these epitopes, joined by flexible peptide linkers in a single open reading frame, was designed and overexpressed in Escherichia coli. The recombinant HIV-MEP was purified using a single affinity step, yielding >20 mg pure protein/liter culture, and used as the coating antigen in an in-house immunoassay. Bound anti-HIV antibodies were detected by highly sensitive time-resolved fluorometry, using europium(III) chelate-labeled anti-human antibody. The sensitivity and specificity of the HIV-MEP were evaluated using Boston Biomedica worldwide HIV performance, HIV seroconversion, and viral coinfection panels and were found to be comparable with those of commercially available anti-HIV enzyme immunoassay (EIA) kits. The careful choice of epitopes, high epitope density, and an E. coli-based expression system, coupled with a simple purification protocol and the use of europium(III) chelate-labeled tracer, provide the capability for the development of an inexpensive diagnostic test with high degrees of sensitivity and specificity. PMID:20089793

  6. Endophytic Fungal Communities Associated with Vascular Plants in the High Arctic Zone Are Highly Diverse and Host-Plant Specific.

    PubMed

    Zhang, Tao; Yao, Yi-Feng

    2015-01-01

    This study assessed the diversity and distribution of endophytic fungal communities associated with the leaves and stems of four vascular plant species in the High Arctic using 454 pyrosequencing with fungal-specific primers targeting the ITS region. Endophytic fungal communities showed high diversity. The 76,691 sequences obtained belonged to 250 operational taxonomic units (OTUs). Of these OTUs, 190 belonged to Ascomycota, 50 to Basidiomycota, 1 to Chytridiomycota, and 9 to unknown fungi. The dominant orders were Helotiales, Pleosporales, Capnodiales, and Tremellales, whereas the common known fungal genera were Cryptococcus, Rhizosphaera, Mycopappus, Melampsora, Tetracladium, Phaeosphaeria, Mrakia, Venturia, and Leptosphaeria. Both the climate and host-related factors might shape the fungal communities associated with the four Arctic plant species in this region. These results suggested the presence of an interesting endophytic fungal community and could improve our understanding of fungal evolution and ecology in the Arctic terrestrial ecosystems. PMID:26067836

  7. Endophytic Fungal Communities Associated with Vascular Plants in the High Arctic Zone Are Highly Diverse and Host-Plant Specific

    PubMed Central

    Zhang, Tao; Yao, Yi-Feng

    2015-01-01

    This study assessed the diversity and distribution of endophytic fungal communities associated with the leaves and stems of four vascular plant species in the High Arctic using 454 pyrosequencing with fungal-specific primers targeting the ITS region. Endophytic fungal communities showed high diversity. The 76,691 sequences obtained belonged to 250 operational taxonomic units (OTUs). Of these OTUs, 190 belonged to Ascomycota, 50 to Basidiomycota, 1 to Chytridiomycota, and 9 to unknown fungi. The dominant orders were Helotiales, Pleosporales, Capnodiales, and Tremellales, whereas the common known fungal genera were Cryptococcus, Rhizosphaera, Mycopappus, Melampsora, Tetracladium, Phaeosphaeria, Mrakia, Venturia, and Leptosphaeria. Both the climate and host-related factors might shape the fungal communities associated with the four Arctic plant species in this region. These results suggested the presence of an interesting endophytic fungal community and could improve our understanding of fungal evolution and ecology in the Arctic terrestrial ecosystems. PMID:26067836

  8. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    SciTech Connect

    Chen Ting; Kim, Sung; Goyal, Sharad; Jabbour, Salma; Zhou Jinghao; Rajagopal, Gunaretnum; Haffty, Bruce; Yue Ning

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a
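
    At the core of the grayscale term is the classical demons displacement update, u = (m - f)·∇f / (|∇f|² + (m - f)²), computed per voxel from the intensity difference and the gradient of the fixed image. The sketch below shows only this classical step; the paper's object-based and seed constraints, hierarchical scheme and air-region handling are omitted, and in practice the field is also smoothed and iterated.

      # Classical demons displacement update (one step); the object and seed
      # constraints described in the paper are not included here.
      import numpy as np

      def demons_update(fixed, moving, eps=1e-6):
          """u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2), per voxel."""
          diff = moving - fixed
          grads = np.gradient(fixed)                       # one array per axis
          denom = sum(g ** 2 for g in grads) + diff ** 2 + eps
          return [diff * g / denom for g in grads]         # displacement components

      rng = np.random.default_rng(0)
      fixed = rng.random((32, 32, 32))
      moving = np.roll(fixed, 1, axis=0)                   # synthetic 1-voxel shift
      ux, uy, uz = demons_update(fixed, moving)
      print("mean |u| along the shifted axis:", float(np.mean(np.abs(ux))))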

  9. Specific high-affinity binding of high density lipoproteins to cultured human skin fibroblasts and arterial smooth muscle cells.

    PubMed

    Biesbroeck, R; Oram, J F; Albers, J J; Bierman, E L

    1983-03-01

    Binding of human high density lipoproteins (HDL, d = 1.063-1.21) to cultured human fibroblasts and human arterial smooth muscle cells was studied using HDL subjected to heparin-agarose affinity chromatography to remove apoprotein (apo) E and B. Saturation curves for binding of apo E-free 125I-HDL showed at least two components: low-affinity nonsaturable binding and high-affinity binding that saturated at approximately 20 micrograms HDL protein/ml. Scatchard analysis of high-affinity binding of apo E-free 125I-HDL to normal fibroblasts yielded plots that were significantly linear, indicative of a single class of binding sites. Saturation curves for binding of both 125I-HDL3 (d = 1.125-1.21) and apo E-free 125I-HDL to low density lipoprotein (LDL) receptor-negative fibroblasts also showed high-affinity binding that yielded linear Scatchard plots. On a total protein basis, HDL2 (d = 1.063-1.10), HDL3 and very high density lipoproteins (VHDL, d = 1.21-1.25) competed as effectively as apo E-free HDL for binding of apo E-free 125I-HDL to normal fibroblasts. Also, HDL2, HDL3, and VHDL competed similarly for binding of 125I-HDL3 to LDL receptor-negative fibroblasts. In contrast, LDL was a weak competitor for HDL binding. These results indicate that both human fibroblasts and arterial smooth muscle cells possess specific high affinity HDL binding sites. As indicated by enhanced LDL binding and degradation and increased sterol synthesis, apo E-free HDL3 promoted cholesterol efflux from fibroblasts. These effects also saturated at HDL3 concentrations of 20 micrograms/ml, suggesting that promotion of cholesterol efflux by HDL is mediated by binding to the high-affinity cell surface sites. PMID:6826722
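
    Scatchard analysis linearizes single-site binding as B/F = (Bmax - B)/Kd, so a plot of bound/free against bound has slope -1/Kd and x-intercept Bmax. A sketch with synthetic data (illustrative numbers, not the paper's measurements):

      # Scatchard linearization on synthetic single-site binding data; Kd and Bmax
      # below are made up for illustration.
      import numpy as np

      KD_TRUE, BMAX_TRUE = 20.0, 100.0    # ug/ml and arbitrary bound units

      free = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])    # free HDL, ug/ml
      bound = BMAX_TRUE * free / (KD_TRUE + free)             # single-site isotherm

      # B/F = Bmax/Kd - B/Kd: fit a line of B/F against B.
      ratio = bound / free
      slope, intercept = np.polyfit(bound, ratio, 1)
      kd_est = -1.0 / slope
      bmax_est = intercept * kd_est
      print(f"Kd ~ {kd_est:.1f} ug/ml, Bmax ~ {bmax_est:.1f} (true: 20.0, 100.0)")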

  10. Rapid perceptual adaptation to high gravitoinertial force levels: Evidence for context-specific adaptation

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.; Graybiel, A.

    1982-01-01

    Subjects exposed to periodic variations in gravitoinertial force (2-G peak) in parabolic flight maneuvers quickly come to perceive the peak force level as having decreased in intensity. By the end of a 40-parabola flight, the decrease in apparent force is approximately 40%. On successive flight days, the apparent intensity of the force loads seems to decrease as well, indicating a cumulative adaptive effect. None of the subjects reported feeling abnormally 'light' for more than a minute or two after return to 1-G background force levels. The pattern of findings suggests a context-specific adaptation to high-force levels.

  11. Image Registration of High-Resolution UAV Data: The New HyPARE Algorithm

    NASA Astrophysics Data System (ADS)

    Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.

    2013-08-01

    Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, they are, due to their agility, suitable for many applications. Hence the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential: it serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. Therefore we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by registering 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.

  12. High Resolution Doppler Imager FY 2001, 2002, 2003 Operations and Algorithm Maintenance

    NASA Technical Reports Server (NTRS)

    Skinner, Wilbert

    2004-01-01

    During the performance period of this grant, HRDI (High Resolution Doppler Imager) operations remained nominal. The instrument has suffered no loss of scientific capability and operates whenever sufficient power is available. Generally, there are approximately 5-7 days per month when the power level is too low to permit observations. The daily latitude coverage for HRDI measurements in the mesosphere and lower thermosphere (MLT) region is shown; it indicates that, during the time of this grant, HRDI operations collected data at a rate comparable to that achieved during the UARS (Upper Atmosphere Research Satellite) prime mission (1991-1995). Data collection emphasized MLT winds to support the validation efforts of the TIDI instrument on TIMED, thereby fulfilling one of the primary objectives of this phase of the UARS mission. Skinner et al. (2003) present a summary of the instrument performance during this period.

  13. Algorithms and methodology used in constructing high-resolution terrain databases

    NASA Astrophysics Data System (ADS)

    Williams, Bryan L.; Wilkosz, Aaron

    1998-07-01

    This paper presents a top-level description of methods used to generate high-resolution 3D IR digital terrain databases using soft photogrammetry. The 3D IR database is derived from aerial photography and is made up of digital ground plane elevation map, vegetation height elevation map, material classification map, object data (tanks, buildings, etc.), and temperature radiance map. Steps required to generate some of these elements are outlined. The use of metric photogrammetry is discussed in the context of elevation map development; and methods employed to generate the material classification maps are given. The developed databases are used by the US Army Aviation and Missile Command to evaluate the performance of various missile systems. A discussion is also presented on database certification which consists of validation, verification, and accreditation procedures followed to certify that the developed databases give a true representation of the area of interest, and are fully compatible with the targeted digital simulators.

  14. High-resolution algorithms of sharp interface treatment for compressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xueying; Yang, Haiting

    2015-03-01

    In this paper, an arbitrary high-order derivatives (ADER) scheme based on the generalised Riemann problem is proposed to simulate multi-material flows, coupled with a ghost fluid method. The states at cell interfaces are reconstructed by interpolating polynomials which are piecewise smooth functions, and these states are treated as the equivalent of the left and right states of the Riemann problem. The contact solvers are extrapolated in the vicinity of the contact points to supply the ghost fluid states. The numerical method is applied to compressible flows with sharp discontinuities, such as the collision of two fluids of different physical states and gas-liquid two-phase flows. The numerical results demonstrate that spurious oscillations through the contact discontinuities can be prevented effectively and that the sharp interface can be captured efficiently.

  15. High performance file compression algorithm for video-on-demand e-learning system

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2005-10-01

    Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need to improve quality in the e-learning education system. The authors propose a new video-image compression processing system that ingeniously employs the features of the lecturing scene: recognizing a lecturer and a lecture stick by pattern recognition techniques, the system deletes the figure of the lecturer, which is of low importance, and displays only the end point of the lecture stick. This enables the creation of highly compressed lecture video files that are suitable for Internet distribution. We compare this technique with other simple methods, such as lower frame-rate video files and ordinary MPEG files. The experimental results show that the proposed compression processing system is much more effective than the others.

  16. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  17. Image haze removal algorithm for transmission lines based on weighted Gaussian PDF

    NASA Astrophysics Data System (ADS)

    Wang, Wanguo; Zhang, Jingjing; Li, Li; Wang, Zhenli; Li, Jianxiang; Zhao, Jinlong

    2015-03-01

    Histogram specification is a useful algorithm in the image-enhancement field. This paper proposes an image haze-removal algorithm based on histogram specification with a weighted Gaussian probability density function (Gaussian PDF). First, we consider the characteristics of image histograms captured in sunny, foggy and hazy weather. Then, we address the weak intensity of the specified image by changing the variance and the weights of the Gaussian PDF. The algorithm can remove the effects of fog, and experimental results show the superiority of the proposed algorithm compared with standard histogram specification. It also has advantages in terms of low computational complexity, high efficiency, and no need for manual intervention.
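
    In generic histogram specification, intensities are remapped so that the output histogram follows a chosen target distribution by matching cumulative distribution functions; with a Gaussian-shaped target this spreads a narrow, haze-compressed histogram over the full gray range. A minimal sketch of that generic step (the paper's particular weighting of the Gaussian PDF is not reproduced here):

      # Generic histogram specification toward a Gaussian-shaped target histogram;
      # the target parameters and the synthetic "hazy" frame are illustrative.
      import numpy as np

      def specify_to_gaussian(image_u8, mean=128.0, sigma=48.0):
          levels = np.arange(256)
          # Source CDF from the image histogram.
          src_hist = np.bincount(image_u8.ravel(), minlength=256).astype(float)
          src_cdf = np.cumsum(src_hist) / src_hist.sum()
          # Target CDF from a discretized Gaussian PDF over the gray levels.
          target_pdf = np.exp(-0.5 * ((levels - mean) / sigma) ** 2)
          target_cdf = np.cumsum(target_pdf) / target_pdf.sum()
          # Map each gray level to the target level with the closest CDF value.
          mapping = np.searchsorted(target_cdf, src_cdf).clip(0, 255).astype(np.uint8)
          return mapping[image_u8]

      rng = np.random.default_rng(0)
      hazy = rng.normal(170.0, 12.0, (240, 320)).clip(0, 255).astype(np.uint8)
      clear = specify_to_gaussian(hazy)
      print("std before/after:", round(float(hazy.std()), 1), round(float(clear.std()), 1))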

  18. Implementation of algorithms to discriminate chemical/biological airbursts from high explosive airbursts utilizing acoustic signatures

    NASA Astrophysics Data System (ADS)

    Hohil, Myron E.; Desai, Sachi; Morcos, Amir

    2006-05-01

    The Army is currently developing acoustic sensor systems that will provide extended range surveillance, detection, and identification for force protection and tactical security. A network of such sensors remotely deployed in conjunction with a central processing node (or gateway) will provide early warning and assessment of enemy threats, near real-time situational awareness to commanders, and may reduce potential hazards to the soldier. In contrast, the current detection of chemical/biological (CB) agents expelled into a battlefield environment is limited to the response of chemical sensors that must be located within close proximity to the CB agent. Since chemical sensors detect hazardous agents through contact, the sensor range to an airburst is the key-limiting factor in identifying a potential CB weapon attack. The associated sensor reporting latencies must be minimized to give sufficient preparation time to field commanders, who must assess if an attack is about to occur, has occurred, or if occurred, the type of agent that soldiers might be exposed to. The long-range propagation of acoustic blast waves from heavy artillery blasts, which are typical in a battlefield environment, introduces a feature for using acoustics and other sensor suite technologies for the early detection and identification of CB threats. Employing disparate sensor technologies implies that warning of a potential CB attack can be provided to the solider more rapidly and from a safer distance when compared to current conventional methods. Distinct characteristics arise within the different airburst signatures because High Explosive (HE) warheads emphasize concussive and shrapnel effects, while chemical/biological warheads are designed to disperse their contents over immense areas, therefore utilizing a slower burning, less intensive explosion to mix and distribute their contents. Highly reliable discrimination (100%) has been demonstrated at the Portable Area Warning Surveillance System

  19. Implementation of algorithms to discriminate between chemical/biological airbursts and high explosive airbursts

    NASA Astrophysics Data System (ADS)

    Hohil, Myron E.; Desai, Sachi; Morcos, Amir

    2006-09-01

    The Army is currently developing acoustic sensor systems that will provide extended range surveillance, detection, and identification for force protection and tactical security. A network of such sensors remotely deployed in conjunction with a central processing node (or gateway) will provide early warning and assessment of enemy threats, near real-time situational awareness to commanders, and may reduce potential hazards to the soldier. In contrast, the current detection of chemical/biological (CB) agents expelled into a battlefield environment is limited to the response of chemical sensors that must be located within close proximity to the CB agent. Since chemical sensors detect hazardous agents through contact, the sensor range to an airburst is the key-limiting factor in identifying a potential CB weapon attack. The associated sensor reporting latencies must be minimized to give sufficient preparation time to field commanders, who must assess if an attack is about to occur, has occurred, or if occurred, the type of agent that soldiers might be exposed to. The long-range propagation of acoustic blast waves from heavy artillery blasts, which are typical in a battlefield environment, introduces a feature for using acoustics and other sensor suite technologies for the early detection and identification of CB threats. Employing disparate sensor technologies implies that warning of a potential CB attack can be provided to the solider more rapidly and from a safer distance when compared to current conventional methods. Distinct characteristics arise within the different airburst signatures because High Explosive (HE) warheads emphasize concussive and shrapnel effects, while chemical/biological warheads are designed to disperse their contents over immense areas, therefore utilizing a slower burning, less intensive explosion to mix and distribute their contents. Highly reliable discrimination (100%) has been demonstrated at the Portable Area Warning Surveillance System

  20. How Does Fingolimod (Gilenya®) Fit in the Treatment Algorithm for Highly Active Relapsing-Remitting Multiple Sclerosis?

    PubMed Central

    Fazekas, Franz; Bajenaru, Ovidiu; Berger, Thomas; Fabjan, Tanja Hojs; Ledinek, Alenka Horvat; Jakab, Gábor; Komoly, Samuel; Kobys, Tetiana; Kraus, Jörg; Kurča, Egon; Kyriakides, Theodoros; Lisý, L'ubomír; Milanov, Ivan; Nehrych, Tetyana; Moskovko, Sergii; Panayiotou, Panayiotis; Jazbec, Saša Šega; Sokolova, Larysa; Taláb, Radomír; Traykov, Latchezar; Turčáni, Peter; Vass, Karl; Vella, Norbert; Voloshyná, Nataliya; Havrdová, Eva

    2013-01-01

    Multiple sclerosis (MS) is a neurological disorder characterized by inflammatory demyelination and neurodegeneration in the central nervous system. Until recently, disease-modifying treatment was based on agents requiring parenteral delivery, thus limiting long-term compliance. Basic treatments such as beta-interferon provide only moderate efficacy, and although therapies for second-line treatment and highly active MS are more effective, they are associated with potentially severe side effects. Fingolimod (Gilenya®) is the first oral treatment of MS and has recently been approved as single disease-modifying therapy in highly active relapsing-remitting multiple sclerosis (RRMS) for adult patients with high disease activity despite basic treatment (beta-interferon) and for treatment-naïve patients with rapidly evolving severe RRMS. At a scientific meeting that took place in Vienna on November 18th, 2011, experts from ten Central and Eastern European countries discussed the clinical benefits and potential risks of fingolimod for MS, suggested how the new therapy fits within the current treatment algorithm and provided expert opinion for the selection and management of patients. PMID:23641231

  1. Dual super-systolic core for real-time reconstructive algorithms of high-resolution radar/SAR imaging systems.

    PubMed

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped into an efficient high performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964

  2. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash

    PubMed Central

    Pelletier, Mathew G.

    2008-01-01

    One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPUs) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit (GPU), for processing of the cotton trash images, a speed-up of over 6.5 times over optimized code running on the PC's central processing unit (CPU) was gained. The new
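
    As a purely hypothetical aside, the data-parallel character that makes this vision task a good fit for a GPU can be seen in a per-pixel formulation like the NumPy sketch below; the segmentation rule and threshold are invented for illustration and are not the paper's algorithm.

```python
# Hypothetical per-pixel trash segmentation, not the paper's algorithm.
# Each pixel is processed independently, which is the property that lets
# the computation be mapped onto thousands of GPU threads.
import numpy as np

def trash_mask(rgb: np.ndarray, darkness: float = 0.35) -> np.ndarray:
    """Flag pixels whose luminance falls well below the lint background."""
    luminance = rgb @ np.array([0.299, 0.587, 0.114])  # per-pixel, data-parallel
    return luminance < darkness * luminance.max()

def trash_fraction(rgb: np.ndarray) -> float:
    """Fraction of image area flagged as trash, for the cleaning controller."""
    return float(trash_mask(rgb / 255.0).mean())
```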

  3. High throughput peptide mapping method for analysis of site specific monoclonal antibody oxidation.

    PubMed

    Li, Xiaojuan; Xu, Wei; Wang, Yi; Zhao, Jia; Liu, Yan-Hui; Richardson, Daisy; Li, Huijuan; Shameem, Mohammed; Yang, Xiaoyu

    2016-08-19

    Oxidation of therapeutic monoclonal antibodies (mAbs) often occurs on surface exposed methionine and tryptophan residues during their production in cell culture, purification, and storage, and can potentially impact the binding to their targets. Characterization of site specific oxidation is critical for antibody quality control. Antibody oxidation is commonly determined by peptide mapping/LC-MS methods, which normally require a long (up to 24h) digestion step. The prolonged sample preparation procedure could result in oxidation artifacts of susceptible methionine and tryptophan residues. In this paper, we developed a rapid and simple UV based peptide mapping method that incorporates an 8-min trypsin in-solution digestion protocol for analysis of oxidation. This method is able to determine oxidation levels at specific residues of a mAb based on the peptide UV traces within <1h, from either TBHP treated or UV light stressed samples. This is the simplest and fastest method reported thus far for site specific oxidation analysis, and can be applied for routine or high throughput analysis of mAb oxidation during various stability and degradation studies. By using the UV trace, the method allows more accurate measurement than mass spectrometry and can be potentially implemented as a release assay. It has been successfully used to monitor antibody oxidation in real time stability studies. PMID:27432793
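
    Although the abstract does not spell out the quantitation formula, site-specific oxidation from a UV trace is commonly reported as the oxidized peptide's share of the combined (native plus oxidized) peak area; the numbers in the minimal sketch below are invented for illustration and the convention shown is not necessarily the paper's exact scheme.

```python
# Percent oxidation at a residue from UV peak areas of the native and
# oxidized tryptic peptides (illustrative values; a common quantitation
# convention, not necessarily the exact scheme used in the paper).
def percent_oxidation(area_oxidized: float, area_native: float) -> float:
    return 100.0 * area_oxidized / (area_oxidized + area_native)

print(percent_oxidation(area_oxidized=1.2e4, area_native=2.3e5))  # ~4.96 %
```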

  4. Specific Delivery of MiRNA for High Efficient Inhibition of Prostate Cancer by RNA Nanotechnology.

    PubMed

    Binzel, Daniel W; Shu, Yi; Li, Hui; Sun, Meiyan; Zhang, Qunshu; Shu, Dan; Guo, Bin; Guo, Peixuan

    2016-08-01

    Both siRNA and miRNA can serve as powerful gene-silencing reagents but their specific delivery to cancer cells in vivo without collateral damage to healthy cells remains challenging. We report here the application of RNA nanotechnology for specific and efficient delivery of anti-miRNA seed-targeting sequence to block the growth of prostate cancer in mouse models. Utilizing the thermodynamically ultra-stable three-way junction of the pRNA of phi29 DNA packaging motor, RNA nanoparticles were constructed by bottom-up self-assembly containing the anti-prostate-specific membrane antigen (PSMA) RNA aptamer as a targeting ligand and anti-miR17 or anti-miR21 as therapeutic modules. The 16 nm RNase-resistant and thermodynamically stable RNA nanoparticles remained intact after systemic injection in mice and strongly bound to tumors with little or no accumulation in healthy organs 8 hours postinjection, and subsequently repressed tumor growth at low doses with high efficiency. PMID:27125502

  5. Fast algorithm for nonlinear acoustics and high-intensity focused ultrasound modeling

    NASA Astrophysics Data System (ADS)

    Curra, Francesco P.; Kargl, Steven G.; Crum, Lawrence A.

    2001-05-01

    The inhomogeneous characteristics of biological media and the nonlinear nature of sound propagation at high-intensity focused ultrasound (HIFU) regimes make accurate modeling of real HIFU applications a challenging task in terms of computational time and resources. A fast, dynamically adaptive time-domain method that drastically reduces these pitfalls is presented for the solution of multidimensional HIFU problems in complex geometries. The model, based on lifted interpolating second-generation wavelets in a collocation approach, consists of the coupled solution of the full-wave nonlinear equation of sound with the bioheat equation for temperature computation. It accounts for nonlinear acoustic propagation, arbitrary frequency power law for attenuation, multiple reflections, and backscattered fields. The characteristic localization of wavelets in both space and wave number domains allows for accurate simulations of strong material inhomogeneities and steep nonlinear processes at a reduced number of collocation points, while the natural multiresolution analysis of the wavelet decomposition introduces automatic grid refinement in regions where localized structures are present. Compared to standard finite-difference or spectral schemes on uniform fine grids, this method shows significant savings in computational time and memory requirements proportional with the dimensionality of the problem. [Work supported by U.S. Army Medical Research Acquisition Activity through the University.
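
    For orientation, a commonly used pairing for this kind of coupled simulation is the Westervelt equation for the acoustic pressure p together with the Pennes bioheat equation for the tissue temperature T; this is a standard-model sketch only, since the paper's full-wave formulation with an arbitrary frequency power law for attenuation may differ in detail.

```latex
% Standard model pair often used for HIFU heating simulations (the paper's
% exact formulation may differ): Westervelt equation and Pennes bioheat
% equation, with Q the heat deposited by acoustic absorption.
\begin{align}
  \nabla^{2} p \;-\; \frac{1}{c_{0}^{2}} \frac{\partial^{2} p}{\partial t^{2}}
  \;+\; \frac{\delta}{c_{0}^{4}} \frac{\partial^{3} p}{\partial t^{3}}
  \;+\; \frac{\beta}{\rho_{0} c_{0}^{4}} \frac{\partial^{2} p^{2}}{\partial t^{2}} &= 0, \\
  \rho_{t} C_{t} \frac{\partial T}{\partial t} &=
  \kappa \nabla^{2} T \;-\; w_{b} C_{b}\,(T - T_{a}) \;+\; Q .
\end{align}
```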

  6. Teaching High School Students Machine Learning Algorithms to Analyze Flood Risk Factors in River Deltas

    NASA Astrophysics Data System (ADS)

    Rose, R.; Aizenman, H.; Mei, E.; Choudhury, N.

    2013-12-01

    High School students interested in the STEM fields benefit most when actively participating, so I created a series of learning modules on how to analyze complex systems using machine learning; the modules give automated feedback to students. The automated feedback gives timely responses that encourage the students to continue testing and enhancing their programs. I have designed my modules to take a tactical learning approach in conveying the concepts behind correlation, linear regression, and vector distance based classification and clustering. On successful completion of these modules, students will learn how to calculate linear regression, Pearson's correlation, and apply classification and clustering techniques to a dataset. Working on these modules will allow the students to take back to the classroom what they've learned and then apply it to the Earth Science curriculum. During my research this summer, we applied these lessons to analyzing river deltas; we looked at trends in the different variables over time, looked for similarities in NDVI, precipitation, inundation, runoff and discharge, and attempted to predict floods based on the precipitation, waves mean, area of discharge, NDVI, and inundation.
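
    Since the modules center on Pearson's correlation and least-squares regression, a minimal NumPy version of those two computations is sketched below; the example arrays are placeholder values, not data from the study.

```python
# Minimal NumPy versions of the quantities the modules teach.
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson's correlation coefficient between two equal-length series."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def linear_fit(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Least-squares slope and intercept for y ≈ slope * x + intercept."""
    slope = pearson_r(x, y) * y.std() / x.std()
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Placeholder flood-related series (illustrative values only):
precip = np.array([3.1, 4.0, 2.2, 5.6, 4.8])
discharge = np.array([210.0, 260.0, 150.0, 330.0, 300.0])
print(pearson_r(precip, discharge), linear_fit(precip, discharge))
```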

  7. Three Recombinant Engineered Antibodies against Recombinant Tags with High Affinity and Specificity

    PubMed Central

    Zhao, Hongyu; Shen, Ao; Xiang, Yang K.; Corey, David P.

    2016-01-01

    We describe three recombinant engineered antibodies against three recombinant epitope tags, constructed with divalent binding arms to recognize divalent epitopes and so achieve high affinity and specificity. In two versions, an epitope is inserted in tandem into a protein of interest, and a homodimeric antibody is constructed by fusing a high-affinity epitope-binding domain to a human or mouse Fc domain. In a third, a heterodimeric antibody is constructed by fusing two different epitope-binding domains which target two different binding sites in GFP, to polarized Fc fragments. These antibody/epitope pairs have affinities in the low picomolar range and are useful tools for many antibody-based applications. PMID:26943906

  8. Establishing Specifications for Low Enriched Uranium Fuel Operations Conducted Outside the High Flux Isotope Reactor Site

    SciTech Connect

    Pinkston, Daniel; Primm, Trent; Renfro, David G; Sease, John D

    2010-10-01

    The National Nuclear Security Administration (NNSA) has funded staff at Oak Ridge National Laboratory (ORNL) to study the conversion of the High Flux Isotope Reactor (HFIR) from the current highly enriched uranium (HEU) fuel to low enriched uranium (LEU) fuel. The LEU fuel form is a metal alloy that has never been used in HFIR or any HFIR-like reactor. This report documents a process for the creation of a fuel specification that will meet all applicable regulations and guidelines to which UT-Battelle, LLC (UTB), the operating contractor for ORNL, must adhere. This process will allow UTB to purchase LEU fuel for HFIR and be assured of the quality of the fuel being procured.

  9. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations - High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.
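
    For reference, the fourth-order streamfunction formulation referred to here can be written in the standard form below (the paper's nondimensionalization may differ), with the velocity components recovered from ψ.

```latex
% Streamfunction form of the 2-D incompressible Navier-Stokes equations
% (standard nondimensional form; the paper's scaling may differ).
\begin{align}
  u &= \frac{\partial \psi}{\partial y}, \qquad
  v = -\frac{\partial \psi}{\partial x}, \\
  \frac{\partial}{\partial t} \nabla^{2}\psi
  &+ \frac{\partial \psi}{\partial y}\,\frac{\partial}{\partial x} \nabla^{2}\psi
  - \frac{\partial \psi}{\partial x}\,\frac{\partial}{\partial y} \nabla^{2}\psi
  = \frac{1}{Re}\,\nabla^{4}\psi .
\end{align}
```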

  10. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations: High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  11. Automata learning algorithms and processes for providing more complete systems requirements specification by scenario generation, CSP-based syntax-oriented model construction, and R2D2C system requirements transformation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Margaria, Tiziana (Inventor); Rash, James L. (Inventor); Rouff, Christopher A. (Inventor); Steffen, Bernard (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which, in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements-based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation via automata learning. This may support the systematic completion of requirements that are partial by nature, providing focus on the most prominent scenarios. This may generalize requirement skeletons by extrapolation and may indicate, by way of automatically generated traces, where the requirement specification is too loose and additional information is required.

  12. Isolation of a Highly Thermal Stable Lama Single Domain Antibody Specific for Staphylococcus aureus Enterotoxin B

    PubMed Central

    2011-01-01

    Background Camelids and sharks possess a unique subclass of antibodies comprised of only heavy chains. The antigen binding fragments of these unique antibodies can be cloned and expressed as single domain antibodies (sdAbs). The ability of these small antigen-binding molecules to refold after heating to achieve their original structure, as well as their diminutive size, makes them attractive candidates for diagnostic assays. Results Here we describe the isolation of an sdAb against Staphylococcus aureus enterotoxin B (SEB). The clone, A3, was found to have high affinity (Kd = 75 pM) and good specificity for SEB, showing no cross reactivity to related molecules such as Staphylococcal enterotoxin A (SEA), Staphylococcal enterotoxin D (SED), and Shiga toxin. Most remarkably, this anti-SEB sdAb had an extremely high Tm of 85°C and an ability to refold after heating to 95°C. The sharp Tm, determined by circular dichroism, was found to contrast with the gradual decrease observed in intrinsic fluorescence. We demonstrated the utility of this sdAb as a capture and detector molecule in Luminex based assays providing limits of detection (LODs) of at least 64 pg/mL. Conclusion The anti-SEB sdAb A3 was found to have a high affinity and an extraordinarily high Tm and could still refold to recover activity after heat denaturation. This combination of heat resilience and strong, specific binding makes this sdAb a good candidate for use in antibody-based toxin detection technologies. PMID:21933444

  13. Heterologous expression and pro-peptide supported refolding of the high specific endopeptidase Lys-C.

    PubMed

    Stressler, Timo; Eisele, Thomas; Meyer, Susanne; Wangler, Julia; Hug, Thomas; Lutz-Wahl, Sabine; Fischer, Lutz

    2016-02-01

    The highly specific lysyl endopeptidase (Lys-C; EC 3.4.21.50) is often used for the initial fragmentation of polypeptide chains during protein sequence analysis. However, due to its specificity it could also be a useful tool for the production of tailor-made protein hydrolysates with, for example, bioactive or techno-functional properties. Up to now, the high price has made this application nearly impossible. In this work, the increased expression of Lys-C, optimized for Escherichia coli, was investigated. The cloned sequence had a short artificial N-terminal pro-peptide (MGSK). The expression of MGSK-Lys-C was tested using three expression vectors and five E. coli host strains. The highest expression rate was obtained for the expression system consisting of the host strain E. coli JM109 and the rhamnose-inducible expression vector pJOE. A Lys-C activity of 9340 ± 555 nkatTos-GPK-pNA/Lculture could be achieved under optimized cultivation conditions after chemical refolding. Furthermore, the influence of the native pre-N-pro peptide of Lys-C from Lysobacter enzymogenes ssp. enzymogenes ATCC 27796 on Lys-C refolding was investigated. The pre-N-pro peptide was expressed recombinantly in E. coli JM109 using the pJOE expression vector. The optimal concentration of the pre-N-pro peptide in the refolding procedure was 100 μg/mL refolding buffer, and the Lys-C activity could be increased to 541,720 nkatTos-GPK-pNA/Lculture. With the results presented, the expensive lysyl endopeptidase can be produced with high activity and in high amounts, and the potential of Lys-C for tailor-made protein hydrolysates with bioactive (e.g. antihypertensive) and/or techno-functional (e.g. foaming, emulsifying) properties can be investigated in future studies. PMID:26431800

  14. AN ACTIVE-PASSIVE COMBINED ALGORITHM FOR HIGH SPATIAL RESOLUTION RETRIEVAL OF SOIL MOISTURE FROM SATELLITE SENSORS (Invited)

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Mladenova, I. E.; Narayan, U.

    2009-12-01

    Soil moisture is known to be an essential factor in controlling the partitioning of rainfall into surface runoff and infiltration and of solar energy into latent and sensible heat fluxes. Remote sensing has long proven its capability to obtain soil moisture in near real-time. However, at the present time the Advanced Scanning Microwave Radiometer (AMSR-E) on board NASA's AQUA platform is the only satellite sensor that supplies a soil moisture product. AMSR-E's coarse spatial resolution (~50 km at 6.9 GHz) strongly limits its applicability for small-scale studies. A very promising technique for spatial disaggregation by combining radar and radiometer observations has been demonstrated by the authors using a methodology based on the assumption that any change in measured brightness temperature and backscatter from one time step to the next is due primarily to change in soil wetness. The approach uses radiometric estimates of soil moisture at a lower resolution to compute the sensitivity of radar to soil moisture at the lower resolution. This estimate of sensitivity is then disaggregated using vegetation water content, vegetation type and soil texture information, which are the variables that determine the radar sensitivity to soil moisture and are generally available at the scale of the radar observation. This change detection algorithm is applied to several locations. We have used aircraft-observed active and passive data over the Walnut Creek watershed in Central Iowa in 2002, the Little Washita watershed in Oklahoma in 2003, and the Murrumbidgee Catchment in southeastern Australia for 2006. All of these locations have different soil and land cover conditions, which leads to a rigorous test of the disaggregation algorithm. Furthermore, we compare the derived high spatial resolution soil moisture to in-situ sampling and ground observation networks
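
    A minimal sketch of the change-detection idea described above is given below, assuming hypothetical variable names: the coarse-scale radiometer soil moisture fixes the radar sensitivity, which is then applied to the change in high-resolution backscatter. The full algorithm additionally redistributes that sensitivity using vegetation water content, vegetation type and soil texture, which this sketch omits.

```python
# Sketch of the change-detection disaggregation idea in the abstract
# (simplified; the actual algorithm further adjusts the sensitivity with
# vegetation water content, vegetation type, and soil texture maps).
import numpy as np

def radar_sensitivity(sigma_coarse_t1, sigma_coarse_t2, sm_coarse_t1, sm_coarse_t2):
    """Sensitivity of backscatter (dB) to soil moisture at the coarse radiometer scale."""
    return (sigma_coarse_t2 - sigma_coarse_t1) / (sm_coarse_t2 - sm_coarse_t1)

def disaggregate(sm_fine_t1, sigma_fine_t1, sigma_fine_t2, beta):
    """Update high-resolution soil moisture from the change in fine-scale backscatter."""
    return sm_fine_t1 + (sigma_fine_t2 - sigma_fine_t1) / beta
```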

  15. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  16. Alterations in Hemagglutinin Receptor-Binding Specificity Accompany the Emergence of Highly Pathogenic Avian Influenza Viruses

    PubMed Central

    Mochalova, Larisa; Harder, Timm; Tuzikov, Alexander; Bovin, Nicolai; Wolff, Thorsten; Matrosovich, Mikhail; Schweiger, Brunhilde

    2015-01-01

    ABSTRACT Highly pathogenic avian influenza viruses (HPAIVs) of hemagglutinin H5 and H7 subtypes emerge after introduction of low-pathogenic avian influenza viruses (LPAIVs) from wild birds into poultry flocks, followed by subsequent circulation and evolution. The acquisition of multiple basic amino acids at the endoproteolytical cleavage site of the hemagglutinin (HA) is a molecular indicator for high pathogenicity, at least for infections of gallinaceous poultry. Apart from the well-studied significance of the multibasic HA cleavage site, there is only limited knowledge on other alterations in the HA and neuraminidase (NA) molecules associated with changes in tropism during the emergence of HPAIVs from LPAIVs. We hypothesized that changes in tropism may require alterations of the sialyloligosaccharide specificities of HA and NA. To test this hypothesis, we compared a number of LPAIVs and HPAIVs for their HA-mediated binding and NA-mediated desialylation of a set of synthetic receptor analogs, namely, α2-3-sialylated oligosaccharides. NA substrate specificity correlated with structural groups of NAs and did not correlate with pathogenic potential of the virus. In contrast, all HPAIVs differed from LPAIVs by a higher HA receptor-binding affinity toward the trisaccharides Neu5Acα2-3Galβ1-4GlcNAcβ (3′SLN) and Neu5Acα2-3Galβ1-3GlcNAcβ (SiaLec) and by the ability to discriminate between the nonfucosylated and fucosylated sialyloligosaccharides 3′SLN and Neu5Acα2-3Galβ1-4(Fucα1-3)GlcNAcβ (SiaLex), respectively. These results suggest that alteration of the receptor-binding specificity accompanies emergence of the HPAIVs from their low-pathogenic precursors. IMPORTANCE Here, we have found for the first time correlations of receptor-binding properties of the HA with a highly pathogenic phenotype of poultry viruses. Our study suggests that enhanced receptor-binding affinity of HPAIVs for a typical “poultry-like” receptor, 3′SLN, is provided by

  17. Application of high-resolution direction-finding algorithms to circular arrays with mutual coupling present, part 2

    NASA Astrophysics Data System (ADS)

    Litva, John; Zeytinoglu, Mehmet

    1990-07-01

    A study of the effects of mutual coupling on the performance of direction-finding algorithms is presented. The MUltiple SIgnal Classification (MUSIC) and maximum likelihood (ML) algorithms resolved non-coherent and coherent incident wave fields with relative ease when ideal array response models were used. When array output vectors were simulated using free space and lossy ground models which include mutual coupling effects, the performance of the unmodified direction finding algorithms declined sharply. In particular, estimates from unmodified algorithms exhibited a constant, deterministic bias term. Algorithms modified for this bias term displayed a significant residual bias in attempts to resolve incident wave fields with closely spaced source signals. Array steering vectors are the main components through which the array models are coupled. Free space and lossy ground response models that take mutual coupling effects into consideration were modified by assembling array steering vectors directly from simulated raw data. These modified algorithms exhibited exemplary performance in resolving fields from closely spaced source signals when the array output vectors were derived from the same free space or lossy ground models. The MUSIC algorithm permitted array steering vectors modified with free space response data to be used with the array output derived for the lossy ground response model. However, it was applicable only to non-coherent source signals. The ML algorithm performed equally well with non-correlated and coherent source signals, but it had to be perfectly matched with the actual array response. Computation requirements were very demanding for the ML algorithm.
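
    To make the "ideal array response model" concrete, the sketch below computes a textbook MUSIC pseudospectrum for an ideal uniform circular array with no mutual coupling; the geometry and function names are assumptions, and the modified algorithms discussed above differ precisely in that the steering vectors are assembled from simulated coupled-array data rather than from this analytic form.

```python
# Minimal MUSIC pseudospectrum for an ideal uniform circular array (UCA),
# i.e., the "ideal array response" case; mutual coupling is not modeled.
import numpy as np

def uca_steering(theta, m_elems, radius_wavelengths):
    """Ideal azimuthal steering vector of an M-element circular array."""
    phi = 2 * np.pi * np.arange(m_elems) / m_elems   # element angular positions
    return np.exp(2j * np.pi * radius_wavelengths * np.cos(theta - phi))

def music_spectrum(snapshots, n_sources, radius_wavelengths, grid):
    """snapshots: (M, N) complex array outputs; grid: candidate azimuths (rad)."""
    m_elems = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
    En = eigvecs[:, : m_elems - n_sources]                    # noise subspace
    p = []
    for theta in grid:
        a = uca_steering(theta, m_elems, radius_wavelengths)
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)
```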

  18. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is
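
    As a toy illustration of the spec-to-code idea, a tabular step description can be translated mechanically into controller code; the row format, keywords, and IEC 61131-3-style structured-text output below are invented for illustration and do not reflect the actual LCS tabular spec, its DSL, or its generated ladder logic.

```python
# Toy illustration of spec-to-controller-code translation. The row format,
# keywords, and output below are invented and do not reflect the LCS tool.
STEPS = [
    {"step": 1, "verify": "TANK_PRESS < 35.0", "then": "OPEN VALVE_A"},
    {"step": 2, "verify": "VALVE_A_OPEN",      "then": "START PUMP_1"},
]

def emit_structured_text(steps):
    lines = []
    for s in steps:
        action, device = s["then"].split()
        value = "TRUE" if action in ("OPEN", "START") else "FALSE"
        lines.append(f"(* Step {s['step']} *)")
        lines.append(f"IF {s['verify']} THEN")
        lines.append(f"    {device} := {value};")
        lines.append("END_IF;")
    return "\n".join(lines)

print(emit_structured_text(STEPS))
```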

  19. Domain Specific Changes in Cognition at High Altitude and Its Correlation with Hyperhomocysteinemia

    PubMed Central

    Sharma, Vijay K.; Das, Saroj K.; Dhar, Priyanka; Hota, Kalpana B.; Mahapatra, Bidhu B.; Vashishtha, Vivek; Kumar, Ashish; Hota, Sunil K.; Norboo, Tsering; Srivastava, Ravi B.

    2014-01-01

    Though acute exposure to hypobaric hypoxia is reported to impair cognitive performance, the effects of prolonged exposure on different cognitive domains have been less studied. The present study aimed at investigating the time-dependent changes in cognitive performance on prolonged stay at high altitude and their correlation with electroencephalogram (EEG) and plasma homocysteine. The study was conducted on 761 male volunteers aged 25–35 years who had never been to high altitude, and baseline data pertaining to domain-specific cognitive performance, EEG and homocysteine were acquired at an altitude ≤240 m mean sea level (MSL). The volunteers were inducted to an altitude of 4200–4600 m MSL and longitudinal follow-ups were conducted after 3, 12 and 18 months. Neuropsychological assessment was performed for mild cognitive impairment (MCI), attention, information processing rate, visuo-spatial cognition and executive functioning. Total homocysteine (tHcy), vitamin B12 and folic acid were estimated. The Mini Mental State Examination (MMSE) showed a temporal increase in the percentage prevalence of MCI from 8.17% at 3 months of stay at high altitude to 18.54% at 18 months of stay. Impairment in the visuo-spatial executive, attention, delayed recall and procedural memory related cognitive domains was detected following prolonged stay at high altitude. An increase in alpha wave amplitude in the T3, T4 and C3 regions was observed during the follow-ups, which was inversely correlated (r = −0.68) with MMSE scores. The tHcy increased proportionately with duration of stay at high altitude and was correlated with MCI. No change in vitamin B12 and folic acid was observed. Our findings suggest that cognitive impairment is progressively associated with duration of stay at high altitude and is correlated with elevated tHcy in the plasma. Moreover, progressive MCI at high altitude occurs despite acclimatization and is independent of vitamin B12 and folic acid. PMID:24988417

  20. Solid-phase reaction synthesis of mesostructured tungsten disulfide material with a high specific surface area

    SciTech Connect

    An, Gaojun; Lu, Changbo; Xiong, Chunhua

    2011-09-15

    Highlights: WS₂ material was synthesized through a solid-phase reaction; (NH₄)₂WS₄ served as precursor and n-octadecylamine as template; the WS₂ material has a high specific surface area (145.9 m²/g); the whole preparation process is simple, convenient, green and clean. -- Abstract: A mesostructured tungsten disulfide (WS₂) material was prepared through a solid-phase reaction utilizing ammonium tetrathiotungstate as the precursor and n-octadecylamine as the template. The as-synthesized WS₂ material was characterized by X-ray Powder Diffraction (XRD), low-temperature N₂ adsorption (BET method), Scanning Electron Microscopy (SEM), and Transmission Electron Microscopy (TEM). The characterization results indicate that the WS₂ material has the typical mesopore structure (3.7 nm) with a high specific surface area (145.9 m²/g) and large pore volume (0.18 cm³/g). This approach is novel, green and convenient. The plausible mechanism for the formation of the mesostructured WS₂ material is discussed herein.