Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms
Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas
2016-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to data processing and to understanding how to integrate the collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them are suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific probe binding in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks, or comparison with additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
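Comparison-based testing of this kind often reduces to measuring the overlap between the reaction sets of two reconstructed models. A minimal sketch of such a similarity check follows; the reaction identifiers are invented for illustration and do not come from the review:

```python
def jaccard(model_a, model_b):
    """Jaccard index between two models, each given as a set of reaction IDs."""
    a, b = set(model_a), set(model_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical reaction sets from two context-specific reconstructions
liver = {"R_HEX1", "R_PFK", "R_PYK", "R_G6PDH"}
muscle = {"R_HEX1", "R_PFK", "R_PYK", "R_LDH"}
similarity = jaccard(liver, muscle)
```

A consistency test would apply the same measure to models built from noisy variants of one input set, expecting high similarity; a tissue-distinctness test expects low similarity across tissues.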
A proteomics search algorithm specifically designed for high-resolution tandem mass spectra.
Wenger, Craig D; Coon, Joshua J
2013-03-01
The acquisition of high-resolution tandem mass spectra (MS/MS) is becoming more prevalent in proteomics, but most researchers employ peptide identification algorithms that were designed prior to this development. Here, we demonstrate new software, Morpheus, designed specifically for high-mass accuracy data, based on a simple score that is little more than the number of matching products. For a diverse collection of data sets from a variety of organisms (E. coli, yeast, human) acquired on a variety of instruments (quadrupole-time-of-flight, ion trap-orbitrap, and quadrupole-orbitrap) in different laboratories, Morpheus gives more spectrum, peptide, and protein identifications at a 1% false discovery rate (FDR) than Mascot, Open Mass Spectrometry Search Algorithm (OMSSA), and Sequest. Additionally, Morpheus is 1.5 to 4.6 times faster, depending on the data set, than the next fastest algorithm, OMSSA. Morpheus was developed in C# .NET and is available free and open source under a permissive license.
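A score built around the number of matching products can be sketched in a few lines. This is a hedged illustration of the idea, not the actual Morpheus implementation; the peak lists and tolerance below are invented:

```python
def morpheus_like_score(theoretical_mz, observed_peaks, tol=0.01):
    """Count theoretical product ions matched by an observed peak within
    `tol` (Th), plus the fraction of total observed intensity matched.
    A simplified matching-product-count score, not the Morpheus code."""
    matched = 0
    matched_intensity = 0.0
    total_intensity = sum(i for _, i in observed_peaks) or 1.0
    for mz in theoretical_mz:
        hits = [i for m, i in observed_peaks if abs(m - mz) <= tol]
        if hits:
            matched += 1
            matched_intensity += max(hits)
    return matched + matched_intensity / total_intensity

# Toy spectrum: two of three theoretical fragments are matched
score = morpheus_like_score([100.0, 200.0, 300.0],
                            [(100.005, 50.0), (200.2, 30.0), (300.0, 20.0)])
```

The intensity fraction serves only as a tie-breaker between candidates with the same match count, which is what makes such a score well suited to high-mass-accuracy data.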
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described, and the computer resources required for the entire altimeter processing system are estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
[siRNAs with high specificity to the target: a systematic design by CRM algorithm].
Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A
2008-01-01
The 'off-target' silencing effect hinders the development of siRNA-based therapeutic and research applications. A common solution to this problem is to employ BLAST, which may miss significant alignments, or the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g., transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates can be further analyzed by traditional "set-of-rules" types of siRNA design tools. For human, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on a collection of previously described, experimentally assessed siRNAs and found that the correlation between efficacy and presence in the CRM-approved set is significant (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of CRM-based filtering minimizes potential 'off-target' silencing effects and could improve routine siRNA applications.
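The core mapping step, finding short subsequences that occur in exactly one transcript, can be sketched as follows. The sequences are toy examples and the test uses a much smaller k than CRM's 9-15 nt range; this is a sketch of the idea, not the CRM implementation:

```python
from collections import defaultdict

def unique_kmers(transcripts, k=15):
    """Map each k-mer to the transcripts containing it, then keep k-mers
    found in exactly one transcript ('targets' in CRM terms).
    transcripts: dict of {name: sequence}."""
    owners = defaultdict(set)
    for name, seq in transcripts.items():
        for i in range(len(seq) - k + 1):
            owners[seq[i:i + k]].add(name)
    result = defaultdict(list)
    for kmer, names in owners.items():
        if len(names) == 1:
            result[next(iter(names))].append(kmer)
    return dict(result)
```

k-mers shared between transcripts are discarded, which is exactly the redundancy-minimization that suppresses off-target candidates.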
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
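The two processing levels can be caricatured in a few lines. The calibration constants are invented, and the range relation is the generic radar-altimeter equation rather than the documented NOSS algorithms:

```python
C = 299_792_458.0  # speed of light, m/s

def level1_engineering_units(raw_counts, scale, offset):
    """Level 1: convert raw telemetry counts to engineering units with a
    per-instrument calibration (scale/offset values here are invented)."""
    return [c * scale + offset for c in raw_counts]

def level2_altitude(two_way_travel_time_s, corrections_m=0.0):
    """Level 2: a geophysical quantity derived from altimeter parameters --
    the satellite-to-surface range from the two-way travel time, minus
    path-delay corrections (a textbook relation, not the NOSS algorithm)."""
    return C * two_way_travel_time_s / 2.0 - corrections_m
```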
Schüpbach, Jörg; Bisset, Leslie R; Regenass, Stephan; Bürgisser, Philippe; Gorgievski, Meri; Steffen, Ingrid; Andreutti, Corinne; Martinetti, Gladys; Shah, Cyril; Yerly, Sabine; Klimkait, Thomas; Gebhardt, Martin; Schöni-Affolter, Franziska; Rickenbach, Martin; Barth, J; Battegay, M; Bernascon, E; Böni, J; Bucher, H C; Bürgisser, P; Burton-Jeangros, C; Calmy, A; Cavassini, M; Dubs, R; Egger, M; Elzi, L; Fehr, J; Fischer, M; Flepp, M; Francioli, P; Furrer, H; Fux, C A; Gorgievski, M; Günthard, H; Hasse, B; Hirsch, H H; Hirschel, B; Hösli, I; Kahlert, C; Kaiser, L; Keiser, O; Kind, C; Klimkait, T; Kovari, H; Ledergerber, B; Martinetti, G; Martinez de Tejada, B; Müller, N; Nadal, D; Pantaleo, G; Rauch, A; Regenass, S; Rickenbach, M; Rudin, C; Schmid, P; Schultze, D; Schöni-Affolter, F; Schüpbach, J; Speck, R; Taffé, P; Telenti, A; Trkola, A; Vernazza, P; von Wyl, V; Weber, R; Yerly, S
2011-09-26
Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. Plasma samples of 714 selected patients of the Swiss HIV Cohort Study infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection were tested blindly by Inno-Lia and classified as either incident (up to 12 m) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by a HIV-1 non-B clade. Using logistic regression analysis we evaluated factors that might affect the specificity of these algorithms. HIV-1 RNA < 50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥ 50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities exhibited no significance. Results were similar among 190 untreated patients. The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
Murungi, Moses; Fulton, Travis; Reyes, Raquel; Matte, Michael; Ntaro, Moses; Mulogo, Edgar; Nyehangane, Dan; Juliano, Jonathan J; Siedner, Mark J; Boum, Yap; Boyce, Ross M
2017-05-01
Poor specificity may negatively impact rapid diagnostic test (RDT)-based diagnostic strategies for malaria. We performed real-time PCR on a subset of subjects who had undergone diagnostic testing with a multiple-antigen (histidine-rich protein 2 and pan-lactate dehydrogenase [HRP2/pLDH]) RDT and microscopy. We determined the sensitivity and specificity of the RDT in comparison to PCR for the detection of Plasmodium falciparum malaria. We developed and evaluated a two-step algorithm utilizing the multiple-antigen RDT to screen patients, followed by confirmatory microscopy for those individuals with HRP2-positive (HRP2+)/pLDH-negative (pLDH-) results. In total, dried blood spots (DBS) were collected from 276 individuals. There were 124 (44.9%) individuals with an HRP2+/pLDH+ result, 94 (34.1%) with an HRP2+/pLDH- result, and 58 (21.0%) with a negative RDT result. The sensitivity and specificity of the RDT compared to real-time PCR were 99.4% (95% confidence interval [CI], 95.9 to 100.0%) and 46.7% (95% CI, 37.7 to 55.9%), respectively. Of the 94 HRP2+/pLDH- results, only 32 (34.0%) and 35 (37.2%) were positive by microscopy and PCR, respectively. The sensitivity and specificity of the two-step algorithm compared to real-time PCR were 95.5% (95% CI, 90.5 to 98.0%) and 91.0% (95% CI, 84.1 to 95.2%), respectively. HRP2 antigen bands demonstrated poor specificity for the diagnosis of malaria compared to real-time PCR in a high-transmission setting. The most likely explanation for this finding is the persistence of HRP2 antigenemia following treatment of an acute infection. The two-step diagnostic algorithm utilizing microscopy as a confirmatory test for indeterminate HRP2+/pLDH- results showed significantly improved specificity with little loss of sensitivity in a high-transmission setting. Copyright © 2017 American Society for Microbiology.
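The two-step algorithm evaluated above reduces to a simple decision rule. A schematic sketch (the boolean encoding of results is ours, not the study's):

```python
def two_step_diagnosis(hrp2_positive, pldh_positive, microscopy_positive=None):
    """Two-step algorithm: treat concordant RDT results as final; for
    discordant HRP2+/pLDH- results, defer to confirmatory microscopy.
    Returns True (treat as malaria) or False (treat as negative)."""
    if hrp2_positive and pldh_positive:
        return True                      # concordant positive RDT
    if not hrp2_positive:
        return False                     # RDT negative
    # HRP2+/pLDH-: possible persistent antigenemia -> confirm by microscopy
    if microscopy_positive is None:
        raise ValueError("microscopy result required for HRP2+/pLDH- cases")
    return microscopy_positive
```

Routing only the discordant band pattern to microscopy is what recovers specificity while keeping nearly all of the RDT's sensitivity.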
Specification and Design Methodologies for High-Speed Fault-Tolerant Array Structures for VLSI
Ercegovac, Milo D.; et al.
1987-06-01
California University, Los Angeles, Department of Computer Science. Office of Naval Research Contract No. N00014-83-K-0493. Principal Investigator: Milo D. Ercegovac. (No abstract recovered; only the report cover page survives.)
High performance FDTD algorithm for GPGPU supercomputers
NASA Astrophysics Data System (ADS)
Zakirov, Andrey; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari
2016-10-01
An implementation of the FDTD method for the solution of optical and other electrodynamic problems of high computational cost is described. The implementation is based on the LRnLA algorithm DiamondTorre, which is developed specifically for GPGPU hardware. The specifics of the DiamondTorre algorithm for the staggered grid (Yee cell) and for many-GPU devices are shown. The algorithm is implemented in software for real physics calculations. The software performance is estimated from the algorithm's parameters and a computer model. The real performance is tested on one GPU device, as well as on a many-GPU cluster. A performance of up to 0.65×10¹² cell updates per second is achieved for a 3D domain with 0.3×10¹² Yee cells in total.
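For readers unfamiliar with the underlying stencil, a minimal 1-D Yee-grid FDTD update is sketched below in normalized units. It shows only the leapfrog update that LRnLA algorithms such as DiamondTorre reorder for cache and GPU locality; it is not DiamondTorre itself, and the grid size and source are invented:

```python
import math

def fdtd_1d(steps, n=200, src=100):
    """Minimal 1-D FDTD on a staggered (Yee) grid, normalized units,
    Courant number 0.5. E and H are updated in leapfrog fashion and a
    soft Gaussian source drives the field at cell `src`."""
    E = [0.0] * n
    H = [0.0] * (n - 1)
    c = 0.5  # Courant number, below the 1-D stability limit
    for t in range(steps):
        for i in range(n - 1):
            H[i] += c * (E[i + 1] - E[i])
        for i in range(1, n - 1):
            E[i] += c * (H[i] - H[i - 1])
        E[src] += math.exp(-((t - 30) / 10.0) ** 2)
    return E
```

The data dependencies between the two half-steps are exactly what makes naive traversal bandwidth-bound and motivates the reordered traversals measured in the paper.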
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
A high performance graphics algorithm
North, M.J.; Zhao, S.
1994-07-01
Rendering images of complex collections of general objects in multidimensional space is typically very time consuming. The time to display such images is often a linear function of the number of objects being considered. This behavior can become unacceptable when the number of objects is large. This paper describes an algorithm to address this problem. The graphics algorithm presented here is composed of a storage algorithm and a selection algorithm. Both of these components are described in relation to a specialized tree data structure. The storage algorithm is proven to exhibit O(n log n) average time behavior and the selection algorithm is proven to exhibit O(log n) average time behavior. This paper focuses on two-dimensional images, but natural extensions of the basic algorithms to higher-dimensional spaces are also considered.
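The storage/selection split can be illustrated with a one-dimensional stand-in: sort once up front, then answer each viewport query by binary search. This conveys the flavor of the complexity trade-off only; the paper's structure is a specialized tree, not the structure shown here:

```python
import bisect

class IntervalIndex:
    """Objects sorted by x-coordinate; selecting those inside a viewport
    costs O(log n + k) via binary search instead of scanning all n objects.
    A 1-D stand-in for the paper's tree-based storage/selection pair."""
    def __init__(self, objects):            # objects: [(x, payload), ...]
        self._objs = sorted(objects)        # O(n log n) storage step
        self._xs = [x for x, _ in self._objs]

    def select(self, x_min, x_max):         # O(log n + k) selection step
        lo = bisect.bisect_left(self._xs, x_min)
        hi = bisect.bisect_right(self._xs, x_max)
        return [p for _, p in self._objs[lo:hi]]
```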
Specific optimization of genetic algorithm on special algebras
NASA Astrophysics Data System (ADS)
Habiballa, Hashim; Novak, Vilem; Dyba, Martin; Schenk, Jiri
2016-06-01
Searching for complex finite algebras can be done successfully by means of a genetic algorithm, as we showed in earlier work. This genetic algorithm needs specific optimization of crossover and mutation. We present details about these optimizations, which are already implemented in the software application for this task, EQCreator.
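The kind of crossover and mutation being optimized can be sketched on a toy problem: evolving an n×n operation table toward a commutative and idempotent binary operation. Everything below (axioms, operators, parameters) is an illustrative stand-in, not the EQCreator implementation:

```python
import random

def fitness(table, n):
    """Count satisfied axiom instances: x*x = x and x*y = y*x."""
    score = 0
    for x in range(n):
        score += table[x][x] == x
        for y in range(x + 1, n):
            score += table[x][y] == table[y][x]
    return score

def crossover(a, b, n, rng):
    """Row-wise crossover: each row of the child comes from one parent."""
    return [list((a if rng.random() < 0.5 else b)[i]) for i in range(n)]

def mutate(table, n, rng, rate=0.1):
    """Point mutation: overwrite random cells with random elements."""
    for i in range(n):
        for j in range(n):
            if rng.random() < rate:
                table[i][j] = rng.randrange(n)

def evolve(n=4, pop=30, gens=300, seed=1):
    rng = random.Random(seed)
    target = n + n * (n - 1) // 2          # score when every axiom holds
    population = [[[rng.randrange(n) for _ in range(n)] for _ in range(n)]
                  for _ in range(pop)]
    best, best_fit = None, -1
    for _ in range(gens):
        population.sort(key=lambda t: fitness(t, n), reverse=True)
        if fitness(population[0], n) > best_fit:
            best_fit = fitness(population[0], n)
            best = [row[:] for row in population[0]]
        if best_fit == target:
            break
        parents = population[:pop // 3]    # elitist selection
        next_gen = parents[:]
        while len(next_gen) < pop:
            child = crossover(rng.choice(parents), rng.choice(parents), n, rng)
            mutate(child, n, rng)
            next_gen.append(child)
        population = next_gen
    return best, best_fit, target
```

Row-wise crossover respects the row structure of operation tables, which is the sort of representation-specific choice the abstract alludes to.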
Transculturalization of a diabetes-specific nutrition algorithm: Asian application.
Su, Hsiu-Yueh; Tsang, Man-Wo; Huang, Shih-Yi; Mechanick, Jeffrey I; Sheu, Wayne H-H; Marchetti, Albert
2012-04-01
The prevalence of type 2 diabetes (T2D) in Asia is growing at an alarming rate, posing significant clinical and economic risk to health care stakeholders. Commonly, Asian patients with T2D manifest a distinctive combination of characteristics that include earlier disease onset, distinct pathophysiology, syndrome of complications, and shorter life expectancy. Optimizing treatment outcomes for such patients requires a coordinated inclusive care plan and knowledgeable practitioners. Comprehensive management starts with medical nutrition therapy (MNT) in a broader lifestyle modification program. Implementing diabetes-specific MNT in Asia requires high-quality and transparent clinical practice guidelines (CPGs) that are regionally adapted for cultural, ethnic, and socioeconomic factors. Respected CPGs for nutrition and diabetes therapy are available from prestigious medical societies. For cost efficiency and effectiveness, health care authorities can select these CPGs for Asian implementation following abridgement and cultural adaptation that includes: defining nutrition therapy in meaningful ways, selecting lower cutoff values for healthy body mass indices and waist circumferences (WCs), identifying the dietary composition of MNT based on regional availability and preference, and expanding nutrition therapy for concomitant hypertension, dyslipidemia, overweight/obesity, and chronic kidney disease. An international task force of respected health care professionals has contributed to this process. To date, task force members have selected appropriate evidence-based CPGs and simplified them into an algorithm for diabetes-specific nutrition therapy. Following cultural adaptation, Asian and Asian-Indian versions of this algorithmic tool have emerged. The Asian version is presented in this report.
van Alewijk, Dirk; Kleter, Bernhard; Vent, Maarten; Delroisse, Jean-Marc; de Koning, Maurits; van Doorn, Leen-Jan; Quint, Wim; Colau, Brigitte
2013-04-01
Human papillomavirus (HPV) epidemiological and vaccine studies require highly sensitive HPV detection and genotyping systems. To improve HPV detection by PCR, the broad-spectrum L1-based SPF10 PCR DNA enzyme immunoassay (DEIA) LiPA system and a novel E6-based multiplex type-specific system (MPTS123) that uses Luminex xMAP technology were combined into a new testing algorithm. To evaluate this algorithm, cervical swabs (n = 860) and cervical biopsy specimens (n = 355) were tested, with a focus on HPV types detected by the MPTS123 assay (types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66, 68, 6, and 11). Among the HPV-positive samples, identifications of individual HPV genotypes were compared. When all MPTS123 targeted genotypes were considered together, good overall agreement was found (κ = 0.801, 95% confidence interval [CI], 0.784 to 0.818) with identification by SPF10 LiPA, but significantly more genotypes (P < 0.0001) were identified by the MPTS123 PCR Luminex assay, especially for HPV types 16, 35, 39, 45, 58, and 59. An alternative type-specific assay was evaluated that is based on detection of a limited number of HPV genotypes by type-specific PCR and a reverse hybridization assay (MPTS12 RHA). This assay showed results similar to those of the expanded MPTS123 Luminex assay. These results confirm the fact that broad-spectrum PCRs are hampered by type competition when multiple HPV genotypes are present in the same sample. Therefore, a testing algorithm combining the broad-spectrum PCR and a range of type-specific PCRs can offer a highly accurate method for the analysis of HPV infections and diminish the rate of false-negative results and may be particularly useful for epidemiological and vaccine studies.
Optimization of warfarin dose by population-specific pharmacogenomic algorithm.
Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K
2012-08-01
To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model with vitamin K intake and the cytochrome P450 2C9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating it with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51; warfarin resistant: 0.96 vs 0.77; warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). To conclude, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of a suboptimal dose.
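The shape of such a regression-based dose predictor can be sketched as follows. All coefficients below are invented placeholders for illustration; the published model's coefficients, covariate coding and exact functional form differ:

```python
def predicted_weekly_dose_mg(age, vkorc1_minus1639_AA, vkorc1_minus1639_GA,
                             cyp2c9_star2, cyp2c9_star3, vitamin_k_intake_mcg):
    """Multiple-linear-regression dose predictor in the spirit of a
    population-specific pharmacogenomic algorithm. Genotype arguments are
    0/1 indicators. Every coefficient here is hypothetical."""
    dose = 35.0                          # hypothetical baseline weekly dose
    dose -= 0.20 * age                   # older patients need less
    dose -= 10.0 * vkorc1_minus1639_AA   # variant homozygote: lower dose
    dose -= 5.0 * vkorc1_minus1639_GA    # heterozygote: intermediate
    dose -= 6.0 * cyp2c9_star2           # reduced-function CYP2C9 alleles
    dose -= 9.0 * cyp2c9_star3
    dose += 0.01 * vitamin_k_intake_mcg  # higher intake: higher requirement
    return max(dose, 5.0)                # floor at a minimal plausible dose
```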
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
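The procedure can be sketched generically: encode each candidate model specification as a binary inclusion vector, let ants sample specifications from per-parameter pheromones, and reinforce the best specification found. This is a generic ACO sketch with an invented toy objective, not the authors' SEM-specific implementation:

```python
import random

def aco_specification_search(fitness, n_params, ants=20, iters=50, seed=0,
                             evaporation=0.1):
    """ACO over binary inclusion vectors (include/exclude each candidate
    parameter). pheromone[i] is the probability of including parameter i;
    the best-so-far specification is reinforced each iteration."""
    rng = random.Random(seed)
    pheromone = [0.5] * n_params
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        colony = [[1 if rng.random() < pheromone[i] else 0
                   for i in range(n_params)] for _ in range(ants)]
        for spec in colony:
            f = fitness(spec)
            if f > best_fit:
                best, best_fit = spec[:], f
        for i in range(n_params):        # evaporate, then reinforce best
            pheromone[i] = ((1 - evaporation) * pheromone[i]
                            + evaporation * best[i])
    return best, best_fit

# Toy objective: the "true" specification includes parameters 0 and 2 only
def toy_fitness(spec):
    target = [1, 0, 1, 0, 0]
    return -sum((s - t) ** 2 for s, t in zip(spec, target))
```

In the SEM setting, the fitness would be a model-fit criterion computed after estimating the specified model.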
Molecular neuropsychology: creation of test-specific blood biomarker algorithms.
O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Cullum, C Munro; Weiner, Myron; Hall, James; Edwards, Melissa; Grammas, Paula; Wilhelmsen, Kirk; Doody, Rachelle; Diaz-Arrastia, Ramon
2014-01-01
Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance. A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 Alzheimer patients and 198 controls) from the Texas Alzheimer's Research and Care Consortium. The biomarker risk scores were significant predictors (p < 0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores and demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms. Our findings provide proof of concept for a novel area of scientific discovery, which we term 'molecular neuropsychology'. Copyright © 2013 S. Karger AG, Basel.
Multiscale high-order/low-order (HOLO) algorithms and applications
Chacon, Luis; Chen, Guangye; Knoll, Dana Alan; Newman, Christopher Kyle; Park, HyeongKae; Taitano, William; Willert, Jeff A.; Womeldorff, Geoffrey Alan
2016-11-11
Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
High rate pulse processing algorithms for microcalorimeters
Rabin, Michael; Hoover, Andrew S; Bacrania, Mnesh K; Tan, Hui; Breus, Dimitry; Henning, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Dorise, Bertrand; Ullom, Joel N
2009-01-01
It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics that we are currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, and thus achieving much higher output count rates than existing algorithms. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter' that is the dominant pulse processing algorithm in the cryogenic-detector community.
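As a point of reference for what such filters compute, the amplitude of an isolated pulse of known shape can be estimated by least-squares projection onto a template, which coincides with the optimal filter when the noise is white. A toy, noiseless sketch with an invented time constant; not the paper's real-time algorithms:

```python
import math

def fit_amplitude(pulse, template):
    """Least-squares amplitude: argmin over a of sum (pulse - a*template)^2,
    i.e. the projection <pulse, template> / <template, template>."""
    num = sum(p * t for p, t in zip(pulse, template))
    den = sum(t * t for t in template)
    return num / den

tau = 50.0                                   # invented decay time, in samples
template = [math.exp(-i / tau) for i in range(200)]
pulse = [3.7 * t for t in template]          # noiseless pulse of amplitude 3.7
amplitude = fit_amplitude(pulse, template)
```

Handling overlapping pulses, the hard part addressed in the paper, requires either subtracting fitted pulses or using shorter shaping filters, neither of which this sketch attempts.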
High specific heat superconducting composite
Steyert, Jr., William A.
1979-01-01
A composite superconductor formed from a high specific heat ceramic, such as gadolinium oxide or gadolinium-aluminum oxide, and a conventional metal conductor, such as copper or aluminum, which are insolubly mixed together to provide adiabatic stability in a superconducting mode of operation. The addition of a few percent of insoluble gadolinium-aluminum oxide powder or gadolinium oxide powder to copper increases the measured specific heat of the composite by one to two orders of magnitude below the 5 K level while maintaining the high thermal and electrical conductivity of the conventional metal conductor.
Specific PCR product primer design using memetic algorithm.
Yang, Cheng-Hong; Cheng, Yu-Huei; Chuang, Li-Yeh; Chang, Hsueh-Wei
2009-01-01
To provide feasible primer sets for performing a polymerase chain reaction (PCR) experiment, many primer design methods have been proposed. However, the majority of these methods require a relatively long time to obtain an optimal solution since large quantities of template DNA need to be analyzed. Furthermore, the designed primer sets usually do not provide a specific PCR product size. In recent years, evolutionary computation has been applied to PCR primer design and has yielded promising results. In this article, a memetic algorithm (MA) is proposed to solve primer design problems associated with providing a specific product size for PCR experiments. The MA is compared with a genetic algorithm (GA) using an accuracy formula to estimate the quality of the primer design and to test the running time. Overall, 50 accession nucleotide sequences were sampled for the comparison of the accuracy of the GA and MA for primer design. Five hundred runs of the GA and MA primer design were performed with PCR product lengths of 150-300 bps and 500-800 bps, and two different methods of calculating Tm were tested for each accession nucleotide sequence. A comparison of the accuracy results for the GA and MA primer design showed that the MA primer design yielded better results than the GA primer design. The results further indicate that the proposed method finds optimal or near-optimal primer sets and effective PCR products in a dry dock experiment. Related materials are available online at http://bio.kuas.edu.tw/ma-pd/.
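One ingredient of any primer-design fitness evaluation is the melting temperature Tm. As a point of intuition, the classic Wallace rule (2 degrees C per A/T base, 4 degrees C per G/C base) is one of the simple Tm formulas commonly used for short primers; whether it matches either of the two Tm methods tested in the paper is not stated here, and the example primer is made up.

```python
def wallace_tm(primer):
    """Wallace-rule melting temperature estimate for short primers:
    Tm = 2 * (A + T) + 4 * (G + C), in degrees Celsius."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

tm = wallace_tm("ATGCATGCAT")  # hypothetical 10-mer primer
```

A design algorithm would evaluate such a Tm estimate (among other constraints, such as GC content and product length) for every candidate primer pair in the population.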
Specification-based Error Recovery: Theory, Algorithms, and Usability
2013-02-01
The basis of the methodology is a view of the specification as a non-deterministic implementation, which may permit a high degree of non-determinism. The key insight is to use likely correct actions by an otherwise erroneous execution to prune the non-determinism. The approach was developed, optimized, and rigorously evaluated in this project, which leveraged the Alloy specification language and its SAT-based tool-set as an enabling technology.
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. First, theoretical simulation models covering laser emission and the pulse laser ranging algorithm are built and analyzed, and an improved pulse ranging algorithm is developed. This new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector, and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm, a fusion of the matched filter algorithm and the CFD algorithm, is implemented in an FPGA chip. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm with that of the matched filter algorithm and the CFD algorithm alone. The test analysis demonstrates that the laser ranging hardware system achieves high speed processing and high speed sampling data transmission, and that the improved algorithm achieves 0.3 m distance ranging precision, meeting the expected performance and consistent with the theoretical simulation.
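The fusion described above, matched filtering to locate the echo followed by constant-fraction timing on the leading edge, can be sketched as follows. The pulse shape, delay, and threshold fraction are invented for illustration and do not come from the paper's hardware.

```python
def matched_filter(signal, template):
    """Cross-correlate the sampled return with the emitted pulse shape;
    the correlation peak marks the round-trip delay in samples."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def cfd_crossing(pulse, fraction=0.5):
    """Constant fraction discrimination: interpolated index where the
    leading edge first reaches `fraction` of the peak amplitude."""
    threshold = fraction * max(pulse)
    for i in range(1, len(pulse)):
        if pulse[i - 1] < threshold <= pulse[i]:
            return (i - 1) + (threshold - pulse[i - 1]) / (pulse[i] - pulse[i - 1])
    return None

template = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]   # emitted pulse shape
echo = [0.0] * 30 + template + [0.0] * 63        # return delayed by 30 samples
corr = matched_filter(echo, template)
delay = corr.index(max(corr))                    # correlation peak at the true delay
```

In a real system, `delay` (in samples) would be converted to distance via the sample period and the speed of light, with CFD refining the timing to sub-sample precision independent of pulse amplitude.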
Generic algorithms for high performance scalable geocomputing
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g., threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system.
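The separation the abstract argues for, model code stating *what* local operation to apply while the library decides *how* to spread it over cores, can be caricatured in a few lines. This is a toy Python sketch (Fern itself is a C++ library), with all names invented; a real library would also handle halo exchange for spatial operations like convolution.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_local_op(grid, op, n_workers=4):
    """Apply a per-cell ('local') operation to a 2-D grid, distributing
    row blocks over a worker pool; the caller only supplies `op`."""
    def work(rows):
        return [[op(v) for v in row] for row in rows]

    n = len(grid)
    chunk = max(1, (n + n_workers - 1) // n_workers)
    blocks = [grid[i:i + chunk] for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(work, blocks))  # order-preserving
    out = []
    for block in results:
        out.extend(block)
    return out

result = parallel_local_op([[1, 2], [3, 4], [5, 6]], lambda v: v * 10,
                           n_workers=2)
```

The model developer writes only the lambda; the partitioning and scheduling live entirely inside the library routine, which is the design point the Fern authors describe.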
Qualls, Joseph; Russomanno, David J.
2011-01-01
The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081
High specific activity silicon-32
Phillips, D.R.; Brzezinski, M.A.
1996-06-11
A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidation state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.
Brambley, Michael R.; Katipamula, Srinivas
2006-10-06
Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.
Design specification for the whole-body algorithm
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.
1974-01-01
The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are: (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodin's model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating the response to stresses from CO2 inhalation, hypoxia, thermal environment, exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).
On constructing optimistic simulation algorithms for the discrete event system specification
Nutaro, James J
2008-01-01
This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
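To make the transformation concrete, here is a minimal sketch of a DEVS-style atomic model reduced to a single event-processing loop. The Time Warp machinery from the article (rollback, anti-messages, logical-process scheduling) is deliberately omitted, and the generator model itself is invented for illustration.

```python
class Generator:
    """Minimal DEVS-style atomic model: emits a 'tick' every `period`
    time units via its time-advance, output, and internal transition."""
    def __init__(self, period):
        self.period = period
        self.sigma = period          # time until the next internal event

    def time_advance(self):
        return self.sigma

    def output(self):
        return "tick"

    def internal_transition(self):
        self.sigma = self.period

def simulate(model, end_time):
    """Sequential event loop: advance to the next internal event,
    collect the output, then apply the state transition."""
    t, events = 0.0, []
    while True:
        t += model.time_advance()
        if t > end_time:
            break
        events.append((t, model.output()))   # output precedes transition
        model.internal_transition()
    return events

events = simulate(Generator(period=2.0), end_time=10.0)
```

In the article's setting, this `simulate` body is exactly the kind of event-processing procedure that gets wrapped in a Time Warp logical process, with state saved before each transition so it can be rolled back on a straggler message.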
GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) provide early warning of tornadic activity, and (3) accumulate a long-term database to track decadal changes in lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. The GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution.
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
NASA Technical Reports Server (NTRS)
Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. The best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
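The core loop of classical MPD, which MPD++ refines, fits in a few lines: correlate, pick the strongest atom, subtract, repeat. The dictionary below is a trivial orthonormal basis chosen so the toy decomposition terminates exactly; a realistic dictionary would be large and overcomplete, which is precisely the cost MPD++ attacks.

```python
def matching_pursuit(signal, dictionary, n_iters):
    """Greedy MPD: at each iteration pick the atom with the largest
    (absolute) correlation, subtract its contribution from the residual,
    and record (atom index, coefficient). Atoms are assumed unit-norm."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_iters):
        coeffs = [sum(r * a for r, a in zip(residual, atom))
                  for atom in dictionary]
        k = max(range(len(coeffs)), key=lambda i: abs(coeffs[i]))
        c = coeffs[k]
        residual = [r - c * a for r, a in zip(residual, dictionary[k])]
        decomposition.append((k, c))
    return decomposition, residual

dictionary = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
signal = [0.0, 3.0, 0.0, -2.0]
decomp, residual = matching_pursuit(signal, dictionary, n_iters=2)
```

The MPD++ refinements slot into this loop: correlation thresholds prune atoms before the `coeffs` scan, and Multiple Atom Extraction records several well-correlated atoms per pass instead of one.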
Using Genetic Algorithms to Converge on Molecules with Specific Properties
NASA Astrophysics Data System (ADS)
Foster, Stephen; Lindzey, Nathan; Rogers, Jon; West, Carl; Potter, Walt; Smith, Sean; Alexander, Steven
2007-10-01
Although it can be a straightforward matter to determine the properties of a molecule from its structure, the inverse problem is much more difficult. We have chosen to generate molecules by using a genetic algorithm, a computer simulation that models biological evolution and natural selection. By creating a population of randomly generated molecules, we can apply a process of selection, mutation, and recombination to ensure that the best members of the population (i.e. those molecules that possess many of the qualities we are looking for) survive, while the worst members of the population "die." The best members are then modified by random mutation and by "mating" with other molecules to produce "offspring." After many hundreds (or thousands) of iterations, one hopes that the population will get better and better; that is, that the properties of the individuals in the population will more and more closely match the properties we want.
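The select-mutate-recombine loop described above can be sketched generically. Here the "molecule" is just a bit string and the fitness is the toy OneMax objective (count of 1-bits), standing in for a real property calculation; population size, rates, and the elitist selection scheme are all illustrative choices, not the authors'.

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection: best half lives
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)        # "mating": one-point crossover
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]
            for i in range(length):                # random mutation
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for "molecule with the desired properties".
best = genetic_algorithm(fitness=sum, length=20)
```

Swapping in a real fitness function (a property calculation on an encoded molecular structure) turns this same loop into the search the authors describe.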
Dimensionality Reduction Particle Swarm Algorithm for High Dimensional Clustering
Cui, Xiaohui; ST Charles, Jesse Lee; Potok, Thomas E; Beaver, Justin M
2008-01-01
The Particle Swarm Optimization (PSO) clustering algorithm can generate more compact clustering results than the traditional K-means clustering algorithm. However, when clustering high dimensional datasets, the PSO clustering algorithm is notoriously slow because its computation cost increases exponentially with the size of the dataset dimension. Dimensionality reduction techniques offer solutions that both significantly improve the computation time, and yield reasonably accurate clustering results in high dimensional data analysis. In this paper, we introduce research that combines different dimensionality reduction techniques with the PSO clustering algorithm in order to reduce the complexity of high dimensional datasets and speed up the PSO clustering process. We report significant improvements in total runtime. Moreover, the clustering accuracy of the dimensionality reduction PSO clustering algorithm is comparable to the one that uses full dimension space.
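A minimal example of the dimensionality-reduction step is a Gaussian random projection, one common cheap technique for shrinking high-dimensional vectors before clustering; the paper's specific reduction methods may differ, and the PSO clusterer itself is not reproduced here.

```python
import random

def random_projection(data, out_dim, seed=0):
    """Project each high-dimensional point onto `out_dim` random Gaussian
    directions; approximate distance structure is preserved, so a
    clusterer (PSO, k-means, ...) can then run in the cheap low-
    dimensional space."""
    rng = random.Random(seed)
    in_dim = len(data[0])
    proj = [[rng.gauss(0, 1) for _ in range(in_dim)]
            for _ in range(out_dim)]
    return [[sum(p * x for p, x in zip(row, point)) for row in proj]
            for point in data]

points = [[1.0] * 50, [0.0] * 50]        # two 50-dimensional points
low = random_projection(points, out_dim=3)
```

Clustering then operates on `low` instead of the original 50-dimensional vectors, which is the source of the runtime savings the abstract reports.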
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
C-element: a new clustering algorithm to find high quality functional modules in PPI networks.
Ghasemi, Mahdieh; Rahgozar, Maseud; Bidkhori, Gholamreza; Masoudi-Nejad, Ali
2013-01-01
Graph clustering algorithms are widely used in the analysis of biological networks. Extracting functional modules in protein-protein interaction (PPI) networks is one such use. Most clustering algorithms that focus on finding functional modules try either to find clique-like subnetworks or to grow clusters starting from high-degree vertices as seeds. These algorithms do not distinguish between a biological network and any other network. In the current research, we present a new procedure to find functional modules in PPI networks. Our main idea is to model a biological concept and to use this concept for finding good functional modules in PPI networks. In order to evaluate the quality of the obtained clusters, we compared the results of our algorithm with those of some other widely used clustering algorithms on three high-throughput PPI networks from Saccharomyces cerevisiae, Homo sapiens, and Caenorhabditis elegans, as well as on some tissue-specific networks. Gene Ontology (GO) analyses were used to compare the results of the different algorithms. Each algorithm's result was then compared with GO-term-derived functional modules. We also analyzed the effect of using tissue-specific networks on the quality of the obtained clusters. The experimental results indicate that the new algorithm outperforms most of the others, and this improvement is more significant when tissue-specific networks are used.
Sputum smear negative pulmonary tuberculosis: sensitivity and specificity of diagnostic algorithm.
Swai, Hedwiga F; Mugusi, Ferdinand M; Mbwambo, Jessie K
2011-11-01
The diagnosis of pulmonary tuberculosis in patients with Human Immunodeficiency Virus (HIV) is complicated by the increased presence of sputum smear negative tuberculosis. Diagnosis of smear negative pulmonary tuberculosis is made by an algorithm recommended by the National Tuberculosis and Leprosy Programme that uses symptoms, signs and laboratory results. The objective of this study is to determine the sensitivity and specificity of the tuberculosis treatment algorithm used for the diagnosis of sputum smear negative pulmonary tuberculosis. A cross-section study with prospective enrollment of patients was conducted in Dar-es-Salaam, Tanzania. For patients with sputum smear negative results, sputum was sent for culture. All consenting recruited patients were counseled and tested for HIV. Patients were evaluated using the National Tuberculosis and Leprosy Programme guidelines, and those fulfilling the criteria of having active pulmonary tuberculosis were started on anti-tuberculosis therapy. Remaining patients were provided appropriate therapy. A chest X-ray, Mantoux test, and Full Blood Picture were done for each patient. The sensitivity and specificity of the recommended algorithm were calculated. Predictors of sputum culture positivity were determined using multivariate analysis. During the study, 467 subjects were enrolled. Of those, 318 (68.1%) were HIV positive, and 127 (27.2%) had sputum culture positive for Mycobacterium tuberculosis, of whom 66 (51.9%) were correctly treated with anti-tuberculosis drugs and 61 (48.1%) were missed and did not get anti-tuberculosis drugs. Of the 286 subjects with sputum culture negative, 107 (37.4%) were incorrectly treated with anti-tuberculosis drugs. The diagnostic algorithm for smear negative pulmonary tuberculosis had a sensitivity and specificity of 38.1% and 74.5% respectively. The presence of a dry cough, a high respiratory rate, a low eosinophil count, a mixed type of anaemia and presence of a cavity were found to be predictive of
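The two headline numbers above are standard confusion-matrix quantities. A minimal sketch with made-up counts (deliberately not the study's data, whose denominators follow the programme's own case definitions):

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = TP / (TP + FN): fraction of true cases detected.
    Specificity = TN / (TN + FP): fraction of non-cases correctly
    cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts, for illustration only.
sens, spec = sensitivity_specificity(tp=80, fn=20, fp=10, tn=90)
```

Evaluating a clinical algorithm against a culture gold standard, as this study does, amounts to filling in that 2x2 table and reading off the two ratios.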
High-Resolution Array with Prony, MUSIC, and ESPRIT Algorithms
1992-08-25
Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/FR/5324-92-9397 (AD-A255 514); approved for public release, distribution unlimited. This report examines the array high-resolution properties of three algorithms: the Prony algorithm, the MUSIC algorithm, and the ESPRIT algorithm. MUSIC has been much
A predictor-corrector guidance algorithm for use in high-energy aerobraking system studies
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Powell, Richard W.
1991-01-01
A three-degree-of-freedom predictor-corrector guidance algorithm has been developed specifically for use in high-energy aerobraking performance evaluations. The present study reports on both the development of this guidance algorithm and its application to the design of manned Mars aerobraking vehicles. Atmospheric simulations are performed to demonstrate the applicability of this algorithm and to evaluate the effect of atmospheric uncertainties upon the mission requirements. The off-nominal conditions simulated result from mispredictions of atmospheric density and aerodynamic characteristics. The guidance algorithm is also used to provide relief from the high deceleration levels typically encountered in a high-energy aerobraking mission profile. Through this analysis, bank-angle modulation is shown to be an effective means of providing deceleration relief. Furthermore, the capability of the guidance algorithm to manage off-nominal vehicle aerodynamic and atmospheric density variations is demonstrated.
High-speed scanning: an improved algorithm
NASA Astrophysics Data System (ADS)
Nachimuthu, A.; Hoang, Khoi
1995-10-01
In using machine vision for assessing an object's surface quality, many images must be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process, in the inspection of garments/canvas on the production line, and in the nesting of irregular shapes into a given surface. The most common method of subtracting the sum of defective areas from the total area does not give an acceptable indication of how much of the 'good' area can be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of useable areas within an inspected surface in terms of the user's definition, not the supplier's claims; that is, how much useable area the user can actually use, not the total good area as estimated by the supplier. An important application of the developed technique is in the leather industry, where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument due to disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading and re-sorting. The basic algorithm developed for area scanning of a digital image will be presented. The implementation of an improved scanning algorithm will be discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and the accurate estimation of useable areas.
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) images, the sensor cannot capture high quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude estimated transmission map into different areas and applies different guided filtering to the different areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filtering, while the average computation time of the new algorithm is around 40% of that algorithm's, and the detection ability of UAV images in fog and haze weather is improved effectively.
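The dark-channel-prior quantities the algorithm builds on can be sketched directly. Window size, omega, and the airlight value below are illustrative defaults, and the paper's edge-guided filtering refinement of the transmission map is not reproduced.

```python
def dark_channel(image, patch=3):
    """Per-pixel minimum over the RGB channels and a local patch window.
    `image` is a 2-D grid of (r, g, b) tuples in [0, 1]; haze-free
    regions tend to have a dark channel near zero."""
    h, w = len(image), len(image[0])
    dark = [[0.0] * w for _ in range(h)]
    r = patch // 2
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:  # skip out-of-range neighbors
                        vals.append(min(image[yy][xx]))
            dark[y][x] = min(vals)
    return dark

def transmission(dark, airlight=1.0, omega=0.95):
    """Crude transmission estimate t = 1 - omega * dark / A."""
    return [[1.0 - omega * d / airlight for d in row] for row in dark]

img = [[(0.0, 0.2, 0.4)] * 4 for _ in range(4)]   # toy haze-free patch
t = transmission(dark_channel(img))
```

The paper's contribution sits after this step: instead of one global guided filter, the crude transmission map is split along expanded edges and each region is refined separately.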
Spatially adaptive regularized iterative high-resolution image reconstruction algorithm
NASA Astrophysics Data System (ADS)
Lim, Won Bae; Park, Min K.; Kang, Moon Gi
2000-12-01
High resolution images are often required in applications such as remote sensing, frame freeze in video, and military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required, but such an array may be very costly or unavailable; thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is key to the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error; therefore, a reconstruction algorithm which is robust against registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least squares minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information.
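The non-adaptive core of a constrained least squares (CLS) reconstruction can be sketched as a simple regularized gradient iteration. The spatial adaptivity, down-sampling model, and motion registration described above are omitted, and all names and parameters here are illustrative.

```python
import numpy as np

def cls_restore(A, y, alpha=0.01, beta=0.5, iters=500):
    """Iterative CLS restoration: descend on ||y - A x||^2 + alpha ||L x||^2,
    where A models the blur and L is a discrete Laplacian that enforces
    smoothness in the reconstruction."""
    n = A.shape[1]
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    x = np.zeros(n)
    for _ in range(iters):
        x = x + beta * (A.T @ (y - A @ x) - alpha * L.T @ L @ x)
    return x
```

With a small step size the iterates converge to the solution of the regularized normal equations (A^T A + alpha L^T L) x = A^T y, which is the fixed point of the update.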
Separation methods for high specific activity radioarsenic
NASA Astrophysics Data System (ADS)
Jurisson, S. S.; Wycoff, D. E.; DeGraffenreid, A.; Embree, M. F.; Ketring, A. R.; Cutler, C. S.; Fassbender, M. E.; Ballard, B.
2012-12-01
Radiopharmaceuticals require the use of high specific activity radionuclides, especially when targeting limited numbers of receptors on tumor surfaces. Two radioisotopes of arsenic (72As and 77As) are potentially useful in diagnostic and therapeutic radiopharmaceuticals. Methods for the production, separation, and isolation of high specific activity 72As and 77As are presented.
On the importance of FIB-SEM specific segmentation algorithms for porous media
Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker
2014-09-15
A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation introduced in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis of the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images of a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.
Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration
Masalma, Yahya; Jiao, Yu
2010-10-01
We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol quasi-random sequences to generate samples. The Sobol sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested, and the obtained results prove the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to scalability and accuracy.
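A minimal serial sketch of Sobol-based quasi-Monte Carlo integration, here via SciPy's `scipy.stats.qmc` module rather than the authors' parallel implementation; the function name and defaults are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, dim, m=14, seed=0):
    """Estimate the integral of f over the unit cube [0, 1]^dim using
    2^m points of a scrambled Sobol low-discrepancy sequence."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    pts = sampler.random_base2(m=m)  # 2^m low-discrepancy samples
    return f(pts).mean()
```

For instance, the integral of sum(x_i^2) over [0, 1]^5 is 5/3, and the quasi-Monte Carlo estimate with 2^14 points lands very close to that value, illustrating the low-discrepancy coverage the abstract mentions.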
Comparison of the specificity of implantable dual chamber defibrillator detection algorithms.
Hintringer, Florian; Deibl, Martina; Berger, Thomas; Pachinger, Otmar; Roithinger, Franz Xaver
2004-07-01
The aim of the study was to compare the specificity of dual chamber ICD detection algorithms for correct classification of supraventricular tachyarrhythmias, as derived from clinical studies, and to detect an impact of sample size on the specificity. Furthermore, the study sought to compare the specificities of detection algorithms calculated from clinical data with the specificity calculated from simulations of tachyarrhythmias. A survey was conducted of all available sources providing data regarding the specificity of five dual chamber ICDs. The specificity was correlated with the number of patients included, the number of episodes, and the number of supraventricular tachyarrhythmias recorded. The simulation was performed using tachyarrhythmias recorded in the electrophysiology laboratory. The number of patients included in the studies ranged from 78 to 1,029, the total number of episodes recorded ranged from 362 to 5,788, and the number of supraventricular tachyarrhythmias used for calculation of the specificity for correct detection of these arrhythmias ranged from 100 (Biotronik) to 1,662 (Medtronic). The specificity for correct detection of supraventricular tachyarrhythmias was 90% (Biotronik), 89% (ELA Medical), 89% (Guidant), 68% (Medtronic), and 76% (St. Jude Medical). There was an inverse correlation (r = -0.9, P = 0.037) between the specificity for correct classification of supraventricular tachyarrhythmias and the number of patients. The specificity for correct detection of supraventricular tachyarrhythmias calculated from the simulation, after correction for the clinical prevalence of the simulated tachyarrhythmias, was 95% (Biotronik), 99% (ELA Medical), 94% (Guidant), 93% (Medtronic), and 92% (St. Jude Medical). In conclusion, the specificity of ICD detection algorithms calculated from clinical studies or registries may depend on the number of patients studied. Therefore, a direct comparison between different detection algorithms
Computing highly specific and mismatch tolerant oligomers efficiently.
Yamada, Tomoyuki; Morishita, Shinichi
2003-01-01
The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about twenty units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a sub-sequence other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of oligo-markers applicable to the human genome. Experimental results show that the algorithm runs faster than well-known Abrahamson's algorithm by orders of magnitude and is able to enumerate 63% to approximately 79% of qualified oligomers.
Computing highly specific and noise-tolerant oligomers efficiently.
Yamada, Tomoyuki; Morishita, Shinichi
2004-03-01
The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about 20 units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a substring other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of numerous oligo-markers applicable to the human genome. Experimental results show that the algorithm runs faster than well-known Abrahamson's algorithm by orders of magnitude and is able to enumerate approximately 65% to 76% of qualified oligomers.
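The mismatch tolerance defined above can be made concrete with a brute-force check: the minimum Hamming distance between an oligomer and every other equal-length substring of the genome. This quadratic-time version is only a reference point for the paper's far faster suffix/height-array algorithm.

```python
def mismatch_tolerance(genome, oligo, target_pos):
    """Minimum Hamming distance between `oligo` and any equal-length
    substring of `genome` other than the one at `target_pos`.
    Returns len(oligo) + 1 if no other substring exists."""
    k = len(oligo)
    best = k + 1
    for i in range(len(genome) - k + 1):
        if i == target_pos:
            continue  # skip the oligomer's own target site
        d = sum(a != b for a, b in zip(oligo, genome[i:i + k]))
        best = min(best, d)
    return best
```

An oligomer is "qualified" in the paper's sense when this value is no less than the chosen threshold, i.e. every off-target match needs at least that many mismatches.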
An algorithm for on-line detection of high frequency oscillations related to epilepsy.
López-Cuevas, Armando; Castillo-Toledo, Bernardino; Medina-Ceja, Laura; Ventura-Mejía, Consuelo; Pardo-Peña, Kenia
2013-06-01
Recent studies suggest that the appearance of signals with high frequency oscillation components in specific regions of the brain is related to the incidence of epilepsy. These oscillations are in general small in amplitude and short in duration, making them difficult to identify. The analysis of these oscillations is particularly important in epilepsy, and their study could lead to the development of better medical treatments. Therefore, the development of algorithms for the detection of these high frequency oscillations is of great importance. In this work, a new algorithm for the automatic detection of high frequency oscillations is presented. This algorithm uses approximate entropy and artificial neural networks to extract features in order to detect and classify high frequency components in electrophysiological signals. In contrast to existing algorithms, the one proposed here is fast and accurate, and can be implemented on-line, thus reducing the time employed to analyze the experimental electrophysiological signals.
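A compact implementation of the approximate entropy feature mentioned above; the neural-network classifier is not modeled, and m and r follow common defaults rather than the authors' settings.

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn): low for regular signals, higher for
    irregular ones. r defaults to 0.2 * std(x), a common choice."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def phi(mm):
        # For each length-mm template, the fraction of templates within
        # Chebyshev distance r of it (self-matches included).
        t = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.log((d <= r).mean(axis=1)).mean()

    return phi(m) - phi(m + 1)
```

A perfectly regular signal scores zero, and a sinusoid scores well below white noise, which is what makes ApEn useful for flagging short irregular bursts.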
Educational Specifications: University City High School.
ERIC Educational Resources Information Center
Philadelphia School District, PA.
Educational specifications are presented delineating instructional space requirements and relationships for a new high school in Philadelphia, Pennsylvania. These specifications comprise a set of written instructions from which the architect can derive a design concept compatible with current educational needs and adaptable to future changes in…
A fast directional algorithm for high-frequency electromagnetic scattering
Tsuji, Paul; Ying, Lexing
2011-06-20
This paper is concerned with the fast solution of high-frequency electromagnetic scattering problems using the boundary integral formulation. We extend the O(N log N) directional multilevel algorithm previously proposed for the acoustic scattering case to the vector electromagnetic case. We also detail how to incorporate the curl operator of the magnetic field integral equation into the algorithm. When combined with a standard iterative method, this results in an almost linear complexity solver for the combined field integral equations. In addition, the butterfly algorithm is utilized to compute the far field pattern and radar cross section with O(N log N) complexity.
Modified algorithm for generating high volume fraction sphere packings
NASA Astrophysics Data System (ADS)
Valera, Roberto Roselló; Morales, Irvin Pérez; Vanmaercke, Simon; Morfa, Carlos Recarey; Cortés, Lucía Argüelles; Casañas, Harold Díaz-Guzmán
2015-06-01
Advancing front packing algorithms have proven to be very efficient in 2D for obtaining high density sets of particles, especially disks. However, the extension of these algorithms to 3D is not a trivial task. In the present paper, an advancing front algorithm for obtaining highly dense sphere packings is presented. It is simpler than other advancing front packing methods in 3D and can also be used with other types of particles. Comparisons with other packing methods have been carried out, and a significant improvement in the volume fraction (VF) has been observed. Moreover, the quality of the packings was evaluated with indicators other than VF. As an additional advantage, the number of particles generated with the algorithm is linear with respect to time.
Chaotic substitution for highly autocorrelated data in encryption algorithm
NASA Astrophysics Data System (ADS)
Anees, Amir; Siddiqui, Adil Masood; Ahmed, Fawad
2014-09-01
This paper addresses the major drawback of the substitution-box in highly auto-correlated data and proposes a novel chaotic substitution technique for encryption algorithms to solve the problem. Simulation results reveal that the overall strength of the proposed technique for encryption is much greater than that of most existing encryption techniques. Furthermore, a few statistical security analyses have also been performed to show the strength of the proposed algorithm.
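As a toy illustration of deriving a substitution box from a chaotic map (not the authors' exact construction), the sketch below ranks a logistic-map orbit to obtain an invertible byte permutation; x0 and mu act as illustrative key parameters.

```python
import numpy as np

def logistic_sbox(x0=0.631, mu=3.99, burn=1000):
    """Build a byte substitution box from a logistic-map orbit: iterate
    x -> mu*x*(1-x), then rank the next 256 values (argsort) to obtain
    a key-dependent permutation of 0..255."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = mu * x * (1 - x)
    orbit = np.empty(256)
    for i in range(256):
        x = mu * x * (1 - x)
        orbit[i] = x
    return np.argsort(orbit).astype(np.uint8)

def substitute(data, sbox):
    """Apply the S-box to a byte string."""
    return sbox[np.frombuffer(bytes(data), dtype=np.uint8)]

def invert(data, sbox):
    """Invert the substitution using the inverse permutation."""
    inv = np.empty(256, dtype=np.uint8)
    inv[sbox] = np.arange(256, dtype=np.uint8)
    return inv[np.asarray(data, dtype=np.uint8)]
```

Because the orbit values are distinct, the argsort yields a true permutation, so the substitution is exactly invertible, a basic requirement for any such cipher component.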
Highly parallel consistent labeling algorithm suitable for optoelectronic implementation.
Marsden, G C; Kiamilev, F; Esener, S; Lee, S H
1991-01-10
Constraint satisfaction problems require a search through a large set of possibilities. Consistent labeling is a method by which search spaces can be drastically reduced. We present a highly parallel consistent labeling algorithm, which achieves strong k-consistency for any value k and which can include higher-order constraints. The algorithm uses vector outer product, matrix summation, and matrix intersection operations. These operations require local computation with global communication and, therefore, are well suited to an optoelectronic implementation.
Algorithms for high aspect ratio oriented triangulations
NASA Technical Reports Server (NTRS)
Posenau, Mary-Anne K.
1995-01-01
Grid generation plays an integral part in the solution of computational fluid dynamics problems for aerodynamics applications. A major difficulty with standard structured grid generation, which produces quadrilateral (or hexahedral) elements with implicit connectivity, has been the requirement for a great deal of human intervention in developing grids around complex configurations. This has led to investigations into unstructured grids with explicit connectivities, which are primarily composed of triangular (or tetrahedral) elements, although other subdivisions of convex cells may be used. The existence of large gradients in the solution of aerodynamic problems may be exploited to reduce the computational effort by using high aspect ratio elements in high gradient regions. However, the heuristic approaches currently in use do not adequately address this need for high aspect ratio unstructured grids. High aspect ratio triangulations very often produce the large angles that are to be avoided. Point generation techniques based on contour or front generation are judged to be the most promising in terms of being able to handle complicated multiple body objects, with this technique lending itself well to adaptivity. The eventual goal encompasses several phases: first, a partitioning phase, in which the Voronoi diagram of a set of points and line segments (the input set) will be generated to partition the input domain; second, a contour generation phase, in which body-conforming contours are used to subdivide the partition further and introduce the foundation for aspect ratio control; and third, a Steiner triangulation phase, in which points are added to the partition to enable triangulation while controlling angle bounds and aspect ratio. This provides a combination of the advancing front/contour techniques and refinement. By using a front, aspect ratio can be better controlled. By using refinement, bounds on angles can be maintained, while attempting to minimize
A High Precision Terahertz Wave Image Reconstruction Algorithm
Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang
2016-01-01
With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features array-element sources that are almost always treated as spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, whose features combine those of both classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in the simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269
A high capacity 3D steganography algorithm.
Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee
2009-01-01
In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models.
High-order hydrodynamic algorithms for exascale computing
Morgan, Nathaniel Ray
2016-02-05
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi- material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
NASA Astrophysics Data System (ADS)
e Silva, Ana Costa
2012-01-01
It is a commonly used evaluation strategy to run competing algorithms on a test dataset and state which performs better on average over the whole set. We call this generic evaluation. Although it is important, we believe this type of evaluation is incomplete. In this paper, we propose a methodology for algorithm comparison, which we call specific evaluation. This approach attempts to identify subsets of the data where one algorithm is better than the other. This allows not only knowing each algorithm's strengths and weaknesses better but also constitutes a simple way to develop a combination policy that enjoys the best of both. We apply specific evaluation to an experiment that aims at grouping pre-obtained table cells into columns; we demonstrate how it identifies a subset of data for which the on-average least good but faster algorithm is equivalent or better, and how it then manages to create a policy for combining the two competing table column delimitation algorithms.
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second refers to the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm applies the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Owing to the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
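The abundance-refinement half of such a scheme can be sketched as projected gradient descent under the non-negativity and sum-to-one restrictions mentioned above. This simplified version assumes the endmembers are already known and fixed, whereas the paper refines them jointly; all parameters are illustrative.

```python
import numpy as np

def unmix(pixels, endmembers, iters=500, step=0.1):
    """Estimate abundances for known endmembers by gradient descent on
    ||pixels - A @ endmembers||^2, projecting after each step so the
    abundances stay non-negative and each pixel's abundances sum to one."""
    n = pixels.shape[0]
    p = endmembers.shape[0]
    A = np.full((n, p), 1.0 / p)           # start from uniform abundances
    for _ in range(iters):
        grad = (A @ endmembers - pixels) @ endmembers.T
        A = np.clip(A - step * grad, 0.0, None)   # non-negativity
        A /= A.sum(axis=1, keepdims=True)          # sum-to-one
    return A
```

Every operation is a dense matrix product or an elementwise map, which is the property that makes this family of updates attractive for highly parallel hardware.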
Benefits Assessment of Algorithmically Combining Generic High Altitude Airspace Sectors
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod; Lai, Chok Fung; Kopardekar, Parimal
2009-01-01
In today's air traffic control operations, sectors that have traffic demand below capacity are combined so that fewer controller teams are required to manage air traffic. Controllers in current operations are certified to control a group of six to eight sectors, known as an area of specialization. Sector combinations are restricted to occur within areas of specialization. Since there are few sector combination possibilities in each area of specialization, human supervisors can effectively make sector combination decisions. In the future, automation and procedures will allow any appropriately trained controller to control any of a large set of generic sectors. The primary benefit of this will be increased controller staffing flexibility. Generic sectors will also allow more options for combining sectors, making sector combination decisions difficult for human supervisors. A sector-combining algorithm can assist supervisors as they make generic sector combination decisions. A heuristic algorithm for combining under-utilized airspace sectors to conserve air traffic control resources has been described and analyzed. Analysis of the algorithm and comparisons with operational sector combinations indicate that this algorithm could more efficiently utilize air traffic control resources than current sector combinations. This paper investigates the benefits of using the sector-combining algorithm proposed in previous research to combine high altitude generic airspace sectors. Simulations are conducted in which all the high altitude sectors in a center are allowed to combine, as will be possible in generic high altitude airspace. Furthermore, the algorithm is adjusted to use a version of the simplified dynamic density (SDD) workload metric that has been modified to account for workload reductions due to automatic handoffs and Automatic Dependent Surveillance Broadcast (ADS-B). This modified metric is referred to here as future simplified dynamic density (FSDD).
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction techniques, and design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
An incremental high-utility mining algorithm with transaction insertion.
Lin, Jerry Chun-Wei; Gan, Wensheng; Hong, Tzung-Pei; Zhang, Binbin
2015-01-01
Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications, since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle a static database. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.
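The utility measure being mined can be made concrete with a brute-force miner over a toy transaction database; real algorithms such as the utility-list approach above exist precisely to avoid this exponential enumeration.

```python
from itertools import combinations

def high_utility_itemsets(transactions, profits, min_util):
    """Brute-force high-utility itemset mining. An itemset's utility is
    quantity * unit profit, summed over every transaction containing all
    of its items; itemsets meeting min_util are returned."""
    items = sorted(profits)
    result = {}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            util = sum(
                sum(t[i] * profits[i] for i in itemset)
                for t in transactions
                if all(i in t for i in itemset)
            )
            if util >= min_util:
                result[itemset] = util
    return result
```

Note that utility, unlike support, is not anti-monotone: a superset can have higher utility than its subsets, which is why frequent-pattern pruning does not carry over directly.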
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equally sized sub-graphs while minimizing the number of edges cut, i.e., minimizing the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Studies in the literature have shown the problem to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators which include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another direction for future research.
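A minimal island-model sketch on a generic bit-string fitness function, with ring migration of each island's best individual; the fuzzy migration controller and the circuit-partitioning encoding described above are not modeled, and all rates are illustrative.

```python
import random

def island_ga(fitness, n_bits=32, islands=4, pop=20,
              generations=60, migrate_every=10, seed=1):
    """Toy island-model GA: each island evolves its own subpopulation
    (tournament selection, one-point crossover, bit-flip mutation) and
    periodically sends its best individual to the next island in a ring."""
    rng = random.Random(seed)
    pops = [[[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
            for _ in range(islands)]

    def evolve(p):
        new = []
        for _ in range(pop):
            a = max(rng.sample(p, 3), key=fitness)   # tournament of 3
            b = max(rng.sample(p, 3), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.05:                  # bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            new.append(child)
        return new

    for g in range(1, generations + 1):
        pops = [evolve(p) for p in pops]
        if g % migrate_every == 0:                   # ring migration
            bests = [max(p, key=fitness) for p in pops]
            for i in range(islands):
                pops[i][0] = bests[(i - 1) % islands]
    return max((ind for p in pops for ind in p), key=fitness)
```

Running it on the onemax objective (maximize the number of ones) shows the subpopulations converging toward the optimum while migration keeps good genes circulating.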
Production of high specific activity silicon-32
Phillips, D.R.; Brzezinski, M.A.
1998-12-31
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). There were two primary objectives for the work performed under this project. The first was to take advantage of capabilities and facilities at Los Alamos to produce the radionuclide 32Si in unusually high specific activity. The second was to combine the radioanalytical expertise at Los Alamos with the expertise at the University of California to develop methods for the application of 32Si in biological oceanographic research related to global climate modeling. The first objective was met by developing targetry for the proton spallation production of 32Si in KCl targets and chemistry for its recovery in very high specific activity. The second objective was met by developing a validated, field-useable radioanalytical technique, based upon gas-flow proportional counting, to measure the dynamics of silicon uptake by naturally occurring diatoms.
Development of High Specific Strength Envelope Materials
NASA Astrophysics Data System (ADS)
Komatsu, Keiji; Sano, Masa-Aki; Kakuta, Yoshiaki
Progress in materials technology has produced a much more durable synthetic fabric envelope for the non-rigid airship. Flexible materials are required to form airship envelopes, ballonets, load curtains, gas bags and coverings for rigid structures. Polybenzoxazole fiber (Zylon) and polyarylate fiber (Vectran) show high specific tensile strength, so we developed membranes using these high specific tensile strength fibers as load carriers. The main material developed is a Zylon or Vectran load carrier sealed internally with a polyurethane-bonded inner gas retention film (EVOH). The external surface provides weather protection with, for instance, a titanium oxide integrated polyurethane or Tedlar film. The mechanical test results show that a tensile strength of 1,000 N/cm is attained with a weight of less than 230 g/m2. In addition to the mechanical properties, the temperature dependence of the joint strength and the solar absorptivity and emissivity of the surface were measured.
Research on High-Specific-Heat Dielectrics
1990-01-31
The exceptionally high specific-heat peaks for ZnCr2O4 and CdCr2O4 are analyzed in terms of the microscopic interactions that determine the electric, magnetic, and thermodynamic properties of the system. This microscopic analysis further indicates that the thermodynamic properties of the lattice, such as the energy and the specific heat, are dominated by the properties of the cluster. [Remainder of abstract garbled in extraction.]
Stride search: A general algorithm for storm detection in high-resolution climate data
Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...
2016-04-13
This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
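The key property of Stride Search described above, i.e., search regions of constant physical size whose longitudinal spacing widens toward the poles, can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation; the radius and stride values are arbitrary.

```python
import math

def stride_search_centers(radius_km, earth_radius_km=6371.0, lat_limit=90.0):
    """Generate search-region centers spaced so that each circular region of
    fixed physical radius overlaps its neighbors, independent of the data
    set's grid resolution."""
    stride_deg = math.degrees(radius_km / earth_radius_km)  # latitude stride
    centers = []
    lat = -lat_limit + stride_deg / 2.0
    while lat <= lat_limit:
        # Widen the longitude stride toward the poles so the *physical*
        # spacing between region centers stays roughly constant.
        lon_stride = stride_deg / max(math.cos(math.radians(lat)), 1e-6)
        lon = 0.0
        while lon < 360.0:
            centers.append((lat, lon))
            lon += lon_stride
        lat += stride_deg
    return centers

centers = stride_search_centers(radius_km=500.0)
```

Because the number of candidate regions depends only on the chosen physical radius, not on the grid, the search cost stays flat as data resolution grows, which is consistent with the wall-clock advantage reported in the abstract.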
A DRAM compiler algorithm for high performance VLSI embedded memories
NASA Technical Reports Server (NTRS)
Eldin, A. G.
1992-01-01
In many applications, the limited density of embedded SRAM does not allow the memory to be integrated on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.
Wp specific methylation of highly proliferated LCLs
Park, Jung-Hoon; Jeon, Jae-Pil; Shim, Sung-Mi; Nam, Hye-Young; Kim, Joon-Woo; Han, Bok-Ghee; Lee, Suman . E-mail: suman@cha.ac.kr
2007-06-29
The epigenetic regulation of viral genes may be important for the life cycle of EBV. We determined the methylation status of three viral promoters (Wp, Cp, Qp) from EBV B-lymphoblastoid cell lines (LCLs) by pyrosequencing. Our pyrosequencing data showed that the CpG region of Wp was methylated, but the others were not. Interestingly, Wp methylation increased with proliferation of LCLs: it was as high as 74.9% in late-passage LCLs, but 25.6% in early-passage LCLs. Wp-specific hypermethylation (>80%) was also found in two Burkitt's lymphoma cell lines. Interestingly, the expression of the EBNA2 gene, which is located directly next to Wp, was associated with its methylation. Our data suggest that Wp-specific methylation may serve as an indicator of the proliferation status of LCLs, and that the epigenetic regulation of the EBNA2 gene by Wp should be further defined, possibly in connection with other biological processes.
High Performance Parallel Algorithms for Improved Reduced-Order Modeling
2008-05-04
HIGH PERFORMANCE PARALLEL ALGORITHMS FOR IMPROVED REDUCED-ORDER MODELING. AFOSR FA9550-05-1-0449, Final Report. Co-PIs: Chris Beattie, Jeff Borggaard, Serkan Gugercin and Traian Iliescu. Post-docs: Andrew Duggelby, Alexander Hay and Sonja Schlaugh. Students: Weston Hunter, Denis Kovacs, Miroslav... [Contacts: Chris Camphouse (937) 255-6326, James Myatt (937) 255-8498. Remainder of abstract garbled in extraction.]
SMOS derived sea ice thickness: algorithm baseline, product specifications and initial verification
NASA Astrophysics Data System (ADS)
Tian-Kunze, X.; Kaleschke, L.; Maaß, N.; Mäkynen, M.; Serra, N.; Drusch, M.; Krumpen, T.
2013-12-01
Following the launch of ESA's Soil Moisture and Ocean Salinity (SMOS) mission it has been shown that brightness temperatures at a low microwave frequency of 1.4 GHz (L-band) are sensitive to sea ice properties. In a first demonstration study, sea ice thickness was derived using a semi-empirical algorithm with constant tie-points. Here we introduce a novel iterative retrieval algorithm that is based on a sea ice thermodynamic model and a three-layer radiative transfer model, which explicitly takes variations of ice temperature and ice salinity into account. In addition, ice thickness variations within a SMOS footprint are considered through a statistical thickness distribution function derived from high-resolution ice thickness measurements from NASA's Operation IceBridge campaign. This new algorithm has been used for the continuous operational production of a SMOS-based sea ice thickness data set from 2010 onwards. This data set is compared with and validated against estimates from assimilation systems, remote sensing data, and airborne electromagnetic sounding data. The comparisons show that the new retrieval algorithm has considerably better agreement with the validation data and delivers a more realistic Arctic-wide ice thickness distribution than the algorithm used in the previous study.
Production Of High Specific Activity Copper-67
Jamriska, Sr., David J.; Taylor, Wayne A.; Ott, Martin A.; Fowler, Malcolm; Heaton, Richard C.
2003-10-28
A process for the selective production and isolation of high-specific-activity Cu-67 from a proton-irradiated enriched Zn-70 target is disclosed. The process comprises target fabrication; target irradiation with low-energy (<25 MeV) protons; chemical separation of the Cu-67 product from the target material and from radioactive impurities of gallium, cobalt, iron, and stable aluminum via electrochemical methods or ion exchange using both anion and cation organic ion exchangers; chemical recovery of the enriched Zn-70 target material; and fabrication of new targets for re-irradiation.
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized
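The multistage truncation scheme described above, eliminating unlikely classes at each stage so that later stages score fewer candidates, can be sketched briefly. The diagonal-Gaussian scores, the threshold value, and the class names below are illustrative assumptions, not the authors' implementation.

```python
import math

def multistage_classify(x, class_means, threshold=0.5):
    """Sketch of multistage classification: accumulate a (diagonal-Gaussian)
    log-likelihood one feature at a time, and after each stage drop classes
    whose score has fallen below `threshold` times the current best, so
    unlikely classes are never scored on the remaining features."""
    candidates = {c: 0.0 for c in class_means}          # running log-likelihoods
    for d in range(len(x)):                             # one stage per feature
        for c in candidates:
            candidates[c] += -(x[d] - class_means[c][d]) ** 2
        best = max(candidates.values())
        # Truncation criterion: keep only classes close to the best so far.
        candidates = {c: s for c, s in candidates.items()
                      if s >= best + math.log(threshold)}
        if len(candidates) == 1:
            break
    return max(candidates, key=candidates.get)

# Hypothetical spectral class means for a two-band pixel:
means = {"water": (0.1, 0.9), "forest": (0.8, 0.2), "urban": (0.7, 0.7)}
label = multistage_classify((0.75, 0.25), means)
```

The saving comes from the early `break` and the shrinking candidate set: with many classes and many spectral bands, most classes are discarded after the first few features.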
High pressure humidification columns: Design equations, algorithm, and computer code
Enick, R.M.; Klara, S.M.; Marano, J.J.
1994-07-01
This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed-tower humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple-stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties, and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air, and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.
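The stage-by-stage marching idea behind such a column model can be caricatured as follows. This is a toy sketch only: the saturation-humidity curve, the single transfer coefficient `k`, and the fixed water-side cooling per stage are invented stand-ins for the report's rate expressions, Lewis relation, and ASPEN flash calculations.

```python
def humidifier_profile(n_stages, t_air_in, w_in, t_water_in, k=0.3):
    """Greatly simplified counter-current humidifier sketch: march stage by
    stage, nudging the air humidity toward saturation at the local water
    temperature. All numbers are illustrative, not design values."""
    def w_sat(t):
        # Toy saturation-humidity curve (kg water / kg dry air), NOT real data.
        return 0.004 * 1.06 ** t

    t_air, w, t_water = t_air_in, w_in, t_water_in
    profile = []
    for _ in range(n_stages):
        w_next = w + k * (w_sat(t_water) - w)   # mass transfer toward saturation
        t_air = t_air + k * (t_water - t_air)   # heat transfer (Lewis number ~ 1)
        t_water = t_water - 0.5                 # water cools as it descends
        w = w_next
        profile.append((t_air, w))
    return profile

profile = humidifier_profile(6, t_air_in=40.0, w_in=0.005, t_water_in=90.0)
```

Even this caricature reproduces the qualitative column behavior: the air heats up and picks up moisture stage after stage, while the water stream cools as it flows down against it.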
Subsemble: an ensemble method for combining subset-specific algorithm fits.
Sapp, Stephanie; van der Laan, Mark J; Canny, John
2014-01-01
Ensemble methods using the same underlying algorithm trained on different subsets of observations have recently received increased attention as practical prediction tools for massive datasets. We propose Subsemble: a general subset ensemble prediction method, which can be used for small, moderate, or large datasets. Subsemble partitions the full dataset into subsets of observations, fits a specified underlying algorithm on each subset, and uses a clever form of V-fold cross-validation to output a prediction function that combines the subset-specific fits. We give an oracle result that provides a theoretical performance guarantee for Subsemble. Through simulations, we demonstrate that Subsemble can be a beneficial tool for small to moderate sized datasets, and often has better prediction performance than the underlying algorithm fit just once on the full dataset. We also describe how to include Subsemble as a candidate in a SuperLearner library, providing a practical way to evaluate the performance of Subsemble relative to the underlying algorithm fit just once on the full dataset.
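The partition-fit-combine pattern described above can be sketched with a deliberately simple underlying algorithm (a no-intercept least-squares slope). Note the combiner here, inverse out-of-fold error weighting, is a plain stand-in for the paper's cross-validated combination step, not the actual Subsemble procedure.

```python
import random

def fit_slope(pairs):
    """Underlying algorithm: least-squares slope for y ~ w * x (no intercept)."""
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    return sxy / sxx

def subsemble_predict(data, x_new, n_subsets=3, n_folds=2, seed=0):
    """Subsemble-style sketch: partition the data, fit the same algorithm on
    each subset, and combine the subset-specific fits using out-of-fold
    performance."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    subsets = [shuffled[i::n_subsets] for i in range(n_subsets)]
    errors = [0.0] * n_subsets
    for j in range(n_folds):                      # V-fold cross-validation
        for k, s in enumerate(subsets):
            train = [p for i, p in enumerate(s) if i % n_folds != j]
            w = fit_slope(train)
            errors[k] += sum((w * x - y) ** 2
                             for i, (x, y) in enumerate(s) if i % n_folds == j)
    full_fits = [fit_slope(s) for s in subsets]   # refit on the whole subsets
    weights = [1.0 / (e + 1e-9) for e in errors]  # stand-in combiner
    return sum(wt * w * x_new for wt, w in zip(weights, full_fits)) / sum(weights)

data = [(float(x), 2.0 * x) for x in range(1, 13)]  # data follow y = 2x exactly
prediction = subsemble_predict(data, 5.0)
```

Because each subset fit only ever sees its own partition, the expensive fitting step parallelizes trivially across subsets, which is what makes the scheme attractive for massive datasets.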
A moving frame algorithm for high Mach number hydrodynamics
NASA Astrophysics Data System (ADS)
Trac, Hy; Pen, Ue-Li
2004-07-01
We present a new approach to Eulerian computational fluid dynamics that is designed to work at high Mach numbers encountered in astrophysical hydrodynamic simulations. Standard Eulerian schemes that strictly conserve total energy suffer from the high Mach number problem and proposed solutions to additionally solve the entropy or thermal energy still have their limitations. In our approach, the Eulerian conservation equations are solved in an adaptive frame moving with the fluid where Mach numbers are minimized. The moving frame approach uses a velocity decomposition technique to define local kinetic variables while storing the bulk kinetic components in a smoothed background velocity field that is associated with the grid velocity. Gravitationally induced accelerations are added to the grid, thereby minimizing the spurious heating problem encountered in cold gas flows. Separately tracking local and bulk flow components allows thermodynamic variables to be accurately calculated in both subsonic and supersonic regions. A main feature of the algorithm, that is not possible in previous Eulerian implementations, is the ability to resolve shocks and prevent spurious heating where both the pre-shock and post-shock fluid are supersonic. The hybrid algorithm combines the high-resolution shock capturing ability of the second-order accurate Eulerian TVD scheme with a low-diffusion Lagrangian advection scheme. We have implemented a cosmological code where the hydrodynamic evolution of the baryons is captured using the moving frame algorithm while the gravitational evolution of the collisionless dark matter is tracked using a particle-mesh N-body algorithm. Hydrodynamic and cosmological tests are described and results presented. The current code is fast, memory-friendly, and parallelized for shared-memory machines.
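The velocity decomposition at the heart of the moving-frame approach can be illustrated in one dimension. The moving-average smoother below is an arbitrary stand-in for however the actual code constructs the smoothed background (grid) velocity field; the point is only that the residual the solver must handle is far smaller than the raw velocity.

```python
def smooth(v, passes=2):
    """Simple moving average, standing in for the smoothed background
    velocity field associated with the grid."""
    for _ in range(passes):
        v = [(v[max(i - 1, 0)] + v[i] + v[min(i + 1, len(v) - 1)]) / 3.0
             for i in range(len(v))]
    return v

def decompose(velocity):
    """Moving-frame-style velocity decomposition (1-D sketch): the bulk
    component rides with the grid, and only the local residual sets the
    effective Mach number the Eulerian solver has to resolve."""
    bulk = smooth(velocity)
    local = [v - b for v, b in zip(velocity, bulk)]
    return bulk, local

# A fast bulk flow (~10 units) carrying small local fluctuations (~0.1 units):
v = [10.0 + 0.1 * ((-1) ** i) for i in range(8)]
bulk, local = decompose(v)
```

After the decomposition, thermodynamic quantities can be computed from the small local components, avoiding the catastrophic cancellation that occurs when thermal energy is obtained by subtracting two nearly equal large numbers (total minus bulk kinetic energy) at high Mach number.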
High specific energy, high capacity nickel-hydrogen cell design
NASA Technical Reports Server (NTRS)
Wheeler, James R.
1993-01-01
A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet made at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.
Algorithmic Tools for Mining High-Dimensional Cytometry Data.
Chester, Cariad; Maecker, Holden T
2015-08-01
The advent of mass cytometry has led to an unprecedented increase in the number of analytes measured in individual cells, thereby increasing the complexity and information content of cytometric data. Although this technology is ideally suited to the detailed examination of the immune system, the applicability of the different methods for analyzing such complex data is less clear. Conventional data analysis by manual gating of cells in biaxial dot plots is often subjective, time consuming, and neglectful of much of the information contained in a highly dimensional cytometric dataset. Algorithmic data mining has the promise to eliminate these concerns, and several such tools have been applied recently to mass cytometry data. We review computational data mining tools that have been used to analyze mass cytometry data, outline their differences, and comment on their strengths and limitations. This review will help immunologists to identify suitable algorithmic tools for their particular projects. Copyright © 2015 by The American Association of Immunologists, Inc.
Ferreira, Jorge; Correia, Sara; Rocha, Miguel
2017-03-01
Genome-Scale Metabolic Models (GSMMs), mathematical representations of cell metabolism in different organisms including humans, are resourceful tools to simulate metabolic phenotypes and understand associated diseases, such as obesity, diabetes and cancer. In recent years, different algorithms have been developed to generate tissue-specific metabolic models that simulate different phenotypes for distinct cell types. Hepatocytes are one of the main sites of metabolic conversions, mainly due to their diverse physiological functions. Most of the liver's tissue is formed by hepatocytes, and the liver is one of the largest organs and among the most important with regard to its biological functions. Hepatocellular carcinoma is also one of the most important human cancers, with high mortality rates. In this study, we analyze four different algorithms (MBA, mCADRE, tINIT and FASTCORE) for tissue-specific model reconstruction, based on a template model and two types of data sources: transcriptomics and proteomics. These methods are applied to the reconstruction of metabolic models for hepatocyte cells and the HepG2 cancer cell line. The models are analyzed and compared under different perspectives, emphasizing their functional analysis considering a set of metabolic liver tasks. The results show that there is no "ideal" algorithm. However, with the current analysis, we were able to retrieve knowledge about the metabolism of the liver.
A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme
2014-03-01
Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent when reviewing fine anatomical structures such as ribs while assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration.
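The matched-filter alignment idea, sliding the CT-derived centerline over the PET uptake pattern and keeping the rigid offset with the highest response, reduces in one dimension to a cross-correlation peak search. The profiles and shift range below are made-up toy values, not data from the paper.

```python
def best_shift(ct_profile, pet_profile, max_shift=5):
    """1-D stand-in for the rib-specific registration: try each rigid shift
    of the CT centerline profile against the PET uptake profile and keep the
    one with the highest matched-filter response (here, a dot product)."""
    best, best_score = 0, float("-inf")
    n = len(pet_profile)
    for s in range(-max_shift, max_shift + 1):
        score = sum(ct_profile[i] * pet_profile[i + s]
                    for i in range(min(n, len(ct_profile))) if 0 <= i + s < n)
        if score > best_score:
            best, best_score = s, score
    return best

ct = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]   # rib position in the CT volume
pet = [0, 0, 0, 0, 1, 1, 1, 0, 0, 0]  # same rib in PET, displaced by respiration
shift = best_shift(ct, pet)
```

Because each rib gets only a rigid offset, lesion shapes, and hence SUV measurements, are left untouched, which is the advantage over deformable registration that the abstract emphasizes.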
Stanley, Nick; Glide-Hurst, Carri; Kim, Jinkoo; Adams, Jeffrey; Li, Shunshan; Wen, Ning; Chetty, Indrin J.; Zhong, Hualiang
2014-01-01
The quality of adaptive treatment planning depends on the accuracy of its underlying deformable image registration (DIR). The purpose of this study is to evaluate the performance of two DIR algorithms, B-spline–based deformable multipass (DMP) and deformable demons (Demons), implemented in a commercial software package. Evaluations were conducted using both computational and physical deformable phantoms. Based on a finite element method (FEM), a total of 11 computational models were developed from a set of CT images acquired from four lung and one prostate cancer patients. FEM generated displacement vector fields (DVF) were used to construct the lung and prostate image phantoms. Based on a fast-Fourier transform technique, image noise power spectrum was incorporated into the prostate image phantoms to create simulated CBCT images. The FEM-DVF served as a gold standard for verification of the two registration algorithms performed on these phantoms. The registration algorithms were also evaluated at the homologous points quantified in the CT images of a physical lung phantom. The results indicated that the mean errors of the DMP algorithm were in the range of 1.0 ~ 3.1 mm for the computational phantoms and 1.9 mm for the physical lung phantom. For the computational prostate phantoms, the corresponding mean error was 1.0–1.9 mm in the prostate, 1.9–2.4 mm in the rectum, and 1.8–2.1 mm over the entire patient body. Sinusoidal errors induced by B-spline interpolations were observed in all the displacement profiles of the DMP registrations. Regions of large displacements were observed to have more registration errors. Patient-specific FEM models have been developed to evaluate the DIR algorithms implemented in the commercial software package. It has been found that the accuracy of these algorithms is patient-dependent and related to various factors including tissue deformation magnitudes and image intensity gradients across the regions of interest. This may
Jimenez, Edward Steven,
2013-09-01
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Dinc, Ali
2016-09-01
In this study, an original code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) by employing an elitist genetic algorithm. First, preliminary sizing of a UAV and its turboprop engine was done by the code for a given mission profile. Secondly, single- and multi-objective optimizations were performed for selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In single-objective optimization, as the first case, UAV loiter time was improved by 17.5% over the baseline within the given boundaries, or constraints, on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% over the baseline. In the multi-objective optimization case, where the previous two objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% over the baseline, respectively, for the same constraints.
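An elitist genetic algorithm of the kind employed above can be sketched compactly. This is not the author's code: the real-coded operators, population settings, and the toy two-parameter objective (standing in for compressor pressure ratio and burner exit temperature) are all illustrative assumptions.

```python
import random

def elitist_ga(fitness, bounds, pop_size=30, gens=60, elite=2, seed=1):
    """Minimal elitist GA sketch: real-coded individuals, tournament
    selection, blend crossover, Gaussian mutation, with the best `elite`
    individuals copied unchanged into each new generation."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    pop = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = [ind[:] for ind in scored[:elite]]            # elitism
        while len(nxt) < pop_size:
            # Tournament selection of two parents
            a, b = (max(rng.sample(scored, 3), key=fitness) for _ in range(2))
            # Blend crossover, then Gaussian mutation clamped to the bounds
            child = [x + rng.uniform(-0.5, 1.5) * (y - x) for x, y in zip(a, b)]
            child = [min(max(c + rng.gauss(0, 0.05 * (h - l)), l), h)
                     for c, l, h in zip(child, lo, hi)]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy stand-in objective with its peak at pressure ratio 12 and 1500 K:
best = elitist_ga(lambda p: -(p[0] - 12.0) ** 2 - ((p[1] - 1500.0) / 100.0) ** 2,
                  bounds=[(5.0, 20.0), (1200.0, 1700.0)])
```

Elitism is what guarantees the best-so-far design is never lost between generations, which is why it is the standard choice for engine-cycle optimizations like the one described.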
Utility of gene-specific algorithms for predicting pathogenicity of uncertain gene variants
Lyon, Elaine; Williams, Marc S; Narus, Scott P; Facelli, Julio C; Mitchell, Joyce A
2011-01-01
The rapid advance of gene sequencing technologies has produced an unprecedented rate of discovery of genome variation in humans. A growing number of authoritative clinical repositories archive gene variants and disease phenotypes, yet there are currently many more gene variants that lack clear annotation or disease association. To date, there has been very limited coverage of gene-specific predictors in the literature. Here the evaluation is presented of “gene-specific” predictor models based on a naïve Bayesian classifier for 20 gene–disease datasets, containing 3986 variants with clinically characterized patient conditions. The utility of gene-specific prediction is then compared with “all-gene” generalized prediction and also with existing popular predictors. Gene-specific computational prediction models derived from clinically curated gene variant disease datasets often outperform established generalized algorithms for novel and uncertain gene variants. PMID:22037892
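The gene-specific modeling idea, one naïve Bayesian classifier per gene, trained only on that gene's clinically characterized variants, can be sketched as follows. The gene name, the single binary feature, and the tiny training set are hypothetical illustrations, not data from the study.

```python
import math
from collections import defaultdict

def train_gene_specific_nb(variants):
    """One naive Bayes model per gene. `variants` rows: (gene, features, label),
    where features is a dict of discrete feature values."""
    models = defaultdict(lambda: {"prior": defaultdict(int),
                                  "counts": defaultdict(lambda: defaultdict(int))})
    for gene, feats, label in variants:
        m = models[gene]
        m["prior"][label] += 1
        for f, v in feats.items():
            m["counts"][(f, v)][label] += 1
    return models

def predict(models, gene, feats):
    """Score labels with the gene's own model only (the 'gene-specific' part)."""
    m = models[gene]
    total = sum(m["prior"].values())
    best, best_lp = None, float("-inf")
    for label, n in m["prior"].items():
        lp = math.log(n / total)
        for f, v in feats.items():
            c = m["counts"][(f, v)][label]
            lp += math.log((c + 1) / (n + 2))   # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical toy dataset: conservation of the variant site as the lone feature.
data = [("BRCA1", {"conserved": 1}, "pathogenic"),
        ("BRCA1", {"conserved": 1}, "pathogenic"),
        ("BRCA1", {"conserved": 0}, "benign")]
models = train_gene_specific_nb(data)
```

Training per gene lets each model learn feature-label associations that hold for that gene's protein context, which is the abstract's explanation for why gene-specific models can outperform generalized predictors.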
Finite element solution for energy conservation using a highly stable explicit integration algorithm
NASA Technical Reports Server (NTRS)
Baker, A. J.; Manhardt, P. D.
1972-01-01
Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.
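The notion of a stable explicit integration of the transient energy equation can be illustrated on a 1-D conduction problem. COMOC's actual error-control machinery is not reproduced here; a fixed safety factor on the explicit diffusion stability limit stands in for it, and all material values are illustrative.

```python
def explicit_heat_step(T, alpha, dx, safety=0.9):
    """Explicit update of the 1-D conduction equation dT/dt = alpha * d2T/dx2
    with fixed-temperature ends. The time step is held below the explicit
    stability limit dt <= dx^2 / (2*alpha), so the march cannot blow up."""
    dt = safety * dx * dx / (2.0 * alpha)
    r = alpha * dt / (dx * dx)          # r < 0.5 guarantees stability
    inner = [T[i] + r * (T[i - 1] - 2.0 * T[i] + T[i + 1])
             for i in range(1, len(T) - 1)]
    return [T[0]] + inner + [T[-1]], dt

# Hot bar with cold, fixed ends (illustrative values):
T = [0.0] + [100.0] * 5 + [0.0]
for _ in range(200):
    T, dt = explicit_heat_step(T, alpha=1e-5, dx=0.01)
```

With the step size tied to the stability limit, the temperatures relax smoothly toward the steady profile; an automatic error controller, as in COMOC, would additionally shrink the step whenever the local truncation error estimate grew too large.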
Production of high specific activity silicon-32
Phillips, Dennis R.; Brzezinski, Mark A.
1994-01-01
A process for the preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution; filtering the solution; adjusting the pH of the solution to from about 5.5 to about 7.5; admixing sufficient molybdate reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex; contacting the solution including the silicon-molybdate complex with a dextran-based material; washing the dextran-based material to remove residual contaminants such as sodium-22; separating the silicon-molybdate complex from the dextran-based material as another solution; adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidation state of the molybdate adapted for subsequent separation by an anion exchange material; contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution; and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.
Zhang, Yanjun; Krueger, Dana; Durst, Robert; Lee, Rupo; Wang, David; Seeram, Navindra; Heber, David
2009-03-25
The pomegranate fruit (Punica granatum) has become an international high-value crop for the production of commercial pomegranate juice (PJ). The perceived consumer value of PJ is due in large part to its potential health benefits, based on a significant body of medical research conducted with authentic PJ. To establish criteria for authenticating PJ, a new International Multidimensional Authenticity Specifications (IMAS) algorithm was developed through consideration of existing databases and comprehensive chemical characterization of 45 commercial juice samples from 23 different manufacturers in the United States. In addition to analysis of commercial juice samples obtained in the United States, data from other analyses of pomegranate juice and fruits, including samples from Iran, Turkey, Azerbaijan, Syria, India, and China, were considered in developing this protocol. There is universal agreement that the presence of a highly constant group of six anthocyanins together with punicalagins characterizes the polyphenols in PJ. At a total sugar concentration of 16 degrees Brix, PJ contains characteristic sugars including mannitol at >0.3 g/100 mL. Ratios of glucose to mannitol of 4-15 and of glucose to fructose of 0.8-1.0 are also characteristic of PJ. In addition, no sucrose should be present, because of isomerase activity during commercial processing. A stable isotope ratio, determined by mass spectrometry, of > -25 per thousand assures that no corn or cane sugar has been added to PJ. Sorbitol was present at <0.025 g/100 mL; maltose and tartaric acid were not detected. The presence of the amino acid proline at >25 mg/L is indicative of added grape products. Malic acid at >0.1 g/100 mL indicates adulteration with apple, pear, grape, cherry, plum, or aronia juice. Other adulteration methods include the addition of highly concentrated aronia, blueberry, or blackberry juices or natural grape pigments to poor-quality juices to imitate the color of pomegranate juice, which results in
2007-08-01
Advanced non-linear control algorithms applied to design highly maneuverable Autonomous Underwater Vehicles (AUVs). Vladimir Djapic, Jay A. Farrell... hierarchical, such that an "inner-loop" non-linear controller (which outputs the appropriate thrust values) is the same for all mission scenarios, while a library of "outer-loop" non-linear controllers is available to implement specific maneuvering scenarios. On top of the outer loop is the mission planner
A high resolution spectrum reconstruction algorithm using compressive sensing theory
NASA Astrophysics Data System (ADS)
Zheng, Zhaoyu; Liang, Dakai; Liu, Shulin; Feng, Shuqing
2015-07-01
This paper proposes a quick spectrum scanning and reconstruction method using compressive sensing in a composite structure. The strain field of a corrugated structure is simulated by finite element analysis, and the reflected spectrum is then calculated using an improved transfer matrix algorithm. A sparse dictionary is trained by K-means singular value decomposition (K-SVD). In the test, a spectrum with a limited number of sample points can be obtained, and the high-resolution spectrum is reconstructed by solving the sparse representation equation. Compared with other conventional bases, this method performs better: the match rate between the recovered spectrum and the original spectrum is over 95%.
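The sparse-recovery step such methods rely on can be illustrated with a generic orthogonal matching pursuit (OMP) solver. This is a minimal sketch: the random sensing matrix and sparsity level below are synthetic stand-ins, not the paper's K-SVD dictionary or spectral data.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x with y ~= A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the dictionary atom most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the selected atoms, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))            # 30 measurements, 100 atoms
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]        # 3-sparse signal
y = A @ x_true                                 # compressed measurements
x_hat = omp(A, y, 3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

With far fewer measurements than atoms, the sparse coefficients are still recovered exactly, which is the effect the abstract's >95% match rate reflects.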
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
Mapping, i.e., deciding which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the most promising solutions in a large design space, with low communication costs that are optimal in some cases. Another evaluated parameter, the vulnerability index, is then considered as a means of estimating the fault-tolerance property of each produced mapping. Finally, to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that greater flexibility in prioritizing solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
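The final trade-off step can be sketched as a linear scoring function over normalized communication cost and vulnerability index. The weights and candidate mappings below are illustrative, not values from the paper:

```python
# Rank candidate core-to-router mappings by a linear combination of
# normalized communication cost and vulnerability index (lower is better).
def rank_mappings(mappings, alpha=0.5):
    costs = [m["comm_cost"] for m in mappings]
    vulns = [m["vulnerability"] for m in mappings]
    def norm(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)
    scored = []
    for m in mappings:
        score = (alpha * norm(m["comm_cost"], min(costs), max(costs))
                 + (1 - alpha) * norm(m["vulnerability"], min(vulns), max(vulns)))
        scored.append((score, m["id"]))
    return [mid for _, mid in sorted(scored)]

maps = [{"id": "A", "comm_cost": 120, "vulnerability": 0.8},
        {"id": "B", "comm_cost": 150, "vulnerability": 0.2},
        {"id": "C", "comm_cost": 100, "vulnerability": 0.9}]
print(rank_mappings(maps, alpha=0.7))   # cost-weighted: ['C', 'A', 'B']
```

Shifting `alpha` toward 0 prioritizes fault tolerance instead, which is the kind of designer-controlled trade-off the abstract describes (there refined further with fuzzy if-then rules).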
Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim
2015-01-01
The aim of this study was to present a new training algorithm for artificial neural networks, called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics of the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but MOBJ-LASSO achieved better results because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining with neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.
Faraoni, David; Willems, Ariane; Romlin, Birgitta S; Belisle, Sylvain; Van der Linden, Philippe
2015-05-01
Although rotational thromboelastometry (ROTEM) is increasingly used to guide haemostatic therapy in bleeding patients, there is a paucity of data guiding its use in the paediatric population. The objective of this study was to develop an algorithm, on the basis of ROTEM values obtained in our paediatric cardiac population, to guide the management of the bleeding child. A retrospective analysis. Department of Anaesthesiology, Queen Fabiola Children's University Hospital. Data were collected between September 2010 and January 2012. All children who underwent elective cardiac surgery requiring cardiopulmonary bypass (CPB) were reviewed. None. Significant postoperative bleeding was defined as blood loss of more than 10% of the child's estimated blood volume within the first six postoperative hours, dividing our population into high blood loss (HBL) and low blood loss (LBL) groups. Factors independently associated with postoperative bleeding determined the bleeding probability. Receiver operating characteristic (ROC) curves were constructed with the aim of determining relevant ROTEM parameters (including clot amplitude 10 min after administration of protamine [A10]) to be used in our algorithm. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were determined for the developed algorithm. One hundred and fifty children were included in our study. Univariate and multivariate logistic regression analysis revealed that preoperative weight (kg), presence of a cyanotic disease (yes/no) and wound closure duration (min) were independent predictors of postoperative bleeding. Analysis of our ROTEM parameters revealed that clotting time (CT) ≥ 111 s, A10 ≤ 38 mm measured on the EXTEM test and A10 ≤ 3 mm obtained on the FIBTEM test were the three relevant parameters to guide haemostatic therapy. If the ROTEM-based algorithm was applied according to the bleeding risk (n = 65), 27 out of 29 of the HBL and 24 out of 36
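The three reported cutoffs lend themselves to a simple decision rule. The sketch below transcribes the thresholds from the abstract (CT ≥ 111 s, EXTEM A10 ≤ 38 mm, FIBTEM A10 ≤ 3 mm); the mapping from each abnormal parameter to a therapy is a hypothetical illustration, not the published algorithm:

```python
# Hypothetical threshold logic using the three ROTEM cutoffs from the study;
# the therapy suggested for each abnormal parameter is illustrative only.
def rotem_guidance(ct_s, extem_a10_mm, fibtem_a10_mm):
    actions = []
    if fibtem_a10_mm <= 3:
        actions.append("fibrinogen")           # low clot firmness on FIBTEM
    if extem_a10_mm <= 38 and fibtem_a10_mm > 3:
        actions.append("platelets")            # low EXTEM firmness, fibrinogen ok
    if ct_s >= 111:
        actions.append("coagulation factors")  # prolonged clotting time
    return actions or ["no haemostatic therapy indicated"]

print(rotem_guidance(ct_s=130, extem_a10_mm=35, fibtem_a10_mm=2))
```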
Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE
Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.
2009-01-01
PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities, an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and activity suggest that optimization minimizing the RMS error of total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842
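The core branched-algorithm idea, routing each epoch to a heart-rate or accelerometry regression depending on activity intensity, can be sketched as follows. The branch point and regression coefficients are invented for illustration, not the study's calibration values:

```python
# Minimal branched-algorithm sketch (assumed structure): an accelerometer
# threshold routes each 1-minute epoch to either the ACC-PAEE or HR-PAEE
# regression. All numeric parameters here are illustrative.
def predict_paee(acc_counts, hr_bpm, acc_branch=500,
                 acc_coef=(0.4, 0.002), hr_coef=(-15.0, 0.25)):
    if acc_counts < acc_branch:          # sedentary/light: trust accelerometry
        a, b = acc_coef
        return a + b * acc_counts
    a, b = hr_coef                       # moderate/vigorous: trust heart rate
    return a + b * hr_bpm

print(predict_paee(acc_counts=200, hr_bpm=70))    # ACC branch
print(predict_paee(acc_counts=2000, hr_bpm=120))  # HR branch
```

Post hoc optimization in the study amounts to tuning the branch point (and branch weights) so that predicted total PAEE best matches the calorimetry criterion.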
Mayer, Andrew R; Dodd, Andrew B; Ling, Josef M; Wertz, Christopher J; Shaff, Nicholas A; Bedrick, Edward J; Viamonte, Carlo
2017-03-20
The need for algorithms that capture subject-specific abnormalities (SSA) in neuroimaging data is increasingly recognized across many neuropsychiatric disorders. However, the effects of initial distributional properties (e.g., normal versus non-normally distributed data), sample size, and typical preprocessing steps (spatial normalization, blurring kernel and minimal cluster requirements) on SSA remain poorly understood. The current study evaluated the performance of several commonly used z-transform algorithms [leave-one-out (LOO); independent sample (IDS); Enhanced Z-score Microstructural Assessment of Pathology (EZ-MAP); distribution-corrected z-scores (DisCo-Z); and robust z-scores (ROB-Z)] for identifying SSA using simulated and diffusion tensor imaging data from healthy controls (N = 50). Results indicated that all methods (LOO, IDS, EZ-MAP and DisCo-Z) with the exception of the ROB-Z eliminated spurious differences that are present across artificially created groups following a standard z-transform. However, LOO and IDS consistently overestimated the true number of extrema (i.e., SSA) across all sample sizes and distributions. The EZ-MAP and DisCo-Z algorithms more accurately estimated extrema across most distributions and sample sizes, with the exception of skewed distributions. DTI results indicated that registration algorithm (linear versus non-linear) and blurring kernel size differentially affected the number of extrema in positive versus negative tails. Increasing the blurring kernel size increased the number of extrema, although this effect was much more prominent when a minimum cluster volume was applied to the data. In summary, current results highlight the need to statistically compare the frequency of SSA in control samples or to develop appropriate confidence intervals for patient data.
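The leave-one-out (LOO) z-transform evaluated above can be sketched directly: each subject is scored against the mean and SD of the remaining controls, so their own data do not contaminate the normative statistics. The data below are synthetic:

```python
import numpy as np

def loo_z(values):
    """Leave-one-out z-scores: score each value against the others' mean/SD."""
    values = np.asarray(values, dtype=float)
    z = np.empty_like(values)
    for i in range(len(values)):
        rest = np.delete(values, i)
        z[i] = (values[i] - rest.mean()) / rest.std(ddof=1)
    return z

data = [1.0, 1.2, 0.9, 1.1, 5.0]    # last value is an outlier
z = loo_z(data)
print(np.argmax(np.abs(z)))          # index of the most extreme subject
```

Because the outlier is excluded from its own normative sample, its z-score is not diluted, which is also why LOO-style methods tend to overestimate extrema in small samples, as the study reports.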
High-Speed General Purpose Genetic Algorithm Processor.
Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah
2016-07-01
In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GAs procedure. Hence, we designed a digital CMOS implementation of GA in [Formula: see text] process. The proposed processor is not bounded to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through the bit string length extension of individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GAs procedure can be run on several connected processors simultaneously.
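The steady-state scheme with a discard operator for infeasible offspring, which the processor implements in hardware, can be sketched in software. The toy problem (maximize set bits subject to an evenness constraint) and all parameters are illustrative:

```python
import random

# Steady-state GA sketch with a "discard" operator: infeasible children are
# simply dropped, mirroring the constrained-problem support described above.
def steady_state_ga(fitness, feasible, bits=16, pop_size=20, iters=500, seed=3):
    rng = random.Random(seed)
    pop = [rng.getrandbits(bits) for _ in range(pop_size)]
    pop = [p for p in pop if feasible(p)] or [0]
    for _ in range(iters):
        if len(pop) >= 2:
            a, b = rng.sample(pop, 2)
        else:
            a = b = pop[0]
        point = rng.randrange(1, bits)
        mask = (1 << point) - 1
        child = (a & mask) | (b & ~mask)       # one-point crossover
        child ^= 1 << rng.randrange(bits)      # single-bit mutation
        if not feasible(child):
            continue                           # discard operator
        worst = min(pop, key=fitness)
        if fitness(child) > fitness(worst):
            pop[pop.index(worst)] = child      # steady-state replacement
    return max(pop, key=fitness)

# Maximize the number of set bits, subject to the value being even.
best = steady_state_ga(lambda x: bin(x).count("1"), lambda x: x % 2 == 0)
print(bin(best).count("1"))
```

The hardware gains described in the abstract come from pipelining and parallelizing exactly these steps (selection, crossover, fitness evaluation), not from changing the algorithm itself.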
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows an execution time 2.5 times faster than a CPU-only detection algorithm.
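The parallelized classification step reduces to a k-NN vote over beat feature vectors. A vectorized CPU sketch with synthetic features (not the paper's CUDA kernel or MIT-BIH features) might look like:

```python
import numpy as np

# Vectorized k-NN beat classifier: a CPU stand-in for a CUDA kernel.
def knn_classify(train_x, train_y, query_x, k=3):
    # Pairwise squared Euclidean distances: queries x training beats.
    d2 = ((query_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]       # k nearest per query
    votes = train_y[nearest]
    # Majority vote per query row.
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.3, size=(20, 4))       # synthetic "normal" beats
ectopic = rng.normal(3.0, 0.3, size=(20, 4))      # synthetic "ectopic" beats
X = np.vstack([normal, ectopic])
y = np.array([0] * 20 + [1] * 20)
queries = np.array([[0.1, 0.0, -0.1, 0.2], [3.1, 2.9, 3.0, 2.8]])
print(knn_classify(X, y, queries))   # expected: [0 1]
```

The distance matrix computation is embarrassingly parallel, which is why mapping it onto GPU threads yields the speedup reported in the abstract.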
Robust Optimization Design Algorithm for High-Frequency TWTs
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Chevalier, Christine T.
2010-01-01
Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
High performance computational chemistry: Towards fully distributed parallel algorithms
Guest, M.F.; Apra, E.; Bernholdt, D.E.
1994-07-01
An account is given of work in progress within the High Performance Computational Chemistry Group (HPCC) at the Pacific Northwest Laboratory (PNL) to develop molecular modeling software applications for massively parallel processors (MPPs). A discussion of the issues in developing scalable parallel algorithms is presented, with a particular focus on the distribution, as opposed to the replication, of key data structures. Replication of large data structures limits the maximum calculation size by imposing a low ratio of processors to memory. Only applications that distribute both data and computation across processors are truly scalable. The use of shared data structures, which may be independently accessed by each process even in a distributed-memory environment, greatly simplifies development and provides a significant performance enhancement. In describing tools to support this programming paradigm, an outline is given of the implementation and performance of a highly efficient and scalable algorithm to perform quadratically convergent, self-consistent field calculations on molecular systems. A brief account is given of the development of corresponding MPP capabilities in the areas of periodic Hartree Fock, Moeller-Plesset perturbation theory (MP2), density functional theory, and molecular dynamics. Performance figures are presented using both the Intel Touchstone Delta and Kendall Square Research KSR-2 supercomputers.
Site-specific range uncertainties caused by dose calculation algorithms for proton therapy
NASA Astrophysics Data System (ADS)
Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.
2014-08-01
The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose we investigated the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal positions of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be
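The recommended margin rules take the usual "percentage of range plus fixed offset" form and are easy to apply; the site table below transcribes the values quoted in the abstract:

```python
# Site-specific proton range margins as "a% of range + b mm", transcribed
# from the abstract; unlisted sites fall back to the quoted generic margin.
MARGINS = {"liver": (2.8, 1.2), "prostate": (2.8, 1.2),
           "whole brain": (3.1, 1.2), "generic": (6.3, 1.2)}

def range_margin_mm(site, range_mm):
    pct, add_mm = MARGINS.get(site, MARGINS["generic"])
    return pct / 100.0 * range_mm + add_mm

print(range_margin_mm("liver", 100.0))   # about 4.0 mm for a 100 mm range
```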
High flux isotope reactor technical specifications
Not Available
1982-04-01
Technical specifications are presented concerning safety limits and limiting safety system settings; limiting conditions for operation; surveillance requirements; design features; administrative controls; and accidents and anticipated transients.
Algorithms for High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Morookian, John-Michael; Lambert, James
2010-01-01
Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out only the ROI that contains the cornea and pupil. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea
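The row-slice idea can be sketched on a thresholded frame: each horizontal slice crossing the pupil yields a chord, and by circular symmetry the chord midpoints locate the centre. The synthetic image below is illustrative; this is not the NPO-30700 implementation:

```python
import numpy as np

# Estimate a pupil centre from horizontal row slices of a binary (thresholded)
# image: each row crossing the pupil gives a chord whose midpoint lies on the
# vertical axis of symmetry.
def pupil_center(binary_img):
    xs, ys = [], []
    for r, row in enumerate(binary_img):
        cols = np.flatnonzero(row)
        if cols.size:                               # this slice crosses the pupil
            xs.append((cols[0] + cols[-1]) / 2.0)   # chord midpoint
            ys.append(r)
    return (float(np.mean(xs)), float(np.mean(ys)))

# Synthetic 64x64 frame with a filled circle ("pupil") at (30, 22), radius 8.
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 30) ** 2 + (yy - 22) ** 2 <= 8 ** 2)
cx, cy = pupil_center(img)
print(round(cx), round(cy))   # close to (30, 22)
```

Because only the rows intersecting the pupil need to be read out, the same logic maps naturally onto the camera's ROI readout and supports the high frame rates described above.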
High specific energy, high capacity nickel-hydrogen cell design
NASA Technical Reports Server (NTRS)
Wheeler, James R.
1993-01-01
A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.
Site-specific range uncertainties caused by dose calculation algorithms for proton therapy
Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.
2014-01-01
The purpose of this study was to investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for 7 disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head & neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and Monte Carlo algorithms to obtain the average range differences (ARD) and root mean square deviation (RMSD) for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation (ADD) of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing Monte Carlo dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head & neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head & neck treatments. We conclude that currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific
EpiTracer - an algorithm for identifying epicenters in condition-specific biological networks.
Sambaturu, Narmada; Mishra, Madhulika; Chandra, Nagasuma
2016-08-18
In biological systems, diseases are caused by small perturbations in a complex network of interactions between proteins. Perturbations typically affect only a small number of proteins, which go on to disturb a larger part of the network. To counteract this, a stress-response is launched, resulting in a complex pattern of variations in the cell. Identifying the key players involved in either spreading the perturbation or responding to it can give us important insights. We develop an algorithm, EpiTracer, which identifies the key proteins, or epicenters, from which a large number of changes in the protein-protein interaction (PPI) network ripple out. We propose a new centrality measure, ripple centrality, which measures how effectively a change at a particular node can ripple across the network by identifying highest activity paths specific to the condition of interest, obtained by mapping gene expression profiles to the PPI network. We demonstrate the algorithm using an overexpression study and a knockdown study. In the overexpression study, the gene that was overexpressed (PARK2) was highlighted as the most important epicenter specific to the perturbation. The other top-ranked epicenters were involved in either supporting the activity of PARK2, or counteracting it. Also, 5 of the identified epicenters showed no significant differential expression, showing that our method can find information which simple differential expression analysis cannot. In the second dataset (SP1 knockdown), alternative regulators of SP1 targets were highlighted as epicenters. Also, the gene that was knocked down (SP1) was picked up as an epicenter specific to the control condition. Sensitivity analysis showed that the genes identified as epicenters remain largely unaffected by small changes. We develop an algorithm, EpiTracer, to find epicenters in condition-specific biological networks, given the PPI network and gene expression levels. EpiTracer includes programs which can extract the
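A ripple-centrality-style score can be sketched by running Dijkstra on negative log edge activities, so that shortest paths correspond to highest-activity paths. The graph, activity values, and cutoff below are toys, and the scoring is a simplification of EpiTracer's, not its exact definition:

```python
import heapq
import math

# Score each node by how many others it can reach through highest-activity
# paths whose cumulative activity (product of edge activities) stays above a
# cutoff. Dijkstra on -log(activity) finds the max-product path.
def ripple_score(graph, activity, cutoff=0.1):
    scores = {}
    for src in graph:
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, math.inf):
                continue
            for v in graph.get(u, []):
                nd = d - math.log(activity[(u, v)])   # max-product path cost
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        # Count nodes reachable with path activity exp(-d) above the cutoff.
        scores[src] = sum(1 for v, d in dist.items()
                          if v != src and math.exp(-d) > cutoff)
    return scores

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
activity = {("A", "B"): 0.9, ("A", "C"): 0.3, ("B", "D"): 0.8, ("C", "D"): 0.2}
scores = ripple_score(graph, activity)
print(max(scores, key=scores.get))   # "A" ripples out furthest
```

Mapping condition-specific expression onto the edge activities, as the abstract describes, makes the resulting epicenters specific to that condition.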
Mukherjee, Kaushik; Gupta, Sanjay
2017-03-01
Several mechanobiology algorithms have been employed to simulate bone ingrowth around porous coated implants. However, there is a scarcity of quantitative comparison between the efficacies of commonly used mechanoregulatory algorithms. The objectives of this study are: (1) to predict peri-acetabular bone ingrowth using cell-phenotype specific algorithm and to compare these predictions with those obtained using phenomenological algorithm and (2) to investigate the influences of cellular parameters on bone ingrowth. The variation in host bone material property and interfacial micromotion of the implanted pelvis were mapped onto the microscale model of implant-bone interface. An overall variation of 17-88 % in peri-acetabular bone ingrowth was observed. Despite differences in predicted tissue differentiation patterns during the initial period, both the algorithms predicted similar spatial distribution of neo-tissue layer, after attainment of equilibrium. Results indicated that phenomenological algorithm, being computationally faster than the cell-phenotype specific algorithm, might be used to predict peri-prosthetic bone ingrowth. The cell-phenotype specific algorithm, however, was found to be useful in numerically investigating the influence of alterations in cellular activities on bone ingrowth, owing to biologically related factors. Amongst the host of cellular activities, matrix production rate of bone tissue was found to have predominant influence on peri-acetabular bone ingrowth.
NASA Astrophysics Data System (ADS)
Palamara, Simone; Vergara, Christian; Faggiano, Elena; Nobile, Fabio
2015-02-01
The Purkinje network is responsible for the fast and coordinated distribution of the electrical impulse in the ventricle that triggers its contraction. Therefore, it is necessary to model its presence to obtain an accurate patient-specific model of the ventricular electrical activation. In this paper, we present an efficient algorithm for the generation of a patient-specific Purkinje network, driven by measures of the electrical activation acquired on the endocardium. The proposed method provides a correction of an initial network, generated by means of a fractal law, and it is based on the solution of Eikonal problems both in the muscle and in the Purkinje network. We present several numerical results both in an ideal geometry with synthetic data and in a real geometry with patient-specific clinical measures. These results highlight an improvement of the accuracy provided by the patient-specific Purkinje network with respect to the initial one. In particular, a cross-validation test shows an accuracy increase of 19% when only the 3% of the total points are used to generate the network, whereas an increment of 44% is observed when a random noise equal to 20% of the maximum value of the clinical data is added to the measures.
Measuring Specific Heats at High Temperatures
NASA Technical Reports Server (NTRS)
Vandersande, Jan W.; Zoltan, Andrew; Wood, Charles
1987-01-01
Flash apparatus for measuring thermal diffusivities at temperatures from 300 to 1,000 degrees C modified; measures specific heats of samples to accuracy of 4 to 5 percent. Specific heat and thermal diffusivity of sample measured. Xenon flash emits pulse of radiation, absorbed by sputtered graphite coating on sample. Sample temperature measured with thermocouple, and temperature rise due to pulse measured by InSb detector.
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Preliminary results are presented for two methods to approximate the mission performance of high specific impulse high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well known trajectory optimization code, VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.
Current Capabilities of High-Resolution Aerosol Retrievals: Algorithm MAIAC
NASA Astrophysics Data System (ADS)
Lyapustin, Alexei; Wang, Yujie
2014-05-01
Multi-Angle Implementation of Atmospheric Correction (MAIAC) is a new generation algorithm which uses time series analysis and processing of groups of pixels for advanced cloud masking and retrieval of aerosol and surface bidirectional reflectance properties. MAIAC makes aerosol retrievals from MODIS data at high 1 km resolution, providing information about fine-scale aerosol variability. This information is required in different applications such as urban air quality analysis, aerosol source identification, etc. We will describe the latest improvements in MAIAC and the new analysis of retrieval uncertainties over dark and bright surfaces. We will also give an overview of available MAIAC datasets and their validation for different AERONET DRAGON field campaigns, which present a unique spatially distributed array of in situ aerosol measurements.
Algorithms for a very high speed universal noiseless coding module
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Yeh, Pen-Shu
1991-01-01
The algorithmic definitions and performance characterizations are presented for a high performance adaptive coding module. Operation of at least one of these (single chip) implementations is expected to exceed 500 Mbits/s under laboratory conditions. A companion decoding module should operate at up to half the coder's rate. The module incorporates a powerful noiseless coder for Standard Form Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a dynamic range of 1.5 to 12-14 bits/sample (depending on the implementation).
Cryptanalyzing a chaotic encryption algorithm for highly autocorrelated data
NASA Astrophysics Data System (ADS)
Li, Ming; Liu, Shangwang; Niu, Liping; Liu, Hong
2016-12-01
Recently, a chaotic encryption algorithm for highly autocorrelated data was proposed. By adding chaotic diffusion to the former work, the information leakage of the encryption results especially for the images with lower gray scales was eliminated, and both higher-level security and fast encryption time were achieved. In this study, we analyze the security weakness of this scheme. By applying the ciphertext-only attack, the encrypted image can be restored into the substituted image except for the first block; and then, by using the chosen-plaintext attack, the S-boxes, the distribution map, and the block of chaotic map values, can all be revealed, and the encrypted image can be completely cracked. The improvement is also proposed. Experimental results verify our assertion.
A field size specific backscatter correction algorithm for accurate EPID dosimetry.
Berry, Sean L; Polvorosa, Cynthia S; Wuu, Cheng-Shie
2010-06-01
Portal dose images acquired with an amorphous silicon electronic portal imaging device (EPID) suffer from artifacts related to backscattered radiation. The backscatter signal varies as a function of field size (FS) and location on the EPID. Most current portal dosimetry algorithms fail to account for the FS dependence. The ramifications of this omission are investigated, and solutions for correcting the measured dose images for FS specific backscatter are proposed. A series of open field dose images were obtained for field sizes ranging from 2×2 to 30×40 cm². Each image was analyzed to determine the amount of backscatter present. Two methods to account for the relationship between FS and backscatter are offered: the use of discrete FS specific correction matrices and the use of a single generalized equation. The efficacy of each approach was tested on the clinical dosimetric images for ten patients, 49 treatment fields. The fields were evaluated to determine whether there was an improvement in the dosimetric result over the commercial vendor's current algorithm. It was found that backscatter manifests itself as an asymmetry in the measured signal, primarily in the inplane direction. The maximum error is approximately 3.6% for 10×10 and 12.5×12.5 cm² field sizes. The asymmetry decreased with increasing FS, to approximately 0.6% for fields larger than 30×30 cm². The dosimetric comparison between the measured and predicted dose images was significantly improved (p ≪ 0.001) when a FS specific backscatter correction was applied. The average percentage of points passing a 2%, 2 mm gamma criterion increased from 90.6% to between 96.7% and 97.2% after the proposed methods were employed. The error observed in a measured portal dose image depends on how much its FS differs from the 30×40 cm² calibration conditions. The proposed methods for correcting for FS specific backscatter effectively improved the ability of the EPID to perform dosimetric measurements.
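The first of the two proposed approaches, discrete FS-specific correction matrices, can be illustrated with a minimal sketch. This is not the paper's implementation; the field-size grid, matrix values, and the linear interpolation between measured field sizes are all illustrative assumptions:

```python
import numpy as np

def interpolate_correction(fs, fs_grid, corrections):
    """Interpolate a backscatter correction matrix for field size `fs` (cm)
    from matrices measured at the sorted field sizes in `fs_grid`.
    (Hypothetical helper; the paper's matrices are measured per FS.)"""
    fs = float(np.clip(fs, fs_grid[0], fs_grid[-1]))
    i = int(np.searchsorted(fs_grid, fs))
    if fs_grid[i] == fs:
        return corrections[i]
    lo, hi = i - 1, i
    w = (fs - fs_grid[lo]) / (fs_grid[hi] - fs_grid[lo])
    return (1 - w) * corrections[lo] + w * corrections[hi]

def correct_image(image, fs, fs_grid, corrections):
    # Divide out the FS-dependent backscatter contribution.
    return image / interpolate_correction(fs, fs_grid, corrections)
```

The single-generalized-equation approach mentioned in the abstract would replace the stored matrices with a closed-form function of FS and EPID position.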
Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard
2005-08-01
MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
Li, Ning; Cürüklü, Baran; Bastos, Joaquim; Sucasas, Victor; Fernandez, Jose Antonio Sanchez; Rodriguez, Jonathan
2017-05-04
The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, then whether the transmission power needs to be adjusted or not will be determined based on the AUV's parameters. Each AUV determines their own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power
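The core PTC idea described above, adjusting transmission power only with a probability that grows with the parameter deviation, can be sketched as follows. The linear mapping from deviation to probability is an illustrative assumption; the paper's actual probability model may differ:

```python
import random

def adjustment_probability(current, optimal, max_deviation):
    """Map the power deviation to an adjustment probability in [0, 1]:
    the larger the deviation, the higher the probability (illustrative
    linear form; the paper's exact mapping is not reproduced here)."""
    deviation = abs(current - optimal)
    return min(deviation / max_deviation, 1.0)

def maybe_adjust(current, optimal, max_deviation, rng=random):
    """Probabilistically decide whether this AUV adjusts its power."""
    if rng.random() < adjustment_probability(current, optimal, max_deviation):
        return optimal   # adjust to the optimal transmission power
    return current       # keep the current power this round
```

Compared with deterministic schemes that always adjust when current ≠ optimal, small deviations here usually trigger no adjustment, which is the claimed source of the energy savings.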
Koblavi-Dème, Stéphania; Maurice, Chantal; Yavo, Daniel; Sibailly, Toussaint S.; N′guessan, Kabran; Kamelan-Tano, Yvonne; Wiktor, Stefan Z.; Roels, Thierry H.; Chorba, Terence; Nkengasong, John N.
2001-01-01
To evaluate serologic testing algorithms for human immunodeficiency virus (HIV) based on a combination of rapid assays among persons with HIV-1 (non-B subtypes) infection, HIV-2 infection, and HIV-1–HIV-2 dual infections in Abidjan, Ivory Coast, a total of 1,216 sera with known HIV serologic status were used to evaluate the sensitivity and specificity of four rapid assays: Determine HIV-1/2, Capillus HIV-1/HIV-2, HIV-SPOT, and Genie II HIV-1/HIV-2. Two serum panels obtained from patients recently infected with HIV-1 subtypes B and non-B were also included. Based on sensitivity and specificity, three of the four rapid assays were evaluated prospectively in parallel (serum samples tested by two simultaneous rapid assays) and serial (serum samples tested by two consecutive rapid assays) testing algorithms. All assays were 100% sensitive, and specificities ranged from 99.4 to 100%. In the prospective evaluation, both the parallel and serial algorithms were 100% sensitive and specific. Our results suggest that rapid assays have high sensitivity and specificity and, when used in parallel or serial testing algorithms, yield results similar to those of enzyme-linked immunosorbent assay-based testing strategies. HIV serodiagnosis based on rapid assays may be a valuable alternative in implementing HIV prevention and surveillance programs in areas where sophisticated laboratories are difficult to establish. PMID:11325995
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique, widely used in clinical application for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated due to the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the shapes of the aperture and the beam intensities alternately. Specifically, the shapes of the aperture are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotone line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with the state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
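The beam-intensity stage uses gradient projection, whose basic step (gradient descent followed by projection onto the feasible set) can be sketched on a box-constrained toy problem. The fixed step size and the quadratic objective in the test are illustrative stand-ins; the paper additionally employs a non-monotone line search:

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box constraints lo <= x <= hi."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, x0, lo, hi, step=0.1, iters=200):
    """Minimize a smooth function over a box: alternate a gradient
    step with a projection back onto the feasible set. (Sketch with a
    fixed step; a line search would choose `step` adaptively.)"""
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x = project_box(x - step * grad(x), lo, hi)
    return x
```

In the VMAT setting the box would encode the intensity bounds, with additional constraints handled by the aperture stage.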
A Hybrid Feature Subset Selection Algorithm for Analysis of High Correlation Proteomic Data
Kordy, Hussain Montazery; Baygi, Mohammad Hossein Miran; Moradi, Mohammad Hassan
2012-01-01
Pathological changes within an organ can be reflected as proteomic patterns in biological fluids such as plasma, serum, and urine. Surface-enhanced laser desorption and ionization time-of-flight mass spectrometry (SELDI-TOF MS) has been used to generate proteomic profiles from biological fluids. Mass spectrometry yields redundant, noisy data in which most data points are irrelevant features for differentiating between cancer and normal cases. In this paper, we have proposed a hybrid feature subset selection algorithm based on maximum discrimination and minimum correlation, coupled with peak scoring criteria. Our algorithm has been applied to two independent SELDI-TOF MS datasets of ovarian cancer obtained from the NCI-FDA clinical proteomics databank. The proposed algorithm has been used to extract a set of proteins as potential biomarkers in each dataset. We applied linear discriminant analysis to identify the important biomarkers. The selected biomarkers have been able to successfully distinguish the ovarian cancer patients from the noncancer control group with an accuracy of 100%, a sensitivity of 100%, and a specificity of 100% in the two datasets. The hybrid algorithm has the advantage that it increases the reproducibility of selected biomarkers and is able to find a small set of proteins with high discrimination power. PMID:23717808
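The maximum-discrimination / minimum-correlation idea can be sketched with a greedy selector. The specific scores below (standardized class-mean difference for discrimination, absolute Pearson correlation for redundancy, and a multiplicative trade-off) are generic stand-ins, not the paper's exact criteria:

```python
import numpy as np

def select_features(X, y, k):
    """Greedily pick k columns of X: start with the most discriminative
    feature, then repeatedly add the feature with the best
    discrimination discounted by its correlation to those already
    selected. (Illustrative scores, not the paper's exact formulas.)"""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    disc = np.abs(mu0 - mu1) / (X.std(axis=0) + 1e-12)   # discrimination
    corr = np.abs(np.corrcoef(X, rowvar=False))          # redundancy
    selected = [int(np.argmax(disc))]
    while len(selected) < k:
        score = disc * (1.0 - corr[:, selected].max(axis=1))
        score[selected] = -np.inf                        # no repeats
        selected.append(int(np.argmax(score)))
    return selected
```

A feature that is a near-copy of an already-selected one gets a correlation discount near zero, so a less discriminative but independent feature can win, which is the intended behavior of minimum-correlation selection.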
High specific activity platinum-195m
Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.
2004-10-12
A new composition of matter includes ^195m Pt characterized by a specific activity of at least 30 mCi/mg Pt, generally made by a method that includes the steps of: exposing ^193 Ir to a flux of neutrons sufficient to convert a portion of the ^193 Ir to ^195m Pt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce ^195m Pt.
Algorithms for high-speed universal noiseless coding
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Yeh, Pen-Shu; Miller, Warner
1993-01-01
This paper provides the basic algorithmic definitions and performance characterizations for a high-performance adaptive noiseless (lossless) 'coding module' which is currently under separate development as single-chip microelectronic circuits at two NASA centers. Laboratory tests of one of these implementations recently demonstrated coding rates of up to 900 Mbits/s. A companion 'decoding module' can operate at up to half the coder's rate. The functionality provided by these modules should be applicable to most of NASA's science data. The hardware modules incorporate a powerful adaptive noiseless coder for 'standard form' data sources (i.e., sources whose symbols can be represented by uncorrelated nonnegative integers where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a 'dynamic range' of from 1.5 to 12-15 bits/sample (depending on the implementation). This is accomplished by adaptively choosing the best of many Huffman-equivalent codes to use on each block of 1-16 samples. Because of the extreme simplicity of these codes, no table lookups are actually required in an implementation, thus leading to the expected very high data rate capabilities already noted.
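A common concrete instance of such Huffman-equivalent codes for nonnegative integers is Rice coding, sketched below for a single block with a fixed parameter k (the adaptive hardware picks the best code per block; this simplified software sketch is illustrative and is not the flight implementation):

```python
def rice_encode(values, k):
    """Encode each nonnegative integer as unary(quotient) followed by a
    k-bit binary remainder. Small values get short codewords, matching
    'standard form' sources where small integers are most likely."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                       # unary part
        bits.append(format(r, "0{}b".format(k)) if k else "")
    return "".join(bits)

def rice_decode(bitstring, k, count):
    """Inverse of rice_encode for `count` values."""
    values, i = [], 0
    for _ in range(count):
        q = 0
        while bitstring[i] == "1":
            q, i = q + 1, i + 1
        i += 1                                           # skip the terminating 0
        r = int(bitstring[i:i + k], 2) if k else 0
        i += k
        values.append((q << k) | r)
    return values
```

Because the codeword structure is fixed by k, no code tables are needed, which is the property the abstract credits for the very high hardware data rates.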
Fast index based algorithms and software for matching position specific scoring matrices
Beckstette, Michael; Homann, Robert; Giegerich, Robert; Kurtz, Stefan
2006-01-01
Background In biological sequence analysis, position specific scoring matrices (PSSMs) are widely used to represent sequence motifs in nucleotide as well as amino acid sequences. Searching with PSSMs in complete genomes or large sequence databases is a common, but computationally expensive task. Results We present a new non-heuristic algorithm, called ESAsearch, to efficiently find matches of PSSMs in large databases. Our approach preprocesses the search space, e.g., a complete genome or a set of protein sequences, and builds an enhanced suffix array that is stored on file. This allows the searching of a database with a PSSM in sublinear expected time. Since ESAsearch benefits from small alphabets, we present a variant operating on sequences recoded according to a reduced alphabet. We also address the problem of non-comparable PSSM-scores by developing a method which allows the efficient computation of a matrix similarity threshold for a PSSM, given an E-value or a p-value. Our method is based on dynamic programming and, in contrast to other methods, it employs lazy evaluation of the dynamic programming matrix. We evaluated algorithm ESAsearch with nucleotide PSSMs and with amino acid PSSMs. Compared to the best previous methods, ESAsearch shows speedups of a factor between 17 and 275 for nucleotide PSSMs, and speedups of up to a factor of 1.8 for amino acid PSSMs. Comparisons with the most widely used programs even show speedups by a factor of at least 3.8. Alphabet reduction yields an additional speedup factor of 2 on amino acid sequences compared to results achieved with the 20 symbol standard alphabet. The lazy evaluation method is also much faster than previous methods, with speedups of a factor between 3 and 330. Conclusion Our analysis of ESAsearch reveals sublinear runtime in the expected case, and linear runtime in the worst case for sequences not shorter than |A|^m, where A is the underlying alphabet and m is the length of the PSSM.
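The baseline that index-based methods like ESAsearch accelerate is the naive O(n·m) window scan, sketched below (the matrix representation as per-position score dictionaries and the toy values are illustrative, not ESAsearch's data structures):

```python
def pssm_scan(sequence, pssm, threshold):
    """Naive PSSM search: score every length-m window of `sequence`
    and report (position, score) pairs meeting `threshold`.
    `pssm` is a list of dicts, one per motif position, mapping each
    alphabet symbol to its score."""
    m = len(pssm)
    hits = []
    for i in range(len(sequence) - m + 1):
        score = sum(pssm[j][sequence[i + j]] for j in range(m))
        if score >= threshold:
            hits.append((i, score))
    return hits
```

ESAsearch avoids rescoring overlapping windows from scratch by walking an enhanced suffix array of the preprocessed database, which is where the reported 17-275x speedups come from.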
Clinical algorithm for malaria during low and high transmission seasons
Muhe, L.; Oljira, B.; Degefu, H.; Enquesellassie, F.; Weber, M.
1999-01-01
OBJECTIVES: To assess the proportion of children with febrile disease who suffer from malaria and to identify clinical signs and symptoms that predict malaria during low and high transmission seasons. STUDY DESIGN: 2490 children aged 2 to 59 months presenting to a health centre in rural Ethiopia with fever had their history documented and the following investigations: clinical examination, diagnosis, haemoglobin measurement, and a blood smear for malaria parasites. Clinical findings were related to the presence of malaria parasitaemia. RESULTS: Malaria contributed to 5.9% of all febrile cases from January to April and to 30.3% during the rest of the year. Prediction of malaria was improved by simple combinations of a few signs and symptoms. Fever with a history of previous malarial attack or absence of cough or a finding of pallor gave a sensitivity of 83% in the high risk season and 75% in the low risk season, with corresponding specificities of 51% and 60%; fever with a previous malaria attack or pallor or splenomegaly had sensitivities of 80% and 69% and specificities of 65% and 81% in high and low risk settings, respectively. CONCLUSION: Better clinical definitions are possible for low malaria settings when microscopic examination cannot be done. Health workers should be trained to detect pallor and splenomegaly because these two signs improve the specificity for malaria. PMID:10451393
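The reported sensitivities and specificities come from evaluating a sign-combination rule against the blood-smear result. A minimal sketch of that evaluation (the field names and the test data are invented for illustration, not the study's records):

```python
def rule(p):
    """One of the combinations above: fever with a previous malaria
    attack, pallor, or splenomegaly. (Hypothetical field names.)"""
    return p["fever"] and (p["prev_attack"] or p["pallor"] or p["splenomegaly"])

def sens_spec(patients, rule):
    """Sensitivity and specificity of `rule` against parasitaemia."""
    tp = sum(1 for p in patients if rule(p) and p["parasitaemia"])
    fn = sum(1 for p in patients if not rule(p) and p["parasitaemia"])
    tn = sum(1 for p in patients if not rule(p) and not p["parasitaemia"])
    fp = sum(1 for p in patients if rule(p) and not p["parasitaemia"])
    return tp / (tp + fn), tn / (tn + fp)
```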
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
ERIC Educational Resources Information Center
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
Blakey, Tara; Melesse, Assefa; Sukop, Michael C.; Tachiev, Georgio; Whitman, Dean; Miralles-Wilhelm, Fernando
2016-01-01
This study evaluated the ability to improve Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) chl-a retrieval from optically shallow coastal waters by applying algorithms specific to the pixels’ benthic class. The form of the Ocean Color (OC) algorithm was assumed for this study. The operational atmospheric correction producing Level 2 SeaWiFS data was retained since the focus of this study was on establishing the benefit from the alternative specification of the bio-optical algorithm. Benthic class was determined through satellite image-based classification methods. Accuracy of the chl-a algorithms evaluated was determined through comparison with coincident in situ measurements of chl-a. The regionally-tuned models that were allowed to vary by benthic class produced more accurate estimates of chl-a than the single, unified regionally-tuned model. Mean absolute percent difference was approximately 70% for the regionally-tuned, benthic class-specific algorithms. Evaluation of the residuals indicated the potential for further improvement to chl-a estimation through finer characterization of benthic environments. Atmospheric correction procedures specialized to coastal environments were recognized as areas for future improvement as these procedures would improve both classification and algorithm tuning. PMID:27775626
High Density Jet Fuel Supply and Specifications
1986-01-01
same shortcomings. Perhaps different LAK blends using heavy reformate or heavy cat cracker naphtha (both high in aromatics and isoparaffins) could... catalytic cracking (FCC) process. Subsequent investigations funded by the U. S. Air Force concentrated on producing a similar fuel from the...cut (19% overhead) and adding heavy naphtha (320-440F) from a nearby paraffinic crude (40"API Wyoming Sweet) an excellent JP-8X can be created. Table 5
Heuristic-based scheduling algorithm for high level synthesis
NASA Technical Reports Server (NTRS)
Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye
1992-01-01
A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The schedule time of this algorithm is almost independent of the length of the mobilities of operators, as can be seen from the benchmark example presented (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
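The point-wise classification setup can be sketched with one of the three evaluated learners, a random forest. The toy features and labels below are invented stand-ins (the paper derives its features from mean-flow quantities and its labels from DNS/LES comparisons); scikit-learn is assumed as the implementation library:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy per-point features (e.g. stand-ins for normalized strain-rate and
# pressure-gradient measures); label 1 = a RANS modeling assumption is
# judged broken at that point.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Train the forest and flag each mesh point as high/low uncertainty.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
flags = clf.predict(X)
```

In practice, one classifier would be trained per assumption (eddy-viscosity isotropy, Boussinesq linearity, non-negativity), and applied to points from flows outside the training database.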
A correction to a highly accurate Voigt function algorithm
NASA Technical Reports Server (NTRS)
Shippony, Z.; Read, W. G.
2002-01-01
An algorithm for rapidly computing the complex Voigt function was published by Shippony and Read. Its claimed accuracy was 1 part in 10^8. It was brought to our attention by Wells that the Shippony and Read algorithm was not meeting its claimed accuracy for extremely small but non-zero y values. Although true, the fix to the code is so trivial that this brief note suffices for those who use this algorithm.
NASA Astrophysics Data System (ADS)
An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu
2016-07-01
This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using the Bézier curve and a B-spline. Numerical simulation based on a Reynolds averaged Navier-Stokes (RANS) turbulence model is done in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while initial samples are selected according to an orthogonal array. Then global Pareto-optimal solutions are obtained and analysed. The results show that unexpected flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over
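The O(N^2) direct method that multipole algorithms such as FMA and MDMA are designed to avoid is simply the all-pairs sum. A self-contained sketch of the direct 1/r case, in arbitrary units, for reference:

```python
import math

def direct_potential_energy(positions, charges):
    """Direct O(N^2) evaluation of the pairwise Coulomb-like energy
    sum over i < j of q_i * q_j / r_ij. Multipole methods approximate
    the contribution of distant groups of particles instead of
    touching every pair."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx))
            energy += charges[i] * charges[j] / r
    return energy
```

For N in the millions, the N(N-1)/2 pair evaluations here are the cost that makes the O(N) multipole approach necessary.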
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate the accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of the satellite-based atmospheric correction algorithm only.
A high-performance FFT algorithm for vector supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.
1988-01-01
Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77-percent faster than Cray's assembly-coded library routine.
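The stride problem Bailey describes is what self-sorting FFT variants avoid: in a Stockham autosort FFT, every inner loop reads and writes contiguous blocks, so no power-of-two memory stride ever appears. A pure-Python radix-2 sketch of that structure (illustrative only, not the paper's CRAY-2 implementation; n must be a power of two):

```python
import cmath

def stockham_fft(x):
    """Radix-2 Stockham autosort FFT. Ping-pongs between two buffers;
    every innermost access (the q loop) is unit-stride and contiguous."""
    a = [complex(v) for v in x]
    n = len(a)
    b = [0j] * n
    m, s = n, 1          # invariant: m * s == n
    while m > 1:
        half = m // 2
        for p in range(half):
            w = cmath.exp(-2j * cmath.pi * p / m)   # twiddle factor
            in0, in1, out = s * p, s * (p + half), 2 * s * p
            for q in range(s):                       # contiguous block copy
                c0 = a[in0 + q]
                c1 = a[in1 + q]
                b[out + q] = c0 + c1
                b[out + s + q] = (c0 - c1) * w
        a, b = b, a
        m, s = half, s * 2
    return a
```

Because the output of each pass lands in natural order, no bit-reversal permutation (itself a strided access pattern) is needed at the end.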
High-Performance Algorithm for Solving the Diagnosis Problem
NASA Technical Reports Server (NTRS)
Fijany, Amir; Vatan, Farrokh
2009-01-01
An improved method of model-based diagnosis of a complex engineering system is embodied in an algorithm that involves considerably less computation than do prior such algorithms. This method and algorithm are based largely on developments reported in several NASA Tech Briefs articles: The Complexity of the Diagnosis Problem (NPO-30315), Vol. 26, No. 4 (April 2002), page 20; Fast Algorithms for Model-Based Diagnosis (NPO-30582), Vol. 29, No. 3 (March 2005), page 69; Two Methods of Efficient Solution of the Hitting-Set Problem (NPO-30584), Vol. 29, No. 3 (March 2005), page 73; and Efficient Model-Based Diagnosis Engine (NPO-40544), on the following page. Some background information from the cited articles is prerequisite to a meaningful summary of the innovative aspects of the present method and algorithm. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD. Diagnosis, the task of finding faulty components, is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. The calculation of a minimal diagnosis is inherently a hard problem, the solution of which requires amounts of computation time and memory that increase exponentially with the number of components of the engineering system. Among the developments to reduce the computational burden, as reported in the cited articles, is the mapping of the diagnosis problem onto the integer-programming (IP) problem. This mapping makes it possible to utilize a variety of algorithms developed previously
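The reduction the abstract relies on can be made concrete: each observed inconsistency yields a conflict set of components, and a minimal diagnosis is a minimum-cardinality hitting set of those conflicts. A brute-force sketch of that formulation (component names invented for illustration; the exponential enumeration below is exactly the cost the NASA algorithm's IP mapping is designed to avoid):

```python
from itertools import combinations

def minimal_diagnosis(conflicts):
    """Return a smallest set of components that intersects every conflict set.
    Tries candidate sets in order of increasing size, so the first hit
    is guaranteed to be minimum-cardinality."""
    components = sorted(set().union(*conflicts))
    for size in range(1, len(components) + 1):
        for candidate in combinations(components, size):
            if all(set(candidate) & conflict for conflict in conflicts):
                return set(candidate)
    return set()
```

For example, with conflicts {valve, pump}, {pump, sensor}, and {sensor, valve}, no single component explains everything, so the minimal diagnosis must contain two faulty components.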
Marcello, Javier; Eugenio, Francisco; Perdomo, Ulises; Medina, Anabella
2016-01-01
The precise mapping of vegetation covers in semi-arid areas is a complex task as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology to focus on this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the influence of parameterization on the final results of the correction, the aerosol model and its optical thickness being important parameters to be properly adjusted. The effects of the corrections were studied in vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of the atmospheric correction in the vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations related to the
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve the measurement precision, the internal parameters of the camera should be accurately calibrated. Accordingly, a high-accuracy camera calibration algorithm is proposed based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated based on the existing planar target in the vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.
Stride search: A general algorithm for storm detection in high resolution climate data
Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda
2015-09-08
This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
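The latitude behavior that distinguishes Stride Search from a uniform grid-point sweep can be sketched directly: if search sectors are to have roughly uniform physical size, the longitudinal stride must widen as cos(latitude) shrinks, so polar rows need only a handful of sectors. A simplified sketch of that sector layout (parameter names and the exact spacing rule are illustrative assumptions, not the paper's specification):

```python
import math

def sector_centers(stride_deg):
    """Search-sector centers with roughly uniform physical spacing.
    The longitude stride grows as 1/cos(lat), capped at a full circle,
    so the number of sectors per row shrinks toward the poles."""
    rows = []
    lat = -90.0 + stride_deg / 2.0
    while lat < 90.0:
        lon_stride = min(360.0, stride_deg / math.cos(math.radians(lat)))
        n_lon = max(1, int(360.0 / lon_stride))
        rows.append([(lat, k * 360.0 / n_lon) for k in range(n_lon)])
        lat += stride_deg
    return rows  # one list of (lat, lon) centers per latitude row
```

A fixed-stride grid-point search would place the same number of points on every latitude row, oversampling the poles; here the total sector count stays well below that of the equivalent uniform grid.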
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the length of the secondary path estimate and of the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages, such as large block delay, quantization error due to the computation of large size transforms, and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, where the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is substantially reduced compared with the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both of the proposed partitioned block ANC algorithms to show their accuracy compared with the time domain FXLMS algorithm.
A new adaptive GMRES algorithm for achieving high accuracy
Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.
1996-12-31
GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k) which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
High School Educational Specifications: Facilities Planning Standards. Edition I.
ERIC Educational Resources Information Center
Jefferson County School District R-1, Denver, CO.
The Jefferson County School District (Colorado) has developed a manual of high school specifications for Design Advisory Groups and consultants to use for planning and designing the district's high school facilities. The specifications are provided to help build facilities that best meet the educational needs of the students to be served.…
A high-speed algorithm for computation of fractional differentiation and fractional integration.
Fukunaga, Masataka; Shimizu, Nobuyuki
2013-05-13
A high-speed algorithm for computing fractional differentiations and fractional integrations in fractional differential equations is proposed. In this algorithm, the stored data are not the function to be differentiated or integrated but the weighted integrals of the function. The intervals of integration for the memory can be increased without loss of accuracy as the computing time-step n increases. The computing cost varies as O(n log n), as opposed to the O(n^2) of standard algorithms.
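For context, the baseline this paper accelerates is the classic history-sum form of fractional differentiation, in which every new time step re-reads the entire stored function history (hence O(n^2) total cost). A sketch of that standard Grünwald-Letnikov scheme (this is the conventional baseline, not the authors' fast weighted-integral algorithm):

```python
def gl_fractional_derivative(f_samples, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha on a uniform
    grid with spacing h. The binomial weights obey the one-term recurrence
    w_k = w_{k-1} * (k - 1 - alpha) / k. Total cost is O(n^2) because each
    output point sums over the whole history."""
    w = [1.0]
    for k in range(1, len(f_samples)):
        w.append(w[-1] * (k - 1 - alpha) / k)
    out = []
    for n in range(len(f_samples)):
        acc = sum(w[k] * f_samples[n - k] for k in range(n + 1))
        out.append(acc / h**alpha)
    return out
```

Sanity checks: for alpha = 1 the weights collapse to a first-order backward difference, and for alpha = 0 the operator is the identity, which makes the scheme easy to verify before moving to genuinely fractional orders.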
Nikolova, Mila; Steidl, Gabriele
2014-07-16
Color image enhancement is a complex and challenging task in digital imaging with abundant applications. Preserving the hue of the input image is crucial in a wide range of situations. We propose simple image enhancement algorithms which conserve the hue and preserve the range (gamut) of the R, G, B channels in an optimal way. In our setup, the intensity input image is transformed into a target intensity image whose histogram matches a specified, well-behaved histogram. We derive a new color assignment methodology where the resulting enhanced image fits the target intensity image. We analyse the obtained algorithms in terms of chromaticity improvement and compare them with the unique and quite popular histogram based hue and range preserving algorithm of Naik and Murthy. Numerical tests confirm our theoretical results and show that our algorithms perform much better than the Naik-Murthy algorithm. In spite of their simplicity, they compete with well-established alternative methods for images where hue-preservation is desired.
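The Naik-Murthy baseline the authors compare against is compact enough to sketch: when the target intensity is lower, scale the R, G, B channels multiplicatively; when it is higher, apply the same multiplicative scaling to the channel complements, so no channel can leave the gamut. The following is a sketch of my understanding of that baseline (not the authors' new algorithms), per pixel:

```python
def naik_murthy_enhance(pixel, target_intensity, max_val=255.0):
    """Hue- and range-preserving intensity remap of one RGB pixel.
    pixel: (r, g, b); target_intensity: desired (r+g+b)/3 after enhancement.
    Darkening scales the channels; brightening scales their complements."""
    r, g, b = pixel
    intensity = (r + g + b) / 3.0
    if intensity == 0:
        return (target_intensity,) * 3
    alpha = target_intensity / intensity
    if alpha <= 1.0:                      # darkening: plain scaling stays in gamut
        return tuple(alpha * c for c in pixel)
    # brightening: scale complements, so channels cannot exceed max_val
    comp_alpha = (max_val - target_intensity) / (max_val - intensity)
    return tuple(max_val - comp_alpha * (max_val - c) for c in pixel)
```

In both branches the output pixel hits the target intensity exactly while every channel stays inside [0, max_val], which is the gamut property the abstract refers to.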
ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs
Leonid Kunyansky, PhD
2008-11-26
The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.
Wang, Hui; Xu, Lei; Fan, Zhanming; Liang, Junfu; Yan, Zixu; Sun, Zhonghua
2017-01-01
The aim of this study was to evaluate the workflow efficiency of a new automatic coronary-specific reconstruction technique (Smart Phase, GE Healthcare-SP) for selection of the best cardiac phase with least coronary motion when compared with expert manual selection (MS) of best phase in patients with high heart rate. A total of 46 patients with heart rates above 75 bpm who underwent single beat coronary computed tomography angiography (CCTA) were enrolled in this study. CCTA of all subjects were performed on a 256-detector row CT scanner (Revolution CT, GE Healthcare, Waukesha, Wisconsin, US). With the SP technique, the acquired phase range was automatically searched in 2% phase intervals during the reconstruction process to determine the optimal phase for coronary assessment, while for routine expert MS, reconstructions were performed at 5% intervals and a best phase was manually determined. The reconstruction and review times were recorded to measure the workflow efficiency for each method. Two reviewers subjectively assessed image quality for each coronary artery in the MS and SP reconstruction volumes using a 4-point grading scale. The average HR of the enrolled patients was 91.1±19.0 bpm. A total of 204 vessels were assessed. The subjective image quality using SP was comparable to that of MS (1.45±0.85 vs 1.43±0.81, respectively; p = 0.88). The average time was 246 seconds for the manual best phase selection, and 98 seconds for the SP selection, resulting in an average time saving of 148 seconds (60%) with use of the SP algorithm. The coronary specific automatic cardiac best phase selection technique (Smart Phase) improves clinical workflow in high heart rate patients and provides image quality comparable with manual cardiac best phase selection. Reconstruction of single-beat CCTA exams with SP can benefit users less experienced in CCTA image interpretation. PMID:28231322
Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms.
Huang, Fang; Hartwich, Tobias M P; Rivera-Molina, Felix E; Lin, Yu; Duim, Whitney C; Long, Jane J; Uchil, Pradeep D; Myers, Jordan R; Baird, Michelle A; Mothes, Walther; Davidson, Michael W; Toomre, Derek; Bewersdorf, Joerg
2013-07-01
Newly developed scientific complementary metal-oxide semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition, enlarge the field of view and increase the effective quantum efficiency in single-molecule switching nanoscopy. However, sCMOS-intrinsic pixel-dependent readout noise substantially lowers the localization precision and introduces localization artifacts. We present algorithms that overcome these limitations and that provide unbiased, precise localization of single molecules at the theoretical limit. Using these in combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at rates of up to 32 reconstructed images per second in fixed and living cells.
NASA Technical Reports Server (NTRS)
Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.
1990-01-01
A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.
SU-E-T-305: Study of the Eclipse Electron Monte Carlo Algorithm for Patient Specific MU Calculations
Wang, X; Qi, S; Agazaryan, N; DeMarco, J
2014-06-01
Purpose: To evaluate the Eclipse electron Monte Carlo (eMC) algorithm based on patient specific monitor unit (MU) calculations, and to propose a new factor which quantitatively predicts the discrepancy of MUs between the eMC algorithm and hand calculations. Methods: Electron treatments were planned for 61 patients on Eclipse (Version 10.0) using the eMC algorithm for Varian TrueBeam linear accelerators. For each patient, the same treatment beam angle was kept for a point dose calculation at dmax performed with the reference condition, which used an open beam with a 15×15 cm2 size cone and 100 cm SSD. A patient specific correction factor (PCF) was obtained by taking the ratio between this point dose and the calibration dose, which is 1 cGy per MU delivered at dmax. The hand calculation results were corrected by the PCFs and compared with MUs from the treatment plans. Results: The MUs from the treatment plans were on average (7.1±6.1)% higher than the hand calculations. The average MU difference between the corrected hand calculations and the eMC treatment plans was (0.07±3.48)%. A correlation coefficient of 0.8 was found between (1-PCF) and the percentage difference between the treatment plan and hand calculations. Most outliers were treatment plans with small beam openings (< 4 cm) and low energy beams (6 and 9 MeV). Conclusion: For CT-based patient treatment plans, the eMC algorithm tends to generate a larger MU than hand calculations. Caution should be taken for eMC patient plans with small field sizes and low energy beams. We hypothesize that the PCF ratio reflects the influence of patient surface curvature and tissue inhomogeneity on the patient specific percent depth dose (PDD) curve and MU calculations in the eMC algorithm.
Improvement of the prediction accuracy of a high speed algorithm.
NASA Astrophysics Data System (ADS)
Bojkov, V. F.; Makhonin, G. N.; Testov, A. V.; Khutorovskij, Z. N.; Shogin, A. N.
Methods to improve the prediction accuracy for two classes of orbits (e < 0.05 and e ≍ 0.7), taking into account disturbances from tesseral harmonics in the algorithm, are presented. For the first class of orbits, a speed of 0.004 sec on the "Elbrus" computer and a positioning accuracy of about 200 m are achieved by the traditional expansion of the Hamiltonian function in a Fourier series in the mean anomaly. For the second class, a more compact second expansion of the Hamiltonian function in the true anomaly is suggested.
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.
GPU-Based Tracking Algorithms for the ATLAS High-Level Trigger
NASA Astrophysics Data System (ADS)
Emeliyanov, D.; Howard, J.
2012-12-01
Results on the performance and viability of data-parallel algorithms on Graphics Processing Units (GPUs) in the ATLAS Level 2 trigger system are presented. We describe the existing trigger data preparation and track reconstruction algorithms, motivation for their optimization, GPU-parallelized versions of these algorithms, and a “client-server” solution for hybrid CPU/GPU event processing used for integration of the GPU-oriented algorithms into existing ATLAS trigger software. The resulting speed-up of event processing times obtained with high-luminosity simulated data is presented and discussed.
High Throughput Light Absorber Discovery, Part 1: An Algorithm for Automated Tauc Analysis.
Suram, Santosh K; Newhouse, Paul F; Gregoire, John M
2016-11-14
High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. The applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by automated algorithm for 60 optical spectra.
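The manual analysis the algorithm automates can be sketched: for a direct-allowed transition, plot (αhν)² against photon energy hν, fit the linear absorption edge, and read the band gap off the x-intercept. A simplified sketch on synthetic data (the published algorithm additionally mimics the expert's selection of the linear segment and handles indirect gaps via a different exponent; those steps are omitted here):

```python
def tauc_band_gap(energies, alphas, r=2):
    """Estimate a band gap by linear extrapolation of (alpha * E)^r vs E.
    r=2 corresponds to a direct-allowed transition. Fits all points with
    non-negligible absorption; the real algorithm selects the linear
    segment of the edge automatically."""
    pts = [(e, (a * e) ** r) for e, a in zip(energies, alphas) if a > 1e-12]
    n = len(pts)
    mx = sum(e for e, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = sum((e - mx) * (y - my) for e, y in pts) / sum((e - mx) ** 2 for e, _ in pts)
    intercept = my - slope * mx
    return -intercept / slope   # x-intercept of the fitted edge
```

On a synthetic direct-gap spectrum, where (αhν)² is exactly linear above the gap, the extrapolation recovers the gap energy to machine precision; real spectra require the segment-selection logic the paper describes.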
Amarenco, Gérard; Chartier-Kastler, Emmanuel; Denys, Pierre; Jean, Jacques Labat; de Sèze, Marianne; Lubetzski, Catherine
2013-12-01
Urinary disorders that lead to urological complications are frequent in multiple sclerosis, resulting in diminished quality of life. Urinary management guidelines are scarce and targeted to neuro-urology specialists. This study aimed to construct and validate an algorithm dedicated to neurologists and general practitioners to facilitate first-line evaluation and treatment of urinary disorders associated with multiple sclerosis. 49 items concerning urological symptom evaluation and therapeutic strategies were derived from literature analysis and evaluated by an expert panel. The Delphi method established consensus between the experts and allowed development of the First-Line Urological Evaluation in Multiple Sclerosis (FLUE-MS) algorithm. Two questions from the Urinary Bothersome Questionnaire in Multiple Sclerosis were included and their validation to verify comprehensiveness and acceptability was also conducted. Three rounds of expert review obtained consensus of all 49 items and allowed finalisation of the algorithm. Comprehension and acceptability of two Urinary Bothersome Questionnaire in Multiple Sclerosis questions were verified (mean comprehensiveness score: 1.99/2 [99.7% total comprehensiveness], mean acceptability score: 1.99/2 [99.1% complete acceptability]). The FLUE-MS algorithm was designed for neurologists and general practitioners, enabling identification of 'red flags', timely patient referral to specialist neuro-urology units, and appropriate first-line therapy.
Grain detection from 2d and 3d EBSD data: specification of the MTEX algorithm.
Bachmann, Florian; Hielscher, Ralf; Schaeben, Helmut
2011-12-01
We present a fast and versatile algorithm for the reconstruction of the grain structure from 2d and 3d Electron Back Scatter Diffraction (EBSD) data. The algorithm is rigorously derived from the modeling assumption that grain boundaries are located at the bisectors of adjacent measurement locations. This modeling assumption immediately implies that grains are composed of Voronoi cells corresponding to the measurement locations. Thus our algorithm is based on the Voronoi decomposition of the 2d or 3d measurement domain. It applies to any geometrical configuration of measurement locations and allows for missing data due to measurement errors. The definition of grains as compositions of Voronoi cells implies another fundamental feature of the proposed algorithm: its invariance with respect to spatial displacements, i.e., rotations or shifts of the specimen. This paper also serves as a reference paper for the texture analysis software MTEX, which is a comprehensive and versatile, freely available MATLAB toolbox that covers a wide range of problems in quantitative texture analysis, including the analysis of EBSD data.
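On a regular measurement grid the Voronoi cells are simply pixels, and the bisector criterion reduces to comparing each point with its 4-neighbors. A union-find sketch of that special case (MTEX handles arbitrary point sets, true 3d Voronoi adjacency, and full crystallographic misorientations; here a scalar stands in for orientation):

```python
def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def grains_on_grid(orientations, threshold):
    """Count grains on a 2d grid: 4-neighbors whose (scalar) misorientation
    is below threshold are merged into the same grain."""
    rows, cols = len(orientations), len(orientations[0])
    parent = list(range(rows * cols))
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    if abs(orientations[i][j] - orientations[ni][nj]) < threshold:
                        parent[find(parent, i * cols + j)] = find(parent, ni * cols + nj)
    return len({find(parent, k) for k in range(rows * cols)})
```

Raising the threshold merges neighboring regions, which mirrors how the grain count in real EBSD analysis depends on the chosen misorientation tolerance.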
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
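The caching idea described above is easy to state in code: find the gene positions on which the whole population agrees, freeze them, and spend crossover and mutation effort only on the remaining positions. A sketch of the detection step (function names are invented for illustration; the paper applies the idea to TSP tours inside a full GA loop):

```python
def frozen_genes(population):
    """Positions (and values) where every individual in the population agrees.
    Per the paper's observation, such genes are likely to survive to the
    final solution, so later generations can skip recomputing them."""
    first = population[0]
    return {
        pos: gene
        for pos, gene in enumerate(first)
        if all(ind[pos] == gene for ind in population[1:])
    }

def free_positions(population):
    """Complement of the frozen set: the only positions evolution still touches."""
    locked = frozen_genes(population)
    return [p for p in range(len(population[0])) if p not in locked]
```

As the population converges, the frozen set grows and the effective search space shrinks, which is where the reported computation-time savings come from.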
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves to individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance
Fenrich, Keith K; Zhao, Ethan Y; Wei, Yuan; Garg, Anirudh; Rose, P Ken
2014-04-15
Isolating specific cellular and tissue compartments from 3D image stacks for quantitative distribution analysis is crucial for understanding cellular and tissue physiology under normal and pathological conditions. Current approaches are limited because they are designed to map the distributions of synapses onto the dendrites of stained neurons and/or require specific proprietary software packages for their implementation. To overcome these obstacles, we developed algorithms to Grow and Shrink Volumes of Interest (GSVI) to isolate specific cellular and tissue compartments from 3D image stacks for quantitative analysis and incorporated these algorithms into a user-friendly computer program that is open source and downloadable at no cost. The GSVI algorithm was used to isolate perivascular regions in the cortex of live animals and cell membrane regions of stained spinal motoneurons in histological sections. We tracked the real-time, intravital biodistribution of injected fluorophores with sub-cellular resolution from the vascular lumen to the perivascular and parenchymal space following a vascular microlesion, and mapped the precise distributions of membrane-associated KCC2 and gephyrin immunolabeling in dendritic and somatic regions of spinal motoneurons. Compared to existing approaches, the GSVI approach is specifically designed for isolating perivascular regions and membrane-associated regions for quantitative analysis, is user-friendly, and free. The GSVI algorithm is useful to quantify regional differences of stained biomarkers (e.g., cell membrane-associated channels) in relation to cell functions, and the effects of therapeutic strategies on the redistributions of biomolecules, drugs, and cells in diseased or injured tissues. Copyright © 2014 Elsevier B.V. All rights reserved.
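A minimal 2-D sketch of the grow step and shell extraction behind a GSVI-style analysis. The published tool operates on 3-D image stacks; the function names, 4-connectivity, and 2-D restriction here are illustrative assumptions.

```python
def grow(mask, iterations=1):
    """Dilate a binary mask by one 4-connected shell per iteration."""
    rows, cols = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [row[:] for row in mask]
        for r in range(rows):
            for c in range(cols):
                if mask[r][c]:
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
        mask = out
    return mask

def shell(mask, grow_by):
    """Region covered by the grown mask but not the original, e.g. a
    'perivascular' region surrounding a segmented vessel lumen."""
    big = grow(mask, grow_by)
    return [[big[r][c] and not mask[r][c] for c in range(len(mask[0]))]
            for r in range(len(mask))]
```

Shrinking (erosion) is the dual operation and would isolate, for example, a membrane-associated band just inside a cell boundary.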
Yu, Amy Y X; Quan, Hude; McRae, Andrew; Wagner, Gabrielle O; Hill, Michael D; Coutts, Shelagh B
2017-09-18
Validation of administrative data case definitions is key for accurate passive surveillance of disease. Transient ischemic attack (TIA) is a condition primarily managed in the emergency department. However, prior validation studies have focused on data after inpatient hospitalization. We aimed to determine the validity of the Canadian 10th International Classification of Diseases (ICD-10-CA) codes for TIA in the national ambulatory administrative database. We performed a diagnostic accuracy study of four ICD-10-CA case definition algorithms for TIA in the emergency department setting. The study population was obtained from two ongoing studies on the diagnosis of TIA and minor stroke versus stroke mimic using serum biomarkers and neuroimaging. Two reference standards were used: 1) the emergency department clinical diagnosis determined by chart abstractors and 2) the 90-day final diagnosis, both obtained by stroke neurologists, to calculate the sensitivity, specificity, positive and negative predictive values (PPV and NPV) of the ICD-10-CA algorithms for TIA. Among 417 patients, emergency department adjudication showed 163 (39.1%) TIA, 155 (37.2%) ischemic strokes, and 99 (23.7%) stroke mimics. The most restrictive algorithm, defined as a TIA code in the main position, had the lowest sensitivity (36.8%), but highest specificity (92.5%) and PPV (76.0%). The most inclusive algorithm, defined as a TIA code in any position with and without query prefix, had the highest sensitivity (63.8%), but lowest specificity (81.5%) and PPV (68.9%). Sensitivity, specificity, PPV, and NPV were overall lower when using the 90-day diagnosis as reference standard. Emergency department administrative data reflect diagnosis of suspected TIA with high specificity, but underestimate the burden of disease. Future studies are necessary to understand the reasons for the low to moderate sensitivity.
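The four validity metrics reported above follow directly from the 2x2 table of case-definition flags against the reference standard; a minimal sketch:

```python
def diagnostic_accuracy(code_flags, reference_flags):
    """Sensitivity, specificity, PPV, and NPV of an administrative case
    definition (1 = code present) against a reference standard (1 = true
    case), computed from paired per-patient flags."""
    pairs = list(zip(code_flags, reference_flags))
    tp = sum(1 for c, r in pairs if c and r)
    fp = sum(1 for c, r in pairs if c and not r)
    fn = sum(1 for c, r in pairs if not c and r)
    tn = sum(1 for c, r in pairs if not c and not r)
    return {
        "sensitivity": tp / (tp + fn),   # cases the codes detect
        "specificity": tn / (tn + fp),   # non-cases correctly excluded
        "ppv": tp / (tp + fp),           # coded cases that are real
        "npv": tn / (tn + fn),           # uncoded patients truly negative
    }
```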
High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm
NASA Astrophysics Data System (ADS)
Alibart, Fabien; Gao, Ligang; Hoskins, Brian D.; Strukov, Dmitri B.
2012-02-01
Using memristive properties common for titanium dioxide thin film devices, we designed a simple write algorithm to tune device conductance at a specific bias point to 1% relative accuracy (which is roughly equivalent to seven-bit precision) within its dynamic range even in the presence of large variations in switching behavior. The high precision state is nonvolatile and the results are likely to be sustained for nanoscale memristive devices because of the inherent filamentary nature of the resistive switching. The proposed functionality of memristive devices is especially attractive for analog computing with low precision data. As one representative example we demonstrate hybrid circuitry consisting of an integrated circuit summing amplifier and two memristive devices to perform the analog multiply-and-add (dot-product) computation, which is a typical bottleneck operation in information processing.
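A write-verify loop in the spirit of the adaptable tuning algorithm described above. The abstract does not specify the pulse scheme; the alternating-polarity pulses, amplitude back-off on overshoot, and the linear device model used in testing are all illustrative assumptions.

```python
def tune(read, pulse, target, tol=0.01, max_steps=200):
    """Write-verify loop: apply SET/RESET pulses, re-reading conductance
    after each, until read() is within tol (1% by default) of target.
    When the error changes sign (overshoot), the pulse amplitude is
    halved, tolerating large device-to-device switching variation."""
    amp = 1.0
    last_sign = 0
    for _ in range(max_steps):
        g = read()
        err = (g - target) / target
        if abs(err) <= tol:
            return g
        sign = -1 if err > 0 else 1      # decrease or increase conductance
        if last_sign != 0 and sign != last_sign:
            amp *= 0.5                   # overshoot: back off pulse amplitude
        pulse(sign * amp)
        last_sign = sign
    return read()
```

The loop never assumes a fixed pulse-to-conductance response, which is the point: only the measured error drives the next pulse.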
NASA Technical Reports Server (NTRS)
Williams, Craig Hamilton
1995-01-01
A simple, analytic approximation is derived to calculate trip time and performance for propulsion systems of very high specific impulse (50,000 to 200,000 seconds) and very high specific power (10 to 1000 kW/kg) for human interplanetary space missions. The approach assumed field-free space, constant thrust/constant specific power, and near straight line (radial) trajectories between the planets. Closed form, one dimensional equations of motion for two-burn rendezvous and four-burn round trip missions are derived as a function of specific impulse, specific power, and propellant mass ratio. The equations are coupled to an optimizing parameter that maximizes performance and minimizes trip time. Data generated for hypothetical one-way and round trip human missions to Jupiter were found to be within 1% and 6% accuracy of integrated solutions respectively, verifying that for these systems, credible analysis does not require computationally intensive numerical techniques.
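The field-free, straight-line assumption admits a closed-form check. The sketch below assumes a symmetric two-burn accelerate/decelerate profile and the standard jet-power relation P = F·ve/2; it is a simplification of the paper's equations, not a reproduction of them.

```python
import math

def two_burn_trip_time(distance_m, accel_m_s2):
    """Field-free, straight-line two-burn rendezvous: accelerate over the
    first half of the distance, decelerate over the second (d = a t^2 / 2)."""
    t_half = math.sqrt(2.0 * (distance_m / 2.0) / accel_m_s2)
    return 2.0 * t_half

def accel_from_specific_power(alpha_kw_per_kg, isp_s, mass_fraction=1.0):
    """Thrust acceleration implied by specific power alpha and specific
    impulse Isp, using jet power P = F * ve / 2 with ve = Isp * g0."""
    g0 = 9.80665                          # m/s^2
    ve = isp_s * g0                       # exhaust velocity
    p_w_per_kg = alpha_kw_per_kg * 1000.0
    return 2.0 * p_w_per_kg * mass_fraction / ve   # F/m = 2 P / (m ve)
```

For alpha = 100 kW/kg and Isp = 100,000 s (mid-range of the regimes quoted above), a roughly Earth-Jupiter distance of 6e11 m gives a one-way time on the order of a few weeks, consistent with the very fast missions these systems target.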
Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm
NASA Astrophysics Data System (ADS)
Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini
2015-03-01
Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully-automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid multi-atlas registration, active contours and knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta and aortic root are located by multi-atlas registration followed by active contours refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications from these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001, volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30, volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
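The manual scoring steps named above (thresholding, connected-component labeling, density weighting) can be sketched in 2-D. The 130 HU threshold and the 1-4 density weights are the standard Agatston definitions; the flood-fill structure and single-slice restriction are simplifications for illustration.

```python
def agatston(image, pixel_area_mm2=1.0, threshold=130):
    """Toy Agatston score on a 2-D HU image: threshold at 130 HU, label
    4-connected lesions, weight each lesion's area by its peak density
    (130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4)."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    total = 0.0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                stack, lesion = [(r, c)], []
                seen[r][c] = True
                while stack:                      # flood fill one lesion
                    y, x = stack.pop()
                    lesion.append(image[y][x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < rows and 0 <= xx < cols
                                and image[yy][xx] >= threshold
                                and not seen[yy][xx]):
                            seen[yy][xx] = True
                            stack.append((yy, xx))
                peak = max(lesion)
                weight = 4 if peak >= 400 else peak // 100
                total += len(lesion) * pixel_area_mm2 * weight
    return total
```

The automated pipeline in the paper wraps exactly this kind of lesion scoring inside atlas-based localization and per-artery territory separation.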
Al-Rajab, Murad; Lu, Joan; Xu, Qiang
2017-07-01
This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection method (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed the other classification algorithms, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Flight times and deliverable masses for electric and fusion propulsion systems are difficult to approximate. Numerical integration is required for these continuous thrust systems. Many scientists are not equipped with the tools and expertise to conduct interplanetary and interstellar trajectory analysis for their concepts. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. An analytical method derived in the companion paper was also evaluated. The accuracy of this method is discussed in the paper.
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
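The shared-memory parallelization of the k-means assignment step can be sketched with a thread pool. This is a structural sketch only: the paper's Java implementation uses transactional-memory design principles and achieves true multi-core scaling, which CPython threads (limited by the GIL) do not provide.

```python
from concurrent.futures import ThreadPoolExecutor

def assign_chunk(points, centroids):
    """Nearest-centroid label for each point in a chunk."""
    labels = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels

def parallel_kmeans_step(points, centroids, workers=4):
    """One k-means iteration: the assignment step is split across worker
    threads sharing the same data, then centroids are recomputed serially."""
    chunk = max(1, len(points) // workers)
    chunks = [points[i:i + chunk] for i in range(0, len(points), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda ch: assign_chunk(ch, centroids), chunks)
        labels = [lab for part in parts for lab in part]
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, lab in zip(points, labels):
        counts[lab] += 1
        for d in range(dim):
            sums[lab][d] += p[d]
    new_centroids = [[s / counts[i] for s in sums[i]] if counts[i] else list(centroids[i])
                     for i in range(k)]
    return labels, new_centroids
```

Because the assignment step dominates the cost and is embarrassingly parallel, it is the natural target for the multi-core split described above.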
Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications
2006-03-01
Ting Liu. Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications. Technical Report CMU-CS-06-124, School of Computer Science, Carnegie Mellon University, March 2006.
Statistical classification techniques in high energy physics (SDDT algorithm)
NASA Astrophysics Data System (ADS)
Bouř, Petr; Kůs, Václav; Franc, Jiří
2016-08-01
We present our proposal of the supervised binary divergence decision tree with nested separation method based on the generalized linear models. A key insight we provide is the clustering driven only by a few selected physical variables. The proper selection consists of the variables achieving the maximal divergence measure between two different classes. Further, we apply our method to Monte Carlo simulations of physics processes corresponding to a data sample of top quark-antiquark pair candidate events in the lepton+jets decay channel. The data sample is produced in pp̅ collisions at √s = 1.96 TeV. It corresponds to an integrated luminosity of 9.7 fb-1 recorded with the D0 detector during Run II of the Fermilab Tevatron Collider. The efficiency of our algorithm achieves 90% AUC in separating signal from background. We also briefly deal with the modification of statistical tests applicable to weighted data sets in order to test homogeneity of the Monte Carlo simulations and measured data. The justification of these modified tests is proposed through the divergence tests.
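Selecting the variables with maximal divergence between two classes, as described above, can be sketched with a symmetrised Kullback-Leibler measure over class histograms. The binning scheme and the specific divergence are assumptions; the paper's generalized-divergence family is not reproduced here.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def histogram(values, bins, lo, hi):
    """Normalised equal-width histogram of values over [lo, hi]."""
    h = [0.0] * bins
    for v in values:
        i = min(bins - 1, int((v - lo) / (hi - lo) * bins))
        h[i] += 1
    n = sum(h)
    return [c / n for c in h]

def rank_variables(signal, background, bins=10):
    """Rank variables by symmetrised KL divergence between the signal and
    background class histograms; the top variables drive the separation."""
    scores = []
    for var in range(len(signal[0])):
        s = [row[var] for row in signal]
        b = [row[var] for row in background]
        lo, hi = min(s + b), max(s + b)
        ps, pb = histogram(s, bins, lo, hi), histogram(b, bins, lo, hi)
        scores.append((kl_divergence(ps, pb) + kl_divergence(pb, ps), var))
    return sorted(scores, reverse=True)
```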
NASA Astrophysics Data System (ADS)
Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.
2016-04-01
Hyperspectral imaging is a powerful tool in remote sensing and has been used for many applications such as mineral detection, landmine detection, and target detection. Major issues in target detection using HSI include spectral variability, noise, small target size, huge data dimensions, high computation cost, and complex backgrounds. Many popular detection algorithms do not work well for difficult targets, such as small or camouflaged ones, and may produce high false-alarm rates. Target/background discrimination is therefore a key issue, and analyzing a target's behaviour in realistic environments is crucial for the accurate interpretation of hyperspectral imagery. Using standard spectral libraries to study a target's spectral behaviour has the limitation that the library targets are measured under environmental conditions different from those of the application. This study instead uses spectral data of the same targets collected along with the HSI image. This paper analyses target spectra so that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) was used to identify the spectral ranges that reduce the data, and its efficacy for improving target detection was then verified. The ANN results propose discriminating band ranges for the targets; these ranges were further used to perform target detection with four popular spectral-matching target detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ANN-suggested ranges over the full spectrum for detecting the desired targets. In addition, a comparative assessment of the algorithms was also performed using ROC curves.
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
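The inverse power plus Gauss-Seidel idea can be sketched for an ordinary (rather than generalized) eigenvalue problem on a small dense matrix. The sweep and iteration counts are arbitrary; a finite element discretisation would supply a large sparse generalized problem instead.

```python
import math

def gauss_seidel(A, b, sweeps=50):
    """Solve A x = b by Gauss-Seidel sweeps (A assumed diagonally dominant)."""
    n = len(A)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

def inverse_power(A, iters=50):
    """Smallest-magnitude eigenvalue of A via inverse power iteration,
    with each inner linear solve done by Gauss-Seidel."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = gauss_seidel(A, v)                      # w ~ A^{-1} v
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
        # Rayleigh quotient estimate of the eigenvalue
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(Av[i] * v[i] for i in range(n))
    return lam, v
```

To reach high-lying eigenvalues as in the paper, the iteration would be shifted, i.e. applied to A - sigma*I for a target shift sigma.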
New Mexico High School Competency Examination. Domain Specifications.
ERIC Educational Resources Information Center
New Mexico State Dept. of Education, Santa Fe. Assessment and Evaluation Unit.
The State Department of Education is releasing the domain specifications of the New Mexico High School Competency Examination (NMHSCE) for educator use because the Spring 1996 administration of the test will be in a different form. The NMHSCE is a high school graduation requirement in New Mexico. It assesses students' competence in writing,…
NASA Astrophysics Data System (ADS)
Guan, Xiaowei; Guo, Lixin; Liu, Zhongyu
2015-10-01
A novel ray tracing algorithm for high-speed propagation prediction in multi-room indoor environments is proposed in this paper, whose theoretical foundations are geometrical optics (GO) and the uniform theory of diffraction (UTD). Taking the geometrical and electromagnetic information of the complex indoor scene into account, some acceleration techniques are adopted to raise the efficiency of the ray tracing algorithm. The simulation results indicate that the runtime of the ray tracing algorithm will sharply increase when the number of the objects in multi-room buildings is large enough. Therefore, GPU acceleration technology is used to solve that problem. Finally, a typical multi-room indoor environment with several objects in each room is simulated by using the serial ray tracing algorithm and the parallel one respectively. It can be found easily from the results that compared with the serial algorithm, the GPU-based one can achieve greater efficiency.
A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm
Chen, Jui-Le; Yang, Chu-Sing
2013-01-01
The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although the computer science technologies can be used to reduce the costs of the pharmaceutical research, the computation time of the structure-based protein-ligand docking prediction is still unsatisfied until now. Hence, in this paper, a novel docking prediction algorithm, named fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate the docking prediction algorithm. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specially for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
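The first stage of the pipeline above, a single-level DWT splitting an image into four sub-bands, can be sketched with the Haar wavelet. The abstract does not state which wavelet family is used; Haar is chosen here purely for brevity, and the unnormalised averaging form is an implementation choice.

```python
def haar_dwt2(img):
    """One-level 2-D Haar DWT: split an even-sized grayscale image into
    LL (approximation), HL, LH, and HH (detail) sub-bands."""
    rows, cols = len(img), len(img[0])
    LL, HL, LH, HH = [], [], [], []
    for r in range(0, rows, 2):
        ll_row, hl_row, lh_row, hh_row = [], [], [], []
        for c in range(0, cols, 2):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            ll_row.append((a + b + d + e) / 4.0)   # average: coarse image
            hl_row.append((a - b + d - e) / 4.0)   # horizontal detail
            lh_row.append((a + b - d - e) / 4.0)   # vertical detail
            hh_row.append((a - b - d + e) / 4.0)   # diagonal detail
        LL.append(ll_row); HL.append(hl_row); LH.append(lh_row); HH.append(hh_row)
    return LL, HL, LH, HH
```

In the algorithm described above, the LL band would then be passed to a DCT and a second DWT level, while the detail bands go to the Minimize-Matrix-Size step.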
Preparation of Labeled Aflatoxins with High Specific Activities
Hsieh, D. P. H.; Mateles, R. I.
1971-01-01
Resting cells of Aspergillus parasiticus ATCC 15517 were used to prepare highly labeled aflatoxins from labeled acetate. High synthetic activity in growing cells was evidenced only during 40 to 70 hr of incubation. Glucose was required for high incorporation efficiency, whereas the concentration of the labeled acetate determined the specific activity of the product. When labeled acetate was continuously added to maintain a concentration near but not exceeding 10 mm, in a culture containing 30 g of glucose per liter, 2% of its labels could be recovered in the purified aflatoxins which have a specific activity more than three times that of the labeled acetate. PMID:4329435
NASA Astrophysics Data System (ADS)
Popov, Pavel P.; Wang, Haifeng; Pope, Stephen B.
2015-08-01
We investigate the coupling between the two components of a Large Eddy Simulation/Probability Density Function (LES/PDF) algorithm for the simulation of turbulent reacting flows. In such an algorithm, the Large Eddy Simulation (LES) component provides a solution to the hydrodynamic equations, whereas the Lagrangian Monte Carlo Probability Density Function (PDF) component solves for the PDF of chemical compositions. Special attention is paid to the transfer of specific volume information from the PDF to the LES code: the specific volume field contains probabilistic noise due to the nature of the Monte Carlo PDF solution, and thus the use of the specific volume field in the LES pressure solver needs careful treatment. Using a test flow based on the Sandia/Sydney Bluff Body Flame, we determine the optimal strategy for specific volume feedback. Then, the overall second-order convergence of the entire LES/PDF procedure is verified using a simple vortex ring test case, with special attention being given to bias errors due to the number of particles per LES Finite Volume (FV) cell.
Algorithms and architectures for high performance analysis of semantic graphs.
Hendrickson, Bruce Alan
2005-09-01
analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.
McRae, Andrew D; Innes, Grant; Graham, Michelle; Lang, Eddy; Andruchow, James E; Yang, Hong; Ji, Yunqi; Vatanpour, Shabnam; Southern, Danielle A; Wang, Dongmei; Seiden-Long, Isolde; DeKoning, Lawrence; Kavsak, Peter
2017-08-01
Symptoms of acute coronary syndrome account for a large proportion of emergency department (ED) visits and hospitalizations. High-sensitivity troponin can rapidly rule out or rule in acute myocardial infarction (AMI) within a short time of ED arrival. We sought to validate test characteristics and classification performance of 2-hour high-sensitivity troponin T (hsTnT) algorithms for the rapid diagnosis of AMI. We included consecutive patients from 4 academic EDs with suspected cardiac chest pain who had hsTnT assays performed 2 hours apart (± 30 minutes) as part of routine care. The primary outcome was AMI at 7 days. Secondary outcomes included major adverse cardiac events (mortality, AMI, and revascularization). Test characteristics and classification performance for multiple 2-hour algorithms were quantified. Seven hundred twenty-two patients met inclusion criteria. Seven-day AMI incidence was 10.9% and major adverse cardiac event incidence was 13.7%. A 2-hour rule-out algorithm proposed by Reichlin and colleagues ruled out AMI in 59.4% of patients with 98.7% sensitivity and 99.8% negative predictive value (NPV). The 2-hour rule-out algorithm proposed by the United Kingdom National Institute for Health and Care Excellence ruled out AMI in 50.3% of patients with similar sensitivity and NPV. Other exploratory algorithms had similar sensitivity but marginally better classification performance. According to Reichlin et al., the 2-hour rule-in algorithm ruled in AMI in 16.5% of patients with 92.4% specificity and 58.5% positive predictive value. Two-hour hsTnT algorithms can rule out AMI with very high sensitivity and NPV. The algorithm developed by Reichlin et al. had superior classification performance. Reichlin and colleagues' 2-hour rule-in algorithm had poor positive predictive value and might not be suitable for early rule-in decision-making. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
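The structure of a two-sample rule-out/rule-in algorithm can be sketched as a pair of threshold tests on the baseline value and the 2-hour change. The cut-offs below are illustrative placeholders only, not the validated values from the algorithms evaluated in the study.

```python
def classify_2h_hstnt(t0_ng_l, t2_ng_l, rule_out_max=12.0, delta_max=3.0,
                      rule_in_min=52.0, delta_min=10.0):
    """Two-sample troponin triage in the spirit of a 2-hour algorithm:
    low baseline plus small delta rules out AMI, a high value or large
    rise rules in, and everything else falls in an observation zone.
    All thresholds are illustrative assumptions, not clinical cut-offs."""
    delta = abs(t2_ng_l - t0_ng_l)
    if t0_ng_l < rule_out_max and delta < delta_max:
        return "rule-out"
    if t0_ng_l >= rule_in_min or t2_ng_l >= rule_in_min or delta >= delta_min:
        return "rule-in"
    return "observe"
```

The trade-off measured above lives entirely in these thresholds: tightening the rule-out bounds raises sensitivity and NPV at the cost of classifying fewer patients early.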
ParaDock: a flexible non-specific DNA--rigid protein docking algorithm.
Banitt, Itamar; Wolfson, Haim J
2011-11-01
Accurate prediction of protein-DNA complexes could provide an important stepping stone towards a thorough comprehension of vital intracellular processes. Few attempts were made to tackle this issue, focusing on binding patch prediction, protein function classification and distance constraints-based docking. We introduce ParaDock: a novel ab initio protein-DNA docking algorithm. ParaDock combines short DNA fragments, which have been rigidly docked to the protein based on geometric complementarity, to create bent planar DNA molecules of arbitrary sequence. Our algorithm was tested on the bound and unbound targets of a protein-DNA benchmark comprised of 47 complexes. With neither addressing protein flexibility, nor applying any refinement procedure, CAPRI acceptable solutions were obtained among the 10 top ranked hypotheses in 83% of the bound complexes, and 70% of the unbound. Without requiring prior knowledge of DNA length and sequence, and within <2 h per target on a standard 2.0 GHz single processor CPU, ParaDock offers a fast ab initio docking solution.
NASA Astrophysics Data System (ADS)
Baran, I.; Kuhn, M.; Claessens, S. J.; Featherstone, W. E.; Holmes, S. A.; Vaníček, P.
2006-04-01
A synthetic [simulated] Earth gravity model (SEGM) of the geoid, gravity and topography has been constructed over Australia specifically for validating regional gravimetric geoid determination theories, techniques and computer software. This regional high-resolution (1-arc-min by 1-arc-min) Australian SEGM (AusSEGM) is a combined source and effect model. The long-wavelength effect part (up to and including spherical harmonic degree and order 360) is taken from an assumed errorless EGM96 global geopotential model. Using forward modelling via numerical Newtonian integration, the short-wavelength source part is computed from a high-resolution (3-arc-sec by 3-arc-sec) synthetic digital elevation model (SDEM), which is a fractal surface based on the GLOBE v1 DEM. All topographic masses are modelled with a constant mass-density of 2,670 kg/m3. Based on these input data, gravity values on the synthetic topography (on a grid and at arbitrarily distributed discrete points) and consistent geoidal heights at regular 1-arc-min geographical grid nodes have been computed. The precision of the synthetic gravity and geoid data (after a first iteration) is estimated to be better than 30 μGal and 3 mm, respectively, which reduces to 1 μGal and 1 mm after a second iteration. The second iteration accounts for the changes in the geoid due to the superposed synthetic topographic mass distribution. The first iteration of AusSEGM is compared with Australian gravity and GPS-levelling data to verify that it gives a realistic representation of the Earth's gravity field. As a by-product of this comparison, AusSEGM gives further evidence of the north-south-trending error in the Australian Height Datum. The freely available AusSEGM-derived gravity and SDEM data, included as Electronic Supplementary Material (ESM) with this paper, can be used to compute a geoid model that, if correct, will agree to within 3 mm with the AusSEGM geoidal heights, thus offering independent verification of theories.
Advanced Tribological Coatings for High Specific Strength Alloys
1989-09-29
Samples include: Hard Anodised 4 HSSA12 (SHT); Plasma Nitrided 1 HSSA13 (H&G); Plasma Nitrided 2 HSSA14 (SHT); High Temperature Nitrocarburized 1 HSSA15 (H&G); Nitrox 1… HSSA26 (High Temperature Plasma Nitriding) has recently arrived and is currently undergoing metallographic examination. The remaining samples are still… Report No 3789/607, Advanced Tribological Coatings for High Specific Strength Alloys, R&D 5876-MS-01, Contract DAJA45-87-C-0044, 5th Interim Report.
Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena
2009-01-30
couple these elements using Finite-Volume-like surface Riemann solvers. This hybrid, dual-layer design allows DGTD to combine advantages from both of…
Rana, Suresh B.
2013-01-01
Purpose: It is well known that photon beam radiation therapy requires dose calculation algorithms. The objective of this study was to measure and assess the ability of the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) to predict doses beyond a high-density heterogeneity. Materials and Methods: An inhomogeneous phantom of five layers was created in the Eclipse planning system (version 8.6.15). The layers of the phantom were assigned as water (first, or top), air (second), water (third), bone (fourth), and water (fifth, or bottom). Depth doses in water (bottom medium) were calculated for 100 monitor units (MUs) with a 6 MV photon beam for different field sizes using AAA and PBC with heterogeneity correction. Combinations of solid water, polyvinyl chloride (PVC), and Styrofoam were then assembled to mimic the phantom, and doses for 100 MUs were acquired with a cylindrical ionization chamber at selected depths beyond the high-density heterogeneity interface. The measured and calculated depth doses were then compared. Results: AAA's values had better agreement with measurements at all measured depths. Dose overestimation by AAA (up to 5.3%) and by PBC (up to 6.7%) was higher in proximity to the high-density heterogeneity interface, and the dose discrepancies were more pronounced for larger field sizes. The errors in dose estimation by AAA and PBC may be due to improper beam modeling of primary beam attenuation, lateral scatter contributions, or both in heterogeneous media that include low- and high-density materials. Conclusions: AAA is more accurate than PBC for dose calculations in treating deep-seated tumors beyond a high-density heterogeneity interface. PMID:24455541
Automatic mission planning algorithms for aerial collection of imaging-specific tasks
NASA Astrophysics Data System (ADS)
Sponagle, Paul; Salvaggio, Carl
2017-05-01
The rapid advancement and availability of small unmanned aircraft systems (sUAS) has led to many novel exploitation tasks that utilize this unique aerial imagery data. Collection of these data requires novel flight planning to accomplish the task at hand. This work describes flight planning that better supports structure-from-motion missions by minimizing occlusions, autonomous and periodic overflight of reflectance calibration panels to permit more efficient and accurate data collection under varying illumination conditions, and the collection of imagery data to study optical properties such as the bidirectional reflectance distribution function without disturbing the target in sensitive or remote areas of interest. These mission planning algorithms will provide scientists with additional tools to meet their future data collection needs.
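As a generic illustration of aerial coverage planning (a boustrophedon, or "lawnmower", pattern, not the occlusion-aware planner described above), a waypoint generator might look like:

```python
def serpentine_waypoints(width, height, spacing, altitude):
    """Boustrophedon coverage pattern over a rectangular survey area.

    Returns (x, y, z) waypoints; every other pass is flown in reverse
    so the aircraft never dead-heads back to the starting edge."""
    waypoints = []
    x, direction = 0.0, 1
    while x <= width:
        leg = [(x, 0.0, altitude), (x, height, altitude)]
        if direction < 0:
            leg.reverse()          # alternate pass direction
        waypoints.extend(leg)
        x += spacing
        direction = -direction
    return waypoints

# 100 m x 50 m area, 25 m line spacing, 40 m altitude -> 5 passes.
wps = serpentine_waypoints(width=100.0, height=50.0, spacing=25.0, altitude=40.0)
```

Line spacing would normally be derived from the camera footprint and the desired image overlap; here it is simply a parameter.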
Wang, Xiaogang; Zhao, Daomu
2012-05-21
A double-image encryption technique based on an asymmetric algorithm is proposed. In this method, the encryption process differs from the decryption process, and the encrypting keys differ from the decrypting keys. In the nonlinear encryption process, the images are encoded into an amplitude ciphertext, and two phase-only masks (POMs) generated by phase truncation are kept as keys for decryption. By using the classical double random phase encoding (DRPE) system, the primary images can be collected by an intensity detector located at the output plane. Three random POMs applied in the asymmetric encryption can be safely used as public keys. Simulation results are presented to demonstrate the validity and security of the proposed protocol.
Phase-unwrapping algorithm for images with high noise content based on a local histogram
NASA Astrophysics Data System (ADS)
Meneses, Jaime; Gharbi, Tijani; Humbert, Philippe
2005-03-01
We present a robust algorithm of phase unwrapping that was designed for use on phase images with high noise content. We proceed with the algorithm by first identifying regions with continuous phase values placed between fringe boundaries in an image and then phase shifting the regions with respect to one another by multiples of 2pi to unwrap the phase. Image pixels are segmented between interfringe and fringe boundary areas by use of a local histogram of a wrapped phase. The algorithm has been used successfully to unwrap phase images generated in a three-dimensional shape measurement for noninvasive quantification of human skin structure in dermatology, cosmetology, and plastic surgery.
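The core congruence step, shifting by a multiple of 2π whenever neighboring samples jump by more than π, can be illustrated in 1-D (the paper's algorithm is 2-D and region-based, using a local histogram of the wrapped phase):

```python
import math

def unwrap(phases):
    """1-D phase unwrapping: add multiples of 2*pi so that successive
    samples never jump by more than pi (same idea as numpy.unwrap)."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi   # wrapped upward: shift back down
        elif d < -math.pi:
            offset += 2 * math.pi   # wrapped downward: shift back up
        out.append(cur + offset)
    return out

# A linear phase ramp wrapped into (-pi, pi] is recovered exactly.
true_phase = [0.5 * i for i in range(20)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap(wrapped)
```

Noise is what breaks this naive neighbor-to-neighbor rule, which is why the paper segments the image into interfringe regions first and shifts whole regions instead of single pixels.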
Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing
NASA Astrophysics Data System (ADS)
Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.
2007-05-01
The Kaleidoscope Project harnesses the computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and help solve exploration problems in a highly cost-effective way. Uniquely, the project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux clusters.
A high-resolution algorithm for wave number estimation using holographic array processing
NASA Astrophysics Data System (ADS)
Roux, Philippe; Cassereau, Didier; Roux, André
2004-03-01
This paper presents an original way to perform wave number inversion from simulated data obtained in a noisy shallow-water environment. In the studied configuration an acoustic source is horizontally towed with respect to a vertical hydrophone array. The inversion is achieved from the combination of three ingredients. First, a modified version of the Prony algorithm is presented and numerical comparison is made to another high-resolution wave number inversion algorithm based on the matrix-pencil technique. Second, knowing that these high-resolution algorithms are classically sensitive to noise, the use of a holographic array processing enables improvement of the signal-to-noise ratio before the inversion is performed. Last, particular care is taken in the representations of the solutions in the wave number space to improve resolution without suffering from aliasing. The dependence of this wave number inversion algorithm on the relevant parameters of the problem is discussed.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity with reduced computation cost, a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
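A hottest-request-first, first-fit assignment of the kind described can be sketched as follows. The request format and field names are illustrative, and each request's path is assumed to have been precomputed by a shortest-path routine:

```python
def first_fit_rwa(requests, num_wavelengths):
    """Greedy RWA sketch: process requests hottest-first (highest demand
    intensity, then longest path) and assign the lowest-index wavelength
    that is free on every link of the request's precomputed path."""
    used = {}                      # link -> set of occupied wavelengths
    assignment, blocked = {}, []
    order = sorted(requests, key=lambda r: (-r["intensity"], -len(r["path"])))
    for req in order:
        for w in range(num_wavelengths):
            if all(w not in used.setdefault(link, set()) for link in req["path"]):
                for link in req["path"]:
                    used[link].add(w)          # reserve wavelength on path
                assignment[req["id"]] = w
                break
        else:
            blocked.append(req["id"])          # no continuous wavelength free
    return assignment, blocked

reqs = [
    {"id": "a", "intensity": 3, "path": [("1", "2"), ("2", "3")]},
    {"id": "b", "intensity": 1, "path": [("2", "3")]},
    {"id": "c", "intensity": 2, "path": [("1", "2")]},
]
assign, blocked = first_fit_rwa(reqs, num_wavelengths=2)
```

The wavelength-continuity constraint (same wavelength on every hop) is what makes the processing order matter, which is the trade-off the paper studies.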
Adaptive Beamforming Algorithms for High Resolution Microwave Imaging
1991-04-01
Water pouring over the radomes covering some of the receivers could induce random phase shifts to the waves passing through. Again, dynamic self… water tower, the power plant building, and two connected storage silos. The microwave image of the white blocked area in the upper figure is shown… applying the MSA to the best set of three range bins; it reveals the two high stacks (A and B), the water tower (C), and the two connected silos (D), and the…
High specific energy and specific power aluminum/air battery for micro air vehicles
NASA Astrophysics Data System (ADS)
Kindler, A.; Matthies, L.
2014-06-01
Micro air vehicles developed under the Army's Micro Autonomous Systems and Technology program generally need a specific energy of 300-550 watt-hrs/kg and a specific power of 300-550 watts/kg to operate for about 1 hour. At present, no commercial cell can fulfill this need. The best available commercial technology is the lithium-ion battery or its derivative, the Li-polymer cell. This chemistry generally provides around 15 minutes of flying time. One alternative to the state of the art is the Al/air cell, a primary battery that is actually half fuel cell. It has a battery-like high-energy aluminum anode and a fuel-cell-like air electrode that extracts oxygen from the ambient air rather than carrying it. Both of these features contribute to a high specific energy (watt-hrs/kg). High specific power (watts/kg) is supported by a high-concentration KOH electrolyte, a high-quality commercial air electrode, and forced air convection from the vehicle's rotors. The performance of a cell with these attributes is projected to be 500 watt-hrs/kg and 500 watts/kg based on a simple model. It is expected to support a flying time of approximately 1 hour in any vehicle for which the usual limit is 15 minutes.
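The sizing arithmetic behind the projected endurance is straightforward. In the sketch below, the 500 Wh/kg and 500 W/kg figures are the projections quoted above, while the 60 g pack mass and 25 W power draw are hypothetical vehicle numbers:

```python
def flight_time_hours(battery_mass_kg, specific_energy_wh_per_kg,
                      specific_power_w_per_kg, power_draw_w):
    """Endurance from pack mass and chemistry figures of merit.

    The pack stores mass * specific_energy watt-hours and must also be
    able to deliver the vehicle's power draw continuously."""
    if battery_mass_kg * specific_power_w_per_kg < power_draw_w:
        raise ValueError("pack cannot sustain the required power draw")
    return battery_mass_kg * specific_energy_wh_per_kg / power_draw_w

# Hypothetical 60 g Al/air pack powering a 25 W micro air vehicle.
t = flight_time_hours(0.060, 500.0, 500.0, 25.0)   # -> 1.2 hours
```

A Li-polymer pack of the same mass at roughly 150 Wh/kg would give about a third of that endurance, consistent with the 15-minute-class flight times mentioned above.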
2015-01-01
Background: Epigenetic modifications are essential for controlling gene expression. Recent studies have shown that not only single epigenetic modifications but also combinations of multiple epigenetic modifications play vital roles in gene regulation. A striking example is the long hypomethylated regions enriched with the H3K27me3 modification (called "K27HMD" regions), which serve to suppress the expression of key developmental genes relevant to cellular development and differentiation during embryonic stages in vertebrates. It is thus a biologically important issue to develop an effective optimization algorithm for detecting long DNA regions (e.g., >4 kbp in size) that harbor a specific combination of epigenetic modifications (e.g., K27HMD regions). However, to date, optimization algorithms for these purposes have received little attention, and available methods are still heuristic and ad hoc. Results: In this paper, we propose a linear-time algorithm for calculating a set of non-overlapping regions that maximizes the sum of similarities between the vector of focal epigenetic states and the vectors of raw epigenetic states at DNA positions in the set of regions. The average elapsed time to process the epigenetic data of any of the human chromosomes was less than 2 seconds on an Intel Xeon CPU. To demonstrate the effectiveness of the algorithm, we estimated large K27HMD regions in the medaka and human genomes using our method, ChromHMM, and a heuristic method. Conclusions: We confirmed the advantages of our method over the two other methods. Our method is flexible enough to handle other types of epigenetic combinations. The program that implements the method is called "CSMinfinder" and is made available at: http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/Segmentation/ PMID:25708947
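The objective has the flavor of choosing non-overlapping, minimum-length intervals that maximize a summed similarity score, which admits a linear-time dynamic program over prefix sums. The sketch below illustrates that generic formulation; it is not the published CSMinfinder implementation:

```python
def best_disjoint_regions_score(scores, min_len):
    """Linear-time DP: maximum total score of non-overlapping regions,
    each at least min_len positions long.

    dp[i] = best score using the first i positions; a region ending at i
    contributes dp[j] + prefix[i] - prefix[j] for some j <= i - min_len,
    so we carry the running maximum of dp[j] - prefix[j]."""
    n = len(scores)
    prefix = [0.0]
    for s in scores:
        prefix.append(prefix[-1] + s)
    dp = [0.0] * (n + 1)
    best_open = float("-inf")        # max over eligible j of dp[j] - prefix[j]
    for i in range(1, n + 1):
        j = i - min_len
        if j >= 0:
            best_open = max(best_open, dp[j] - prefix[j])
        dp[i] = max(dp[i - 1], prefix[i] + best_open)
    return dp[n]

# Positive where the observed epigenetic state matches the target
# combination (e.g., hypomethylation plus H3K27me3), negative elsewhere.
scores = [1, 1, -3, 1, 1, 1, -1, 2]
```

With `min_len=3`, the optimum for this toy score vector is the single region spanning the last five positions (score 4); without a minimum length the problem degenerates to picking every positive position on its own.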
Noncovalent functionalization of carbon nanotubes for highly specific electronic biosensors
NASA Astrophysics Data System (ADS)
Chen, Robert J.; Bangsaruntip, Sarunya; Drouvalakis, Katerina A.; Wong Shi Kam, Nadine; Shim, Moonsub; Li, Yiming; Kim, Woong; Utz, Paul J.; Dai, Hongjie
2003-04-01
Novel nanomaterials for bioassay applications represent a rapidly progressing field of nanotechnology and nanobiotechnology. Here, we present an exploration of single-walled carbon nanotubes as a platform for investigating surface-protein and protein-protein binding and developing highly specific electronic biomolecule detectors. Nonspecific binding on nanotubes, a phenomenon found with a wide range of proteins, is overcome by immobilization of polyethylene oxide chains. A general approach is then advanced to enable the selective recognition and binding of target proteins by conjugation of their specific receptors to polyethylene oxide-functionalized nanotubes. This scheme, combined with the sensitivity of nanotube electronic devices, enables highly specific electronic sensors for detecting clinically important biomolecules such as antibodies associated with human autoimmune diseases.
NASA Astrophysics Data System (ADS)
Li, Hao; He, Xianqiang; Bai, Yan; Chen, Xiaoyan; Gong, Fang; Zhu, Qiankun; Hu, Zifeng
2016-10-01
Numerous empirical algorithms have been operationally used to retrieve the global ocean chlorophyll-a concentration (Chla) from ocean color satellite data, e.g., the OC4V4 algorithm for SeaWiFS and OC3M for MODIS. However, these algorithms have been established and validated using in situ data measured mainly under low to moderate solar zenith angles (<70°). Currently, with the development of geostationary satellite ocean color remote sensing, which observes from early morning to late afternoon, it is necessary to know whether the empirical Chla algorithms can be applied at high solar zenith angles. In this study, the performances of seven widely used Chla algorithms under high solar zenith angles, i.e., the OC2, OC3M, OC3V, OC4V4, CLARK, OCI, and YOC algorithms, were evaluated using the NOMAD global in situ ocean color dataset. The results showed that the performances of all seven algorithms decreased significantly under high solar zenith angles as compared to those under low to moderate solar zenith angles. For instance, for the OC4V4 algorithm, the relative percent difference (RPD) and root-mean-square error (RMSE) were 13.78% and 1.66 μg/l for the whole dataset, and 3.95% and 1.49 μg/l for solar zenith angles ranging from 30° to 40°, respectively. However, the RPD and RMSE increased to 30.45% and 6.10 μg/l for solar zenith angles larger than 70°.
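Empirical Chla algorithms of the OC family share a fourth-order polynomial band-ratio form. The sketch below uses the commonly cited OC4V4 coefficients; treat them as illustrative and verify against the official NASA OBPG tables before any real use:

```python
import math

# Chla = 10 ** (a0 + a1*R + a2*R**2 + a3*R**3 + a4*R**4),
# where R = log10 of the maximum blue-to-green band ratio.
OC4V4 = (0.366, -3.067, 1.930, 0.649, -1.532)

def chla_oc4(max_band_ratio, coeffs=OC4V4):
    """Band-ratio chlorophyll-a estimate in mg/m^3 (equivalently ug/l)."""
    r = math.log10(max_band_ratio)
    poly = sum(a * r ** k for k, a in enumerate(coeffs))
    return 10.0 ** poly

blue_green = chla_oc4(1.0)   # ratio of 1 -> moderately productive water
```

The evaluation above is essentially asking whether this fixed polynomial, fitted to low-zenith-angle data, still maps band ratios to Chla correctly when the reflectances themselves were acquired at solar zenith angles above 70°.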
Experiences with the hydraulic design of the high specific speed Francis turbine
NASA Astrophysics Data System (ADS)
Obrovsky, J.; Zouhar, J.
2014-03-01
The high specific speed Francis turbine is still a suitable alternative for the refurbishment of older hydro power plants with lower heads and worse cavitation conditions. The paper introduces the design process of such a turbine together with a comparison of results from homologous model tests performed in the hydraulic laboratory of ČKD Blansko Engineering. The turbine runner was designed using an optimization algorithm and considering the high specific speed hydraulic profile. This means that the hydraulic profiles of the spiral case, the distributor and the draft tube were taken from a Kaplan turbine. The optimization was run as an automatic cycle and was based on a simplex optimization method as well as on a genetic algorithm. The number of blades is shown to be the parameter that changes the resulting specific speed of the turbine between ns=425 and 455, together with the cavitation characteristics. Minimizing cavitation on the blade surface as well as on the inlet edge of the runner blade was taken into account during the design process. The results of the CFD analyses as well as the model tests are presented in the paper.
A Ratio Test of Interrater Agreement with High Specificity
ERIC Educational Resources Information Center
Cousineau, Denis; Laurencelle, Louis
2015-01-01
Existing tests of interrater agreements have high statistical power; however, they lack specificity. If the ratings of the two raters do not show agreement but are not random, the current tests, some of which are based on Cohen's kappa, will often reject the null hypothesis, leading to the wrong conclusion that agreement is present. A new test of…
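For reference, the kappa statistic that the existing tests build on (the standard Cohen's kappa, not the authors' new ratio test) is computed as:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table of two raters'
    counts: (observed agreement - chance agreement) / (1 - chance)."""
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    chance = sum(
        sum(table[i]) * sum(row[i] for row in table) for i in range(len(table))
    ) / n ** 2
    return (observed - chance) / (1.0 - chance)

# Two raters, two categories; strong but imperfect agreement.
kappa = cohens_kappa([[20, 5], [10, 15]])   # -> 0.4
```

The specificity problem described above is that a significantly nonzero kappa only rules out random rating, not the presence of some systematic, non-agreeing pattern.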
The evolutionary development of high specific impulse electric thruster technology
NASA Technical Reports Server (NTRS)
Sovey, James S.; Hamley, John A.; Patterson, Michael J.; Rawlin, Vincent K.; Myers, Roger M.
1992-01-01
Electric propulsion flight and technology demonstrations conducted primarily by Europe, Japan, China, the U.S., and the USSR are reviewed. Evolutionary mission applications for high specific impulse electric thruster systems are discussed, and the status of arcjet, ion, and magnetoplasmadynamic thrusters and associated power processor technologies is summarized.
Lee, Ye Rin; Kim, Young Ae; Park, So Youn; Oh, Chang Mo; Kim, Young Eun; Oh, In Hwan
2016-11-01
Years of life lost (YLLs) are estimated based on mortality and cause of death (CoD); therefore, it is necessary to accurately calculate CoD to estimate the burden of disease. The garbage code algorithm was developed by the Global Burden of Disease (GBD) Study to redistribute inaccurate CoD and enhance the validity of CoD estimation. This study aimed to estimate cause-specific mortality rates and YLLs in Korea by applying a modified garbage code algorithm. CoD data for 2010-2012 were used to calculate the number of deaths. The garbage code algorithm was then applied to calculate target causes (i.e., valid CoD) and adjusted CoD using the garbage code redistribution. The results showed that garbage code deaths accounted for approximately 25% of all CoD during 2010-2012. In 2012, lung cancer contributed the most to cause-specific deaths according to Statistics Korea. However, when CoD was adjusted using the garbage code redistribution, ischemic heart disease was the most common CoD. Furthermore, before garbage code redistribution, self-harm contributed the most YLLs, followed by lung cancer and liver cancer; after application of the garbage code redistribution, self-harm remained the leading cause of YLLs but was followed by ischemic heart disease and lung cancer. Our results showed that garbage code deaths accounted for a substantial amount of mortality and YLLs. These results may enhance our knowledge of the burden of disease and help prioritize intervention settings by changing the relative importance of causes in the burden of disease.
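The redistribution step can be illustrated as a simple proportional reallocation. The cause names, counts, and weights below are hypothetical; the real GBD algorithm derives its weights per garbage code and conditions them on age, sex, and region:

```python
def redistribute_garbage(deaths, garbage_weights):
    """Proportional-redistribution sketch: deaths assigned to a garbage
    code are moved onto valid target causes according to a weight table."""
    adjusted = {c: float(n) for c, n in deaths.items()
                if c not in garbage_weights}
    for code, n in deaths.items():
        if code in garbage_weights:
            for target, w in garbage_weights[code].items():
                adjusted[target] = adjusted.get(target, 0.0) + n * w
    return adjusted

# Hypothetical counts and weights, for illustration only.
deaths = {"ischemic heart disease": 300, "lung cancer": 400,
          "ill-defined cardiac": 200}
weights = {"ill-defined cardiac": {"ischemic heart disease": 0.8,
                                   "lung cancer": 0.2}}
adjusted = redistribute_garbage(deaths, weights)
```

YLLs then follow by multiplying each adjusted cause-specific death count by the residual life expectancy at the age of death, which is how redistribution can reorder the leading causes of YLLs without changing total mortality.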
HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization
Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.; Park, Haesun
2016-08-22
NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of multiple algorithms for solving the updates to the low-rank factors W and H within the alternating iterations.
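As a minimal serial stand-in for the alternating scheme (the paper solves alternating nonnegative least-squares subproblems in a distributed setting; this sketch uses Lee-Seung multiplicative updates instead, which keep the factors nonnegative by construction):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    """Rank-k factorization V ~ W @ H with nonnegative W, H via
    Lee-Seung multiplicative updates (eps guards against divide-by-zero)."""
    import random
    random.seed(0)                         # deterministic toy initialization
    m, n = len(V), len(V[0])
    W = [[random.random() for _ in range(k)] for _ in range(m)]
    H = [[random.random() for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WtV, WtWH = matmul(transpose(W), V), matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(k)]
        VHt, WHHt = matmul(V, transpose(H)), matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

V = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]   # rank-1, so k=1 fits it exactly
W, H = nmf(V, k=1)
```

The distributed algorithm described above parallelizes exactly this alternating structure: each iteration needs only local matrix blocks plus communicated Gram-matrix terms, which is where the communication-cost analysis comes in.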
Zhang Changjiang; Wang Xiaodong
2008-11-06
An efficient typhoon cloud image restoration algorithm is proposed. After applying the contourlet transform to a typhoon cloud image, noise is reduced in the high-frequency sub-bands: a weighted median filter suppresses the noise in the contourlet domain, and the inverse contourlet transform then yields the de-noised image. To enhance the global contrast of the typhoon cloud image, an incomplete Beta transform (IBT) is used to determine a non-linear gray transform curve for the de-noised image. A genetic algorithm is used to obtain the optimal gray transform curve, with information entropy as the fitness function. Experimental results show that the new algorithm enhances the global contrast of the typhoon cloud image while effectively reducing its noise.
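The fitness function driving the genetic search, the information entropy of the gray-level histogram, is easy to state concretely (the IBT curve parameters themselves are what the genetic algorithm would optimize; that search is not sketched here):

```python
import math

def gray_entropy(image, levels=256):
    """Shannon entropy (bits) of an image's gray-level histogram.

    Higher entropy means gray levels are spread more evenly, which is
    why it serves as a contrast-enhancement fitness function."""
    hist = [0] * levels
    for row in image:
        for g in row:
            hist[g] += 1
    total = float(sum(hist))
    return -sum(c / total * math.log2(c / total) for c in hist if c)

# Four equally frequent gray levels -> exactly 2 bits of entropy.
entropy_bits = gray_entropy([[0, 1], [2, 3]])
```

A candidate transform curve that stretches the histogram toward uniformity raises this value, so the genetic algorithm simply keeps the curves whose transformed images score highest.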
A High-Performance Neural Prosthesis Enabled by Control Algorithm Design
Gilja, Vikash; Nuyujukian, Paul; Chestek, Cindy A.; Cunningham, John P.; Yu, Byron M.; Fan, Joline M.; Churchland, Mark M.; Kaufman, Matthew T.; Kao, Jonathan C.; Ryu, Stephen I.; Shenoy, Krishna V.
2012-01-01
Neural prostheses translate neural activity from the brain into control signals for guiding prosthetic devices, such as computer cursors and robotic limbs, and thus offer disabled patients greater interaction with the world. However, relatively low performance remains a critical barrier to successful clinical translation; current neural prostheses are considerably slower with less accurate control than the native arm. Here we present a new control algorithm, the recalibrated feedback intention-trained Kalman filter (ReFIT-KF), that incorporates assumptions about the nature of closed loop neural prosthetic control. When tested with rhesus monkeys implanted with motor cortical electrode arrays, the ReFIT-KF algorithm outperforms existing neural prostheses in all measured domains and halves acquisition time. This control algorithm permits sustained uninterrupted use for hours and generalizes to more challenging tasks without retraining. Using this algorithm, we demonstrate repeatable high performance for years after implantation across two monkeys, thereby increasing the clinical viability of neural prostheses. PMID:23160043
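The ReFIT-KF builds on standard Kalman filtering machinery. A textbook scalar filter conveys the predict/update structure (the ReFIT modifications, intention-based retraining and high-dimensional neural observation models, are not reproduced here):

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter: state estimate x with variance p,
    process noise q, measurement noise r."""
    def __init__(self, x0, p0, q, r):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def step(self, z):
        # Predict: state persists, uncertainty grows by process noise.
        self.p += self.q
        # Update: blend prediction and measurement by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman(x0=0.0, p0=1.0, q=0.01, r=0.5)
for z in [5.0] * 30:                 # noiseless repeated measurements
    estimate = kf.step(z)            # converges toward 5.0
```

In the prosthesis setting the state is the cursor kinematics, the measurement is a vector of neural firing rates, and the ReFIT innovation is to refit the model using the inferred *intended* kinematics rather than the observed cursor trajectory.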
The high performance parallel algorithm for Unified Gas-Kinetic Scheme
NASA Astrophysics Data System (ADS)
Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu
2016-11-01
A high-performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and the velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for the sum reduction to moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) processor counts. The measured speed-up ratio is near linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.
Comparison between summing-up algorithms to determine areas of small peaks on high baselines
NASA Astrophysics Data System (ADS)
Shi, Quanlin; Zhang, Jiamei; Chang, Yongfu; Qian, Shaojun
2005-12-01
It is found that the minimum detectable activity (MDA) follows the same tendency as the relative standard deviation (RSD), and that a particular application is characterized by the ratio of the peak area to the baseline height. Different applications need different algorithms to reduce the RSD of peak areas or the MDA of potential peaks. A model of Gaussian peaks superposed on linear baselines is established to simulate the multichannel spectrum, and summing-up algorithms such as total peak area (TPA) and Covell and Sterlinski are compared to find the most appropriate algorithm for different applications. The results show that optimal Covell and Sterlinski algorithms yield an MDA or RSD half that of TPA when the areas of small peaks on high baselines are to be determined. The conclusion is confirmed by experiment.
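A noiseless illustration of the TPA estimator, gross counts in a window minus a linear baseline interpolated from the two edge channels (the Covell and Sterlinski variants differ in how they weight the channels, which is what drives the variance comparison above):

```python
import math

def total_peak_area(counts, lo, hi):
    """Total-peak-area (TPA) estimate over channels [lo, hi]: gross
    counts minus a straight-line baseline through the edge channels."""
    gross = sum(counts[lo:hi + 1])
    n = hi - lo + 1
    baseline = n * (counts[lo] + counts[hi]) / 2.0
    return gross - baseline

# Synthetic noiseless spectrum: Gaussian peak on a linear baseline,
# matching the simulation model described in the abstract.
amp, mu, sigma = 100.0, 50.0, 3.0
counts = [20.0 + 0.1 * x + amp * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
          for x in range(100)]
area = total_peak_area(counts, lo=38, hi=62)       # +/- 4 sigma window
true_area = amp * sigma * math.sqrt(2 * math.pi)   # analytic peak area
```

Without noise the estimate lands within a fraction of a percent of the analytic area; with counting noise, every baseline channel contributes variance, which is why the choice of summing-up algorithm matters for small peaks on high baselines.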
Chuan, He; Dishan, Qiu; Jin, Liu
2012-01-01
The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, which takes the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting hierarchy architecture. The solution to the main problem can construct the preliminary matching between tasks and observation resource in order to reduce the search space of the original problem. Furthermore, the solution to the sub-problem can detect the key nodes that each airship needs to fly through in sequence, so as to get the cruising path. Firstly, the task set is divided by using k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the realization approach of the above algorithm and especially makes a detailed introduction on the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show the proposed models and algorithms are effective and feasible. PMID:23365522
High speed multiplier using Nikhilam Sutra algorithm of Vedic mathematics
NASA Astrophysics Data System (ADS)
Pradhan, Manoranjan; Panda, Rutuparna
2014-03-01
This article presents the design of a new high-speed multiplier architecture using the Nikhilam Sutra of Vedic mathematics. The proposed multiplier architecture computes the complement of each large operand from its nearest base to perform the multiplication. The multiplication of two large operands is thus reduced to the multiplication of their complements plus an addition. The approach is most efficient when the magnitudes of both operands are more than half of their maximum values. A carry-save adder in the multiplier architecture speeds up the addition of partial products. The multiplier circuit is synthesised and simulated using Xilinx ISE 10.1 software and implemented on a Spartan 2 FPGA device (XC2S30-5pq208). Output parameters such as propagation delay and device utilisation are calculated from the synthesis results, and the performance in terms of speed and device utilisation is compared with earlier multiplier architectures. The proposed design shows speed improvements over multiplier architectures presented in the literature.
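The base-complement arithmetic behind the Nikhilam Sutra is easy to illustrate in software. This is a sketch of the arithmetic identity only, for intuition; the article's contribution is the hardware architecture, not this code.

```python
def nikhilam_multiply(x, y, base):
    """Nikhilam Sutra: multiply two numbers close to a common base
    (a power of 10) via their complements from that base, using
    x * y == (x + y - base) * base + (base - x) * (base - y).
    """
    cx, cy = base - x, base - y   # complements from the base
    left = x - cy                 # equivalently y - cx, i.e. x + y - base
    right = cx * cy               # small product of the complements
    return left * base + right

print(nikhilam_multiply(98, 97, 100))     # 9506
print(nikhilam_multiply(996, 988, 1000))  # 984048
```

Because the complements of operands near the base are small, hardware only needs a narrow multiplier for the `cx * cy` term plus an adder for the `left * base` part, which is where the speed advantage comes from.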
Highly specific protein-protein interactions, evolution and negative design.
Sear, Richard P
2004-12-01
We consider highly specific protein-protein interactions in proteomes of simple model proteins. We are inspired by the work of Zarrinpar et al (2003 Nature 426 676). They took a binding domain in a signalling pathway in yeast and replaced it with domains of the same class but from different organisms. They found that the probability of a protein binding to a protein from the proteome of a different organism is rather high, around one half. We calculate the probability of a model protein from one proteome binding to the protein of a different proteome. These proteomes are obtained by sampling the space of functional proteomes uniformly. In agreement with Zarrinpar et al we find that the probability of a protein binding a protein from another proteome is rather high, of order one tenth. Our results, together with those of Zarrinpar et al, suggest that designing, say, a peptide to block or reconstitute a single signalling pathway, without affecting any other pathways, requires knowledge of all the partners of the class of binding domains the peptide is designed to mimic. This knowledge is required to use negative design to explicitly design out interactions of the peptide with proteins other than its target. We also found that patches that are required to bind with high specificity evolve more slowly than those that are required only to not bind to any other patch. This is consistent with some analysis of sequence data for proteins engaged in highly specific interactions.
Generalized computer algorithms for enthalpy, entropy and specific heat of superheated vapors
NASA Astrophysics Data System (ADS)
Cowden, Michael W.; Scaringe, Robert P.; Gebre-Amlak, Yonas D.
This paper presents an innovative technique for the development of enthalpy, entropy, and specific heat correlations in the superheated vapor region. The method results in a prediction error of less than 5 percent and requires the storage of 39 constants for each fluid. These correlations are obtained by using the Beattie-Bridgeman equation of state and a least-squares regression for the coefficients involved.
Power spectral density specifications for high-power laser systems
Lawson, J.K.; Aikens, D.A.; English, R.E. Jr.; Wolfe, C.R.
1996-04-22
This paper describes the use of Fourier techniques to characterize the transmitted and reflected wavefront of optical components. Specifically, a power spectral density (PSD) approach is used. High-power solid-state lasers exhibit non-linear amplification of specific spatial frequencies, so specifications that limit the amplitude of these spatial frequencies are necessary in the design of these systems. Further, NIF optical components have square, rectangular, or irregularly shaped apertures with major dimensions up to 800 mm. Components with non-circular apertures cannot be analyzed correctly with Zernike polynomials, since these functions are an orthogonal set for circular apertures only. A more complete and powerful representation of the optical wavefront can be obtained by Fourier analysis in 1 or 2 dimensions; the PSD is obtained from the amplitude of the frequency components present in the Fourier spectrum. The shape of a resultant wavefront or the focal spot of a complex multicomponent laser system can be calculated and optimized using the PSDs of the individual optical components which comprise the system. Surface roughness can be calculated over a range of spatial scale-lengths by integrating the PSD. Finally, since the optical transfer function (OTF) of the instruments used to measure the wavefront degrades at high spatial frequencies, the PSD of an optical component is underestimated; we correct for this error by modifying the PSD function to restore high spatial frequency information. The strengths of PSD analysis are leading us to develop optical specifications incorporating this function for the planned National Ignition Facility (NIF).
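As an illustration of the technique (not NIF's actual metrology code), the following sketch computes a one-sided 1-D PSD of a wavefront profile with an FFT and recovers the RMS roughness by integrating the PSD over spatial frequency. The ripple amplitude and trace length are invented values.

```python
import numpy as np

def psd_1d(wavefront, dx):
    """One-sided power spectral density of a 1-D wavefront profile,
    normalized so that integrating the PSD over spatial frequency
    recovers the mean square of the (piston-removed) profile."""
    n = len(wavefront)
    w = wavefront - wavefront.mean()        # remove the piston (mean) term
    spectrum = np.fft.rfft(w)
    freqs = np.fft.rfftfreq(n, d=dx)        # spatial frequency, cycles per mm
    psd = (np.abs(spectrum) ** 2) * dx / n  # two-sided density
    psd[1:-1] *= 2.0                        # fold negative frequencies in
    return freqs, psd

# Invented example: a 10 nm amplitude sinusoidal ripple at 0.5 cycles/mm
# sampled over a 100 mm trace (all lengths in mm; 10 nm = 1e-5 mm).
n, length = 4096, 100.0
dx = length / n
x = np.arange(n) * dx
w = 10e-6 * np.sin(2.0 * np.pi * 0.5 * x)

freqs, psd = psd_1d(w, dx)
rms_mm = np.sqrt(np.sum(psd) * (freqs[1] - freqs[0]))
print(f"RMS roughness ~ {rms_mm * 1e6:.2f} nm")  # ~7.07 nm (= 10 nm / sqrt(2))
```

Restricting the sum to a band of `freqs` gives the roughness over a chosen range of spatial scale-lengths, which is the integration step the abstract mentions.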
Rapid Generation of Highly Specific Aptamers via Micromagnetic Selection
Qian, Jiangrong; Lou, Xinhui; Zhang, Yanting; Xiao, Yi; Soh, H. Tom
2009-01-01
Aptamers are nucleic acid-based reagents that bind to target molecules with high affinity and specificity. However, methods for generating aptamers from random combinatorial libraries (e.g., SELEX) are often labor-intensive and time-consuming. Recent studies suggest that microfluidic SELEX (M-SELEX) technology can accelerate aptamer isolation by enabling highly stringent selection conditions through the use of very small amounts of target molecules. We present here an alternative M-SELEX method, which employs a disposable microfluidic chip to rapidly generate aptamers with high affinity and specificity. The Micro-Magnetic Separation (MMS) chip integrates microfabricated ferromagnetic structures to reproducibly generate large magnetic field gradients within its microchannel that efficiently trap magnetic bead-bound aptamers. Operation of the MMS device is facile, robust and demonstrates high recovery of the beads (99.5%), such that picomolar amounts of target molecule can be used. Importantly, the device demonstrates exceptional separation efficiency in removing weakly-bound and unbound ssDNA to rapidly enrich target-specific aptamers. As a model, we demonstrate here the generation of DNA aptamers against streptavidin in three rounds of positive selection. We further enhanced the specificity of the selected aptamers via a round of negative selection in the same device against bovine serum albumin (BSA). The resulting aptamers displayed dissociation constants ranging from 25 to 65 nM for streptavidin but negligible affinity for BSA. Since a wide spectrum of molecular targets can be readily conjugated on magnetic beads, MMS-based SELEX should provide a general platform for rapid generation of specific aptamers. PMID:19480397
An end-to-end workflow for engineering of biological networks from high-level specifications.
Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun
2012-08-17
We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.
Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia
2008-07-01
This study compared Autism Diagnostic Observation Schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained by applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module 1; new algorithm scores on modules 2 and 3 remained consistent with the original algorithm scores. The Mann-Whitney U test was applied to compare revised algorithm scores and clinical levels of social impairment to determine whether significant differences were evident. Results of the Mann-Whitney U analyses were inconsistent and demonstrated less specificity for children with milder levels of social impairment. The revised algorithm demonstrated accuracy for the more severely affected autistic group.
High specificity in plant leaf metabolic responses to arbuscular mycorrhiza.
Schweiger, Rabea; Baier, Markus C; Persicke, Marcus; Müller, Caroline
2014-05-22
The chemical composition of plants (the phytometabolome) is dynamic and modified by environmental factors. Understanding its modulation makes it possible to improve crop quality and to decode mechanisms underlying plant-pest interactions. Many studies that investigate metabolic responses to the environment focus on single model species and/or a few target metabolites. However, comparative studies using environmental metabolomics are needed to evaluate commonalities of chemical responses to certain challenges. We assessed the specificity of foliar metabolic responses of five plant species to the widespread, ancient symbiosis with a generalist arbuscular mycorrhizal fungus. Here we show that plant species share a large 'core metabolome', but the phytometabolomes are nevertheless modulated in a highly species/taxon-specific manner. Such low conservation of responses across species highlights the importance of considering plant metabolic prerequisites and the long history of specific plant-fungus coevolution. Thus, the transferability of findings regarding phytometabolome modulation by an identical AM symbiont is severely limited, even between closely related species.
A class-based scheduling algorithm with high throughput for optical burst switching networks
NASA Astrophysics Data System (ADS)
Wu, Guiling; Chen, Jianping; Li, Xinwan; Wang, Hui
2005-02-01
Optical burst switching (OBS) is an efficient and feasible solution for building terabit IP-over-WDM optical networks: it employs relatively mature photonic and opto-electronic devices and combines the high bandwidth of optical transmission/switching with the high flexibility of electronic control/processing. The channel scheduling algorithm is one of the key issues in OBS networks. In this paper, a class-based scheduling algorithm is presented with emphasis on fair utilization of bandwidth among different services. A maximum number of reserved channels and a maximum number of channel search attempts are introduced for each service, based on its class of service, load, and available bandwidth resources. The performance of the scheduling algorithm is studied in detail by simulation. The results show that the algorithm can allocate bandwidth more fairly among different services and that the total burst loss ratio under high throughput can be lowered at an acceptable cost in the delay performance of services with lower delay requirements. Problems related to the burst loss ratio and the delay requirements of different services can thus be addressed simultaneously.
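The per-class caps can be pictured with a toy scheduler. This is an assumed structure for illustration only; the paper's actual channel-search order, void filling, and signaling details are not reproduced here.

```python
class ClassBasedScheduler:
    """Toy class-based burst scheduler: each service class has a cap on the
    number of channels it may hold at once (max_reserved) and on how many
    channels it may examine per request (max_search)."""

    def __init__(self, num_channels, max_reserved, max_search):
        self.free_at = [0.0] * num_channels  # time each channel becomes free
        self.holder = [None] * num_channels  # class occupying each channel
        self.max_reserved = max_reserved     # dict: class -> reservation cap
        self.max_search = max_search         # dict: class -> search-depth cap

    def schedule(self, service_class, arrival, duration):
        """Try to reserve a channel for a burst; return its index, or None (loss)."""
        held = sum(1 for ch in range(len(self.free_at))
                   if self.holder[ch] == service_class and self.free_at[ch] > arrival)
        if held >= self.max_reserved[service_class]:
            return None                      # class already at its reservation cap
        for searched, ch in enumerate(range(len(self.free_at))):
            if searched >= self.max_search[service_class]:
                break                        # search budget exhausted for this class
            if self.free_at[ch] <= arrival:
                self.free_at[ch] = arrival + duration
                self.holder[ch] = service_class
                return ch
        return None

sched = ClassBasedScheduler(4, max_reserved={'gold': 3, 'bronze': 1},
                            max_search={'gold': 4, 'bronze': 2})
print(sched.schedule('bronze', arrival=0.0, duration=10.0))  # 0 (first free channel)
print(sched.schedule('bronze', arrival=1.0, duration=5.0))   # None: bronze at its cap
print(sched.schedule('gold', arrival=1.0, duration=5.0))     # 1
```

Lower-priority classes get smaller caps, so under high load they are blocked first, which is the fairness mechanism the abstract describes.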
Visual saliency-based fast intracoding algorithm for high efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin
2017-01-01
Intraprediction has been significantly improved in high efficiency video coding (HEVC) over H.264/AVC, with a quad-tree-based coding unit (CU) structure spanning sizes from 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode-pruning algorithm is presented to selectively check the potential modes and effectively reduce the computational complexity. Experimental results show that the proposed fast method reduces the encoding time of the current HM reference software to about 57% of the original, with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm incurs reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
Imran, Muhammad; Zafar, Nazir Ahmad
2012-01-01
Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments in addition to leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR to a corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
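The basic loop the abstract refers to (selection, crossover, mutation) fits in a few lines. This is a generic textbook sketch, not the authors' software tool; the OneMax fitness function and all parameter values are arbitrary illustrations.

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits)   # OneMax: count the 1 bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fitness)  # tournament selection:
            p2 = max(rng.sample(pop, 2), key=fitness)  # fitter of two survives
            if rng.random() < crossover_rate:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = list(p1)
            child = [b ^ (rng.random() < mutation_rate) for b in child]  # bit flips
            nxt.append(child)
        pop = nxt
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = genetic_algorithm()
print(score)  # deterministic for a fixed seed; typically at or near the optimum of 20
```

The "highly parallel" aspect the abstract mentions comes from the population: every individual's fitness can be evaluated independently, so the inner loop parallelizes naturally.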
Accelerator Production and Separations for High Specific Activity Rhenium-186
Jurisson, Silvia S.; Wilbur, D. Scott
2016-04-01
Tungsten and osmium targets were evaluated for the production of high specific activity rhenium-186. Rhenium-186 has potential applications in radiotherapy for the treatment of a variety of diseases, including targeting with monoclonal antibodies and peptides. Methods were evaluated using tungsten metal, tungsten dioxide, tungsten disulfide, and osmium disulfide. Procedures for separating the rhenium-186 produced and for recycling the enriched tungsten-186 and osmium-189 targets were developed.
Method of preparing high specific activity platinum-195m
Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.
2004-06-15
A method of preparing high-specific-activity 195mPt includes the steps of: exposing 193Ir to a flux of neutrons sufficient to convert a portion of the 193Ir to 195mPt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce 195mPt.
Method for preparing high specific activity 177Lu
Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.
2004-04-06
A method of separating lutetium from a solution containing Lu and Yb, particularly reactor-produced 177Lu and 177Yb, includes the steps of: providing a chromatographic separation apparatus containing LN resin; loading the apparatus with a solution containing Lu and Yb; and eluting the apparatus to chromatographically separate the Lu and the Yb in order to produce high-specific-activity 177Lu.
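For context on why carrier-free separation matters, a back-of-the-envelope sketch: the theoretical (carrier-free) specific activity of a pure isotope follows from A/m = N_A ln2 / (T_half M). The 177Lu half-life and molar mass below are assumed textbook values, not figures from the patent.

```python
import math

AVOGADRO = 6.02214076e23   # atoms per mole
CURIE = 3.7e10             # Bq per Ci

def carrier_free_specific_activity(half_life_s, molar_mass_g):
    """Specific activity, in Bq per gram, of a chemically pure isotope:
    A/m = lambda * N_A / M, with lambda = ln(2) / T_half."""
    decay_constant = math.log(2) / half_life_s
    return decay_constant * AVOGADRO / molar_mass_g

# Assumed values: T_half(177Lu) ~ 6.65 days, M ~ 177 g/mol
a_bq_per_g = carrier_free_specific_activity(6.65 * 86400, 177.0)
print(f"{a_bq_per_g / CURIE:.3g} Ci/g")  # on the order of 1e5 Ci/g
```

Any stable Yb or Lu carrier left in the product dilutes this figure, which is why the chromatographic separation step is the crux of the method.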
Solar-powered rocket engine optimization for high specific impulse
NASA Astrophysics Data System (ADS)
Pande, J. Bradley
1993-11-01
Hercules Aerospace is currently developing a solar-powered rocket engine (SPRE) design optimized for high specific impulse (Isp). The SPRE features a low loss geometry in its light-gathering cavity, which includes an integral secondary concentrator. The simple one-piece heat exchanger is made from refractory metal and/or ceramic open-celled foam. The foam's high surface-area-to-volume ratio will efficiently transfer the thermal energy to the hydrogen propellant. The single-pass flow of propellant through the heat exchanger further boosts thermal efficiency by regeneratively cooling surfaces near the entrance of the optical cavity. These surfaces would otherwise reradiate a significant portion of the captured solar energy back out of the solar entrance. Such design elements promote a high overall thermal efficiency and, hence, a high operating Isp.
A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas
Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q
2007-04-18
A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.
A surgeon specific automatic path planning algorithm for deep brain stimulation
NASA Astrophysics Data System (ADS)
Liu, Yuan; Dawant, Benoit M.; Pallavaram, Srivatsan; Neimat, Joseph S.; Konrad, Peter E.; D'Haese, Pierre-Francois; Datteri, Ryan D.; Landman, Bennett A.; Noble, Jack H.
2012-02-01
In deep brain stimulation surgeries, stimulating electrodes are placed at specific targets in the deep brain to treat neurological disorders. Reaching these targets safely requires avoiding critical structures in the brain. Meticulous planning is required to find a safe path from the cortical surface to the intended target. Choosing a trajectory automatically is difficult because there is little consensus among neurosurgeons on what is optimal. Our goals are to design a path planning system that is able to learn the preferences of individual surgeons and, eventually, to standardize the surgical approach using this learned information. In this work, we take the first step towards these goals, which is to develop a trajectory planning approach that is able to effectively mimic individual surgeons and is designed such that parameters, which potentially can be automatically learned, are used to describe an individual surgeon's preferences. To validate the approach, two neurosurgeons were asked to choose between their manual and a computed trajectory, blinded to their identity. The results of this experiment showed that the neurosurgeons preferred the computed trajectory over their own in 10 out of 40 cases. The computed trajectory was judged to be equivalent to the manual one or otherwise acceptable in 27 of the remaining cases. These results demonstrate the potential clinical utility of computer-assisted path planning.
Schedl, Markus
2017-01-01
Recently, the LFM-1b dataset has been proposed to foster research and evaluation in music retrieval and music recommender systems, Schedl (Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR). New York, 2016). It contains more than one billion music listening events created by more than 120,000 users of Last.fm. Each listening event is characterized by artist, album, and track name, and further includes a timestamp. Basic demographic information and a selection of more elaborate listener-specific descriptors are included as well, for anonymized users. In this article, we reveal information about LFM-1b's acquisition and content and we compare it to existing datasets. We furthermore provide an extensive statistical analysis of the dataset, including basic properties of the item sets, demographic coverage, distribution of listening events (e.g., over artists and users), and aspects related to music preference and consumption behavior (e.g., temporal features and mainstreaminess of listeners). Exploiting country information of users and genre tags of artists, we also create taste profiles for populations and determine similar and dissimilar countries in terms of their populations' music preferences. Finally, we illustrate the dataset's usage in a simple artist recommendation task, whose results are intended to serve as baseline against which more elaborate techniques can be assessed.
Dara, Antoine; Drábek, Elliott F; Travassos, Mark A; Moser, Kara A; Delcher, Arthur L; Su, Qi; Hostelley, Timothy; Coulibaly, Drissa; Daou, Modibo; Dembele, Ahmadou; Diarra, Issa; Kone, Abdoulaye K; Kouriba, Bourema; Laurens, Matthew B; Niangaly, Amadou; Traore, Karim; Tolo, Youssouf; Fraser, Claire M; Thera, Mahamadou A; Djimde, Abdoulaye A; Doumbo, Ogobara K; Plowe, Christopher V; Silva, Joana C
2017-03-28
Encoded by the var gene family, highly variable Plasmodium falciparum erythrocyte membrane protein-1 (PfEMP1) proteins mediate tissue-specific cytoadherence of infected erythrocytes, resulting in immune evasion and severe malaria disease. Sequencing and assembling the 40-60 var gene complement for individual infections has been notoriously difficult, impeding molecular epidemiological studies and the assessment of particular var elements as subunit vaccine candidates. We developed and validated a novel algorithm, Exon-Targeted Hybrid Assembly (ETHA), to perform targeted assembly of var gene sequences, based on a combination of Pacific Biosciences and Illumina data. Using ETHA, we characterized the repertoire of var genes in 12 samples from uncomplicated malaria infections in children from a single Malian village and showed them to be as genetically diverse as vars from isolates from around the globe. The gene var2csa, a member of the var family associated with placental malaria pathogenesis, was present in each genome, as were vars previously associated with severe malaria. ETHA, a tool to discover novel var sequences from clinical samples, will aid the understanding of malaria pathogenesis and inform the design of malaria vaccines based on PfEMP1. ETHA is available at: https://sourceforge.net/projects/etha/ .
EPCA-2: a highly specific serum marker for prostate cancer.
Leman, Eddy S; Cannon, Grant W; Trock, Bruce J; Sokoll, Lori J; Chan, Daniel W; Mangold, Leslie; Partin, Alan W; Getzenberg, Robert H
2007-04-01
To describe the initial assessment of early prostate cancer antigen (EPCA)-2 as a serum marker for the detection of prostate cancer and to examine its sensitivity and specificity. Serum samples were obtained from 385 men: those with prostate-specific antigen (PSA) levels less than 2.5 ng/mL, PSA levels of 2.5 ng/mL or greater with negative biopsy findings, benign prostatic hyperplasia, organ-confined prostate cancer, non-organ-confined disease, and prostate cancer with PSA levels less than 2.5 ng/mL. In addition, a diverse group of controls was assessed with an enzyme-linked immunosorbent assay to detect an epitope of the EPCA-2 protein, EPCA-2.22. Using a cutoff of 30 ng/mL, the EPCA-2.22 assay had a 92% specificity (95% confidence interval 85% to 96%) for healthy men and men with benign prostatic hyperplasia and 94% sensitivity (95% confidence interval [CI] 93% to 99%) for overall prostate cancer. The specificity for PSA in these selected groups of patients was 65% (95% CI 55% to 75%). Additionally, EPCA-2.22 was highly accurate in differentiating between localized and extracapsular disease (area under the curve 0.89, 95% CI 0.82 to 0.97, P <0.0001) in contrast to PSA (area under the curve 0.62, 95% CI 0.50 to 0.75, P = 0.05). The results of our study have shown that EPCA-2 is a novel biomarker associated with prostate cancer that has high sensitivity and specificity and accurately differentiates between men with organ-confined and non-organ-confined disease.
High Order Accurate Algorithms for Shocks, Rapidly Changing Solutions and Multiscale Problems
2014-11-13
High order accurate numerical methods for solving problems with shocks and other complicated solution structures... New algorithm aspects include subcell resolution for non-conservative systems, high order well-balanced schemes, stable Lagrangian schemes, and schemes with a positivity-preserving property for such high speed flows. As an application to traffic flow modeling and simulations, we study a predictive continuum...
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background: Integrating multiple data sources is indispensable for improving disease gene identification, not only because disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time can still be improved. Results: In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions: The proposed algorithm not only generates predictions with high AUC scores but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors, and the average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and each leave-one-out experiment takes only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
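The scoring step can be sketched in miniature. This is illustrative only: the feature names, weights, and prior below are invented, not the paper's fitted model. Each candidate gene's network-derived features are combined by a binary logistic model into a posterior probability of disease association.

```python
import math

def posterior_probability(features, weights, bias, prior_log_odds=0.0):
    """P(gene is associated with the disease | features) under a logistic
    model, optionally shifted by a prior expressed as log-odds."""
    z = bias + prior_log_odds + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical candidate with three network features (e.g., proximity to known
# disease genes in a PPI network, a co-expression network, and a pathway network).
p = posterior_probability([0.8, 0.3, 0.6], weights=[2.0, 1.0, 1.5], bias=-2.0)
print(round(p, 3))  # 0.69
```

Ranking all candidate genes by this posterior, then checking where held-out known disease genes land, is what the leave-one-out AUC figures in the abstract measure.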
A new algorithm for generating highly accurate benchmark solutions to transport test problems
Azmy, Y.Y.
1997-06-01
We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems, without the need to develop sophisticated solution techniques (e.g., acceleration) before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.
NASA Astrophysics Data System (ADS)
Hebbar, Ullhas; Paul, Anup; Banerjee, Rupak
2016-11-01
Image-based modeling is finding increasing relevance in assisting the diagnosis of Pulmonary Valve-Vasculature Dysfunction (PVD) in congenital heart disease patients. This research presents compliant artery-blood interaction in a patient-specific Pulmonary Artery (PA) model, an improvement over our previous numerical studies, which assumed rigid-walled arteries. The impedance of the arteries and the energy transfer from the Right Ventricle (RV) to the PA are governed by compliance, which in turn is influenced by the level of pre-stress in the arteries. To evaluate the pre-stress, an inverse algorithm was developed using an in-house script written in MATLAB and Python and implemented using the Finite Element Method (FEM). This analysis used a patient-specific material model developed by our group, in conjunction with measured pressure (invasive) and velocity (non-invasive) values. The analysis was performed on an FEM solver, and preliminary results indicated that the Main PA (MPA) exhibited higher compliance as well as increased hysteresis over the cardiac cycle when compared with the Left PA (LPA). The computed compliance values for the MPA and LPA were 14% and 34% less than the corresponding measured values, respectively. Further, the computed pressure drop and flow waveforms were in close agreement with the measured values. In conclusion, compliant artery-blood interaction models of patient-specific geometries can play an important role in hemodynamics-based diagnosis of PVD.
Pearson, John V.; Homer, Nils; Lowey, James; Suh, Edward; Craig, David W.
2009-01-01
As a first step in analyzing high-throughput data in genome-wide studies, several algorithms are available to identify and prioritize candidate lists for downstream fine-mapping. The prioritized candidates could be differentially expressed genes, aberrations in comparative genomic hybridization studies, or single nucleotide polymorphisms (SNPs) in association studies. Different analysis algorithms are subject to various experimental artifacts and analytical features that lead to different candidate lists. However, little research has been carried out to theoretically quantify the consensus between different candidate lists and to compare the study-specific accuracy of the analytical methods based on a known reference candidate list. Within the context of genome-wide studies, we propose a generic mathematical framework to statistically compare ranked lists of candidates from different algorithms with each other or, if available, with a reference candidate list. To cope with the growing need for intuitive visualization of high-throughput data in genome-wide studies, we describe a complementary customizable visualization tool. As a case study, we demonstrate application of our framework to the comparison and visualization of candidate lists generated in a DNA-pooling based genome-wide association study of CEPH data in the HapMap project, where prior knowledge from individual genotyping can be used to generate a true reference candidate list. The results provide a theoretical basis to compare the accuracy of various methods and to identify redundant methods, thus providing guidance for selecting the most suitable analysis method in genome-wide studies. PMID:19361328
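The kind of ranked-list comparison the framework formalizes can be illustrated with a much simpler statistic: the top-k overlap between two candidate lists at every depth. This is a minimal sketch, not the paper's framework, and the SNP identifiers below are invented:

```python
def overlap_at_depth(list_a, list_b, depth):
    """Fraction of shared candidates in the top-`depth` of two rankings."""
    return len(set(list_a[:depth]) & set(list_b[:depth])) / depth

def overlap_curve(list_a, list_b, max_depth):
    """Top-k overlap at every depth 1..max_depth, a simple concordance curve."""
    return [overlap_at_depth(list_a, list_b, d) for d in range(1, max_depth + 1)]

# Invented SNP rankings from two hypothetical analysis algorithms.
alg1 = ["rs11", "rs42", "rs07", "rs99", "rs23"]
alg2 = ["rs42", "rs11", "rs99", "rs88", "rs23"]
curve = overlap_curve(alg1, alg2, 5)
```

A curve that rises quickly toward 1.0 indicates two algorithms are largely redundant; a flat curve flags methods that disagree and may capture different artifacts.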
Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori
2014-01-01
Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…
MS Amanda, a Universal Identification Algorithm Optimized for High Accuracy Tandem Mass Spectra
2014-01-01
Today’s highly accurate spectra provided by modern tandem mass spectrometers offer considerable advantages for the analysis of proteomic samples of increased complexity. Among other factors, the quantity of reliably identified peptides is considerably influenced by the peptide identification algorithm. While most widely used search engines were developed when high-resolution mass spectrometry data were not readily available for fragment ion masses, we have designed a scoring algorithm particularly suitable for high mass accuracy. Our algorithm, MS Amanda, is generally applicable to HCD, ETD, and CID fragmentation type data. The algorithm confidently explains more spectra at the same false discovery rate than Mascot or SEQUEST on examined high mass accuracy data sets, with excellent overlap and identical peptide sequence identification for most spectra also explained by Mascot or SEQUEST. MS Amanda, available at http://ms.imp.ac.at/?goto=msamanda, is provided free of charge both as standalone version for integration into custom workflows and as a plugin for the Proteome Discoverer platform. PMID:24909410
A GPU based high-definition ultrasound digital scan conversion algorithm
NASA Astrophysics Data System (ADS)
Zhao, Mingchang; Mo, Shanjue
2010-02-01
Digital scan conversion is the most computationally intensive part of B-mode ultrasound imaging. Traditionally, in order to meet the requirements of real-time imaging, digital scan conversion algorithms often traded off image quality for speed, for example by using simple image interpolation or look-up tables to carry out the polar coordinate transform and logarithmic compression. This paper presents a GPU-based high-definition real-time ultrasound digital scan conversion implementation. By rendering appropriate proxy geometry, we implement a high-precision digital scan conversion pipeline, including the polar coordinate transform, bi-cubic image interpolation, high dynamic range tone reduction, line-average and frame-persistence FIR filtering, and 2D post-filtering, fully in the fragment shader of the GPU at real-time speed. The proposed method shows the possibility of upgrading existing FPGA- or ASIC-based digital scan conversion implementations to a low-cost GPU-based high-definition implementation.
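The core of any scan converter is resampling the polar acquisition grid onto Cartesian pixels. A minimal CPU sketch of that step follows, using bilinear rather than the paper's bi-cubic interpolation; the function name and the sector geometry are invented for illustration:

```python
import math

def scan_convert(polar, r0, dr, theta0, dtheta, x, y):
    """Sample a polar image polar[ri][ti] at Cartesian point (x, y) using
    bilinear interpolation in (r, theta); returns None outside the sector."""
    r = math.hypot(x, y)
    theta = math.atan2(x, y)              # angle measured from the probe axis
    ri = (r - r0) / dr
    ti = (theta - theta0) / dtheta
    i0, j0 = int(math.floor(ri)), int(math.floor(ti))
    if not (0 <= i0 < len(polar) - 1 and 0 <= j0 < len(polar[0]) - 1):
        return None
    fr, ft = ri - i0, ti - j0
    # Blend the four surrounding polar samples.
    return ((1 - fr) * (1 - ft) * polar[i0][j0]
            + fr * (1 - ft) * polar[i0 + 1][j0]
            + (1 - fr) * ft * polar[i0][j0 + 1]
            + fr * ft * polar[i0 + 1][j0 + 1])

# Rings of constant value: the ring index doubles as the expected sample value.
polar = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]
sample = scan_convert(polar, 0.0, 1.0, -0.5, 0.5, 0.0, 1.5)
```

On a GPU this per-pixel loop becomes the fragment shader body, which is exactly what makes the higher-order interpolation affordable in real time.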
NASA Astrophysics Data System (ADS)
Lee, Sangkyu
Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered among the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing borders against nuclear threats. In general, radiation portal monitors enable detection of gamma- and neutron-emitting radioisotopes. Passive and active interrogation techniques, existing and/or under development, are all aimed at increasing accuracy and reliability, and at shortening the interrogation time as well as reducing the cost of the equipment. Equally important efforts are aimed at advancing algorithms to process the imaging data in an efficient manner, providing reliable "readings" of the interiors of examined volumes of various sizes, ranging from cargos to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis of a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm consists of gamma spectroscopy and cosmic muon tomography, and the other is based on gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both heavy-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, and shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. In summary, the three detection
High efficiency cell-specific targeting of cytokine activity
NASA Astrophysics Data System (ADS)
Garcin, Geneviève; Paul, Franciane; Staufenbiel, Markus; Bordat, Yann; van der Heyden, José; Wilmes, Stephan; Cartron, Guillaume; Apparailly, Florence; de Koker, Stefaan; Piehler, Jacob; Tavernier, Jan; Uzé, Gilles
2014-01-01
Systemic toxicity currently prevents exploiting the huge potential of many cytokines for medical applications. Here we present a novel strategy to engineer immunocytokines with very high targeting efficacies. The method lies in the use of mutants of toxic cytokines that markedly reduce their receptor-binding affinities, and that are thus rendered essentially inactive. Upon fusion to nanobodies specifically binding to marker proteins, activity of these cytokines is selectively restored for cell populations expressing this marker. This ‘activity-by-targeting’ concept was validated for type I interferons and leptin. In the case of interferon, activity can be directed to target cells in vitro and to selected cell populations in mice, with up to 1,000-fold increased specific activity. This targeting strategy holds promise to revitalize the clinical potential of many cytokines.
Cellulose antibody films for highly specific evanescent wave immunosensors
NASA Astrophysics Data System (ADS)
Hartmann, Andreas; Bock, Daniel; Jaworek, Thomas; Kaul, Sepp; Schulze, Matthais; Tebbe, H.; Wegner, Gerhard; Seeger, Stefan
1996-01-01
For the production of recognition elements for evanescent wave immunosensors, optical waveguides have to be coated with ultrathin, stable antibody films. In the present work, non-amphiphilic alkylated cellulose and copolyglutamate films are tested as monolayer matrices for antibody immobilization using the Langmuir-Blodgett technique. These films are transferred onto optical waveguides and serve as excellent matrices for the immobilization of antibodies in high density and specificity. In addition to the multi-step immobilization of immunoglobulin G (IgG) on photochemically crosslinked and oxidized polymer films, the direct one-step transfer of mixed antibody-polymer films is performed. Both planar waveguides and optical fibers are suitable substrates for the immobilization. The activity and specificity of immobilized antibodies is controlled by the enzyme-linked immunosorbent assay (ELISA) technique. As a result, reduced non-specific interactions between antigens and the substrate surface are observed if cinnamoylbutyether-cellulose is used as the film matrix for antibody immobilization. Using evanescent wave sensor (EWS) technology, immunosensor assays are performed in order to determine both the non-specific adsorption of different coated polymethylmethacrylate (PMMA) fibers and the long-term stability of the antibody films. Specificities of one-step transferred IgG-cellulose films are drastically enhanced compared to IgG-copolyglutamate films. Cellulose-IgG films are used in enzymatic sandwich assays using mucin as a clinically relevant antigen that is recognized by the antibodies BM2 and BM7. A mucin calibration measurement is recorded. So far, the observed detection limit for mucin is about 8 ng/ml.
Automated intensity descent algorithm for interpretation of complex high-resolution mass spectra.
Chen, Li; Sze, Siu Kwan; Yang, He
2006-07-15
This paper describes a new automated intensity descent algorithm for analysis of complex high-resolution mass spectra. The algorithm has been successfully applied to interpret Fourier transform mass spectra of proteins; however, it should be generally applicable to complex high-resolution mass spectra of large molecules recorded by other instruments. The algorithm locates all possible isotopic clusters by a novel peak selection method and a robust cluster subtraction technique according to the order of descending peak intensity after global noise level estimation and baseline correction. The peak selection method speeds up charge state determination and isotopic cluster identification. A Lorentzian-based peak subtraction technique resolves overlapping clusters in high peak density regions. A noise flag value is introduced to minimize false positive isotopic clusters. Moreover, correlation coefficients and matching errors between the identified isotopic multiplets and the averagine isotopic abundance distribution are the criteria for real isotopic clusters. The best fitted averagine isotopic abundance distribution of each isotopic cluster determines the charge state and the monoisotopic mass. Three high-resolution mass spectra were interpreted by the program. The results show that the algorithm is fast in computational speed, robust in identification of overlapping clusters, and efficient in minimization of false positives. In approximately 2 min, the program identified 611 isotopic clusters for a plasma ECD spectrum of carbonic anhydrase. Among them, 50 new identified isotopic clusters, which were missed previously by other methods, have been discovered in the high peak density regions or as weak clusters by this algorithm. As a result, 18 additional new bond cleavages have been identified from the 50 new clusters of carbonic anhydrase.
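One step that interpreters of this kind share is inferring the charge state of an isotopic cluster from the m/z spacing of its peaks; a simplified sketch follows (the paper's averagine fitting is more elaborate, and the numeric peak values below are invented):

```python
ISOTOPE_SPACING = 1.00235  # average mass gap between isotopic peaks (Da)

def charge_from_spacing(mz_peaks, max_charge=30):
    """A z-charged isotopic cluster has peaks ~1.00235/z apart in m/z,
    so the average observed spacing yields the charge state directly."""
    spacing = (mz_peaks[-1] - mz_peaks[0]) / (len(mz_peaks) - 1)
    return max(1, min(round(ISOTOPE_SPACING / spacing), max_charge))

def monoisotopic_mass(mz_mono, z, proton=1.007276):
    """Neutral monoisotopic mass from the monoisotopic peak's m/z."""
    return z * (mz_mono - proton)

# Invented 3+ cluster starting at m/z 500.
peaks = [500.0 + k * ISOTOPE_SPACING / 3 for k in range(3)]
z = charge_from_spacing(peaks)
mass = monoisotopic_mass(peaks[0], z)
```

The full algorithm additionally scores the candidate cluster against the averagine abundance distribution before accepting z and the monoisotopic mass.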
A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm
Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah
2015-01-01
A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol versus electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the achieved efficiency of the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
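The adaptive GA can be caricatured with a toy version: the fuzzy-logic objective is replaced here by an invented quadratic "efficiency" function of the electric/petrol split, and the adaptation rule (raise the mutation rate when progress stalls) is a generic stand-in, not the authors' scheme:

```python
import random

def evolve(fitness, bounds, pop_size=30, generations=60, seed=1):
    """Toy real-coded genetic algorithm: tournament selection, blend
    crossover, and a mutation rate that grows while progress stalls."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best, mut = max(pop, key=fitness), 0.1
    for _ in range(generations):
        new = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=fitness)      # tournament pick
            b = max(rng.sample(pop, 3), key=fitness)
            child = 0.5 * (a + b)                         # blend crossover
            if rng.random() < mut:
                child += rng.gauss(0.0, 0.1 * (hi - lo))  # mutation
            new.append(min(hi, max(lo, child)))
        pop = new
        gen_best = max(pop, key=fitness)
        # Adapt: raise the mutation rate when the best fitness stalls.
        mut = min(0.5, mut * 1.1) if fitness(gen_best) <= fitness(best) else 0.1
        best = max(best, gen_best, key=fitness)
    return best

# Invented objective: efficiency peaks at a 40% electric / 60% petrol split.
split = evolve(lambda x: -(x - 0.4) ** 2, bounds=(0.0, 1.0))
```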
Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows
NASA Technical Reports Server (NTRS)
Baker, A. J.; Freels, J. D.
1989-01-01
A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
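The linearization trick can be sketched in isolation: for a Gaussian LRF the logarithm of the photon counts is exactly quadratic in pixel position, so the peak centre follows from a linear least-squares fit with no nonlinear iteration. This is an illustrative reconstruction, not the published algorithm, and it ignores the photon-noise weighting the paper addresses:

```python
import math

def solve3(m, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for k in range(col, 4):
                a[r][k] -= f * a[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (a[r][3] - sum(a[r][k] * x[k] for k in range(r + 1, 3))) / a[r][r]
    return x

def gaussian_peak_position(pixels, counts):
    """For a Gaussian light-response function, ln(counts) is quadratic in
    pixel, so fitting ln n = a*x^2 + b*x + c by linear least squares puts
    the peak centre at x0 = -b / (2a)."""
    xs = [x for x, n in zip(pixels, counts) if n > 0]
    ys = [math.log(n) for n in counts if n > 0]
    p = lambda k: sum(x ** k for x in xs)
    m = [[p(4), p(3), p(2)],
         [p(3), p(2), p(1)],
         [p(2), p(1), float(len(xs))]]
    v = [sum(y * x ** 2 for x, y in zip(xs, ys)),
         sum(y * x for x, y in zip(xs, ys)),
         sum(ys)]
    a, b, _ = solve3(m, v)
    return -b / (2 * a)

# Noiseless Gaussian profile centred at pixel 4.3.
pixels = list(range(10))
counts = [1000.0 * math.exp(-(x - 4.3) ** 2 / (2 * 1.2 ** 2)) for x in pixels]
centre = gaussian_peak_position(pixels, counts)
```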
An infrared small target detection algorithm based on high-speed local contrast method
NASA Astrophysics Data System (ADS)
Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao
2016-05-01
Small-target detection in infrared imagery with a complex background is always an important task in remote sensing fields. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed. However, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the Human Visual System (HVS), is proposed to balance these detection capabilities. The first layer uses a high-speed simplified local contrast method to select significant information, and the second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate, and speed simultaneously.
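The first-layer idea, a local contrast map that boosts point-like targets, can be sketched with a single-pixel-scale variant; the paper's actual contrast definition and thresholds are not reproduced here:

```python
def local_contrast(img):
    """Single-scale local contrast map: score each interior pixel as
    centre**2 / brightest 8-neighbour, which boosts point-like targets
    while suppressing flat background and extended edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            brightest = max(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                            if (dy, dx) != (0, 0))
            out[y][x] = img[y][x] ** 2 / max(brightest, 1e-9)
    return out

# Flat background of 10 with one bright point target at (2, 2).
img = [[10.0] * 5 for _ in range(5)]
img[2][2] = 50.0
cmap = local_contrast(img)
```

Thresholding the contrast map gives the candidate pixels that a second-layer classifier would then screen against clutter.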
Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra
NASA Technical Reports Server (NTRS)
Spanos, P. D.; Mushung, L. J.
1990-01-01
High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum. This concept yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices. Each slice is associated with a rather distinguishable phase of the lift-off event, where stationarity can be expected. The presented results are rather preliminary in nature; the aim is to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.
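For the simplest member of that model family, an AR(1) process, the spectral model reduces to a single coefficient estimated from the Yule-Walker relation; a self-contained sketch on synthetic data (not the Shuttle data):

```python
import random

def ar1_coefficient(x):
    """Yule-Walker estimate for AR(1): phi = r1 / r0, the lag-1
    autocovariance divided by the variance."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    r0 = sum(v * v for v in d) / n
    r1 = sum(d[i] * d[i + 1] for i in range(n - 1)) / n
    return r1 / r0

# Synthetic AR(1) series x_t = 0.7 * x_{t-1} + e_t.
rng = random.Random(0)
x = [0.0]
for _ in range(20000):
    x.append(0.7 * x[-1] + rng.gauss(0.0, 1.0))

phi_hat = ar1_coefficient(x)
```

Higher-order AR and full ARMA fits generalize this to a small linear system and a numerical optimization, respectively, with the fitted coefficients defining the smooth spectral-density model.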
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Jerome, Joseph; Osher, Stanley
1989-01-01
A micron n+-n-n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
A high order spectral algorithm for elastic obstacle scattering in three dimensions
NASA Astrophysics Data System (ADS)
Le Louër, Frédérique
2014-12-01
In this paper we describe a high order spectral algorithm for solving the time-harmonic Navier equations in the exterior of a bounded obstacle in three space dimensions, with Dirichlet or Neumann boundary conditions. Our approach is based on combined-field boundary integral equation (CFIE) reformulations of the Navier equations. We extend the spectral method developed by Ganesh and Hawkins [20] - for solving second kind boundary integral equations in electromagnetism - to linear elasticity for solving CFIEs that commonly involve integral operators with a strongly singular or hypersingular kernel. The numerical scheme applies to boundaries which are globally parametrised by spherical coordinates. The algorithm has the interesting feature that it leads to linear systems with substantially fewer unknowns than other existing fast methods. The computational performances of the proposed spectral algorithm are demonstrated using numerical examples for a variety of three-dimensional convex and non-convex smooth obstacles.
Wang, C. L.
2016-05-15
Three high-resolution positioning methods based on the FluoroBancroft linear-algebraic method [S. B. Andersson, Opt. Express 16, 18714 (2008)] are proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function, the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. After taking the super-Poissonian photon noise into account, the proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. These improvements will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation
NASA Astrophysics Data System (ADS)
Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei
2016-11-01
Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of thermal imagers, especially for small target detection and tracking, and is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared data in comparison to several previously published methods. This algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
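The temporal high-pass mechanism can be sketched per pixel: subtract a slowly adapting low-pass estimate of the fixed-pattern offset, and freeze the update when the deviation looks like scene motion. This is a generic sketch of the idea, not the THP&GM algorithm itself; the alpha and threshold values are invented:

```python
def thp_nuc(frames, alpha=0.05, threshold=20.0):
    """Temporal high-pass nonuniformity correction sketch: each pixel keeps
    a slowly adapting low-pass estimate of its fixed-pattern offset, the
    output is frame minus that estimate, and the update is frozen when the
    deviation is large (likely scene motion), which limits ghosting."""
    offset = [[0.0] * len(frames[0][0]) for _ in frames[0]]
    corrected = []
    for frame in frames:
        out = []
        for y, row in enumerate(frame):
            out_row = []
            for x, v in enumerate(row):
                diff = v - offset[y][x]
                if abs(diff) < threshold:          # adaptive gate
                    offset[y][x] += alpha * diff   # low-pass update
                out_row.append(diff)               # high-pass output
            out.append(out_row)
        corrected.append(out)
    return corrected

# 1x2 image over 200 frames: pixel 0 carries a constant offset of 5.
frames = [[[5.0, 0.0]] for _ in range(200)]
out = thp_nuc(frames)
```

The per-pixel state (one accumulator and one comparison) is what makes this family of corrections cheap to pipeline in FPGA fabric.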
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighted potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach was so far not developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
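The partitioning idea, splitting the spectrum where the signal drops to baseline and then modeling each fragment separately, can be sketched as follows; the moment-based single-Gaussian fit stands in for the per-fragment EM decomposition, and the toy spectrum is invented:

```python
def partition(signal, floor=1e-3):
    """Split a spectrum into fragments at runs of near-baseline intensity."""
    fragments, current = [], []
    for i, v in enumerate(signal):
        if v > floor:
            current.append(i)
        elif current:
            fragments.append((current[0], current[-1]))
            current = []
    if current:
        fragments.append((current[0], current[-1]))
    return fragments

def fragment_gaussian(mz, intensity, lo, hi):
    """Moment-based Gaussian for one fragment (stands in for the
    per-fragment mixture decomposition used on multi-peak fragments)."""
    w = sum(intensity[lo:hi + 1])
    mean = sum(m * v for m, v in zip(mz[lo:hi + 1], intensity[lo:hi + 1])) / w
    var = sum(v * (m - mean) ** 2
              for m, v in zip(mz[lo:hi + 1], intensity[lo:hi + 1])) / w
    return {"weight": w, "mean": mean, "sigma": var ** 0.5}

# Invented toy spectrum with two well-separated peaks.
mz = [float(i) for i in range(20)]
intensity = [0, 0, 0, 1, 4, 6, 4, 1, 0, 0, 0, 0, 1, 3, 5, 3, 1, 0, 0, 0]
frags = partition(intensity)
models = [fragment_gaussian(mz, intensity, lo, hi) for lo, hi in frags]
```

Because each fragment is fitted independently, the expensive mixture decomposition parallelizes trivially across fragments before the per-fragment parameters are aggregated.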
Validation of Map Matching Algorithms using High Precision Positioning with GPS
NASA Astrophysics Data System (ADS)
Quddus, Mohammed A.; Noland, Robert B.; Ochieng, Washington Y.
2005-05-01
Map Matching (MM) algorithms are usually employed for a range of transport telematics applications to correctly identify the physical location of a vehicle travelling on a road network. Two essential components for MM algorithms are (1) navigation sensors such as the Global Positioning System (GPS) and dead reckoning (DR), among others, to estimate the position of the vehicle, and (2) a digital base map for spatial referencing of the vehicle location. Previous research by the authors (Quddus et al., 2003; Ochieng et al., 2003) has developed improved MM algorithms that take account of the vehicle speed and the error sources associated with the navigation sensors and the digital map data previously ignored in conventional MM approaches. However, no validation study assessing the performance of MM algorithms has been presented in the literature. This paper describes a generic validation strategy and results for the MM algorithm previously developed in Ochieng et al. (2003). The validation technique is based on a higher accuracy reference (truth) of the vehicle trajectory as determined by high precision positioning achieved by the carrier-phase observable from GPS. The results show that the vehicle positions determined from the MM results are within 6 m of the true positions. The results also demonstrate the importance of the quality of the digital map data to the map matching process.
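The geometric core of map matching, snapping a positioning fix to the nearest candidate road segment, is easy to sketch; real MM algorithms, including the one validated here, add heading, speed, and topological constraints on top of this, and the coordinates below are invented:

```python
def snap_to_segment(p, a, b):
    """Orthogonally project point p onto segment a-b; return (point, distance)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                  # clamp onto the segment
    qx, qy = ax + t * dx, ay + t * dy
    return (qx, qy), ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5

def map_match(p, segments):
    """Snap a positioning fix to the closest candidate road segment."""
    return min((snap_to_segment(p, a, b) for a, b in segments),
               key=lambda r: r[1])

# Invented fix between two parallel east-west road links.
snapped, dist = map_match((1.0, 0.5),
                          [((0.0, 0.0), (2.0, 0.0)), ((0.0, 2.0), (2.0, 2.0))])
```

Validation of the full algorithm then amounts to comparing such snapped positions against a carrier-phase GPS reference trajectory.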
Highly active and selective endopeptidases with programmed substrate specificities
Varadarajan, Navin; Rodriguez, Sarah; Hwang, Bum-Yeol; Georgiou, George; Iverson, Brent L
2009-01-01
A family of engineered endopeptidases has been created that is capable of cleaving a diverse array of peptide sequences with high selectivity and catalytic efficiency (kcat/KM > 10⁴ M⁻¹ s⁻¹). By screening libraries with a selection-counterselection substrate method, protease variants were programmed to recognize amino acids having altered charge, size and hydrophobicity properties adjacent to the scissile bond of the substrate, including Glu↓Arg, a specificity that to our knowledge has not been observed among natural proteases. Members of this artificial protease family resulted from a relatively small number of amino acid substitutions that (at least in one case) proved to be epistatic. PMID:18391948
The EM algorithm applied to the search of high energy neutrino sources
NASA Astrophysics Data System (ADS)
Aguilar, J. A.; Hernández-Rey, J. J.
The detection of astrophysical sources of high energy neutrinos is one of the most interesting quests in modern astrophysics. Unlike gamma-ray and X-ray observations, the low number of signal events expected in high energy neutrino telescopes significantly constrains the discovery probability of the sources. New algorithms to disentangle clusters of few events from the background are required. In this contribution, we explore the potential of the Expectation-Maximization (EM) algorithm for the search of point-like sources with a generic kilometre-scale neutrino telescope located in the Mediterranean Sea. The EM algorithm is widely used in clustering analysis. This method can also be applied to the search for nearby ultra-high energy cosmic ray sources from ground detection infrastructures. The complexity arising from the low statistics is described, and the results are compared to the well-known binning technique applied in this kind of experiment and first developed by the MACRO collaboration.
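A one-dimensional caricature of the approach: events are modelled as a uniform background plus a single Gaussian source, and EM alternates between computing responsibilities and updating the source parameters. This is a generic EM sketch on invented data, not the telescope analysis itself:

```python
import math, random

def em_point_source(events, lo, hi, iters=200):
    """EM fit of a two-component mixture: a uniform background over
    [lo, hi] plus one Gaussian 'source' cluster."""
    norm = lambda x, mu, s: (math.exp(-0.5 * ((x - mu) / s) ** 2)
                             / (s * math.sqrt(2 * math.pi)))
    w, sigma = 0.5, (hi - lo) / 10
    mu = sorted(events)[len(events) // 2]      # crude start at the median
    bg = 1.0 / (hi - lo)                       # uniform background density
    for _ in range(iters):
        # E-step: responsibility of the source component for each event.
        resp = [w * norm(x, mu, sigma) / (w * norm(x, mu, sigma) + (1 - w) * bg)
                for x in events]
        # M-step: re-estimate fraction, centre and width of the source.
        s = sum(resp)
        w = s / len(events)
        mu = sum(r * x for r, x in zip(resp, events)) / s
        sigma = max(1e-3, math.sqrt(
            sum(r * (x - mu) ** 2 for r, x in zip(resp, events)) / s))
    return w, mu, sigma

# Invented sky strip: 1000 background events plus 300 from a source at 4.0.
rng = random.Random(1)
events = ([rng.uniform(0.0, 10.0) for _ in range(1000)]
          + [rng.gauss(4.0, 0.2) for _ in range(300)])
w, mu, sigma = em_point_source(events, 0.0, 10.0)
```

The fitted signal fraction plays the role of a test statistic: on the sky it would be compared against its background-only distribution, which is where EM competes with the binning technique.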
Highly specific urine-based marker of bladder cancer.
Van Le, Thu-Suong; Miller, Raymond; Barder, Timothy; Babjuk, Marko; Potter, Douglas M; Getzenberg, Robert H
2005-12-01
Bladder cancer represents a major health problem throughout the world, but advances in tumor biomarker development are revolutionizing how physicians diagnose the disease. We previously used an indirect immunoassay to demonstrate that the bladder cancer specific biomarker, BLCA-4, is present in urine samples from patients with bladder cancer, but not in samples from healthy individuals. In this study, a sandwich immunoassay was used to measure BLCA-4 in urine samples from patient populations with various urologic conditions and healthy individuals. Urine was collected from healthy individuals and from patients with bladder cancer, benign urologic conditions, or prostate cancer. BLCA-4 levels were evaluated by a sandwich immunoassay using two antibodies directed against distinct epitopes on BLCA-4. Using a prospectively determined cutoff of an absorbance unit (OD) of 0.04, 67 of the 75 samples from patients with bladder cancer were positive for BLCA-4, resulting in an assay sensitivity of 89%. Also, 62 of the 65 samples from individuals without bladder cancer were negative for BLCA-4, resulting in an assay specificity of 95%. The high sensitivity and specificity of the sandwich BLCA-4 immunoassay may allow for earlier detection and treatment of disease, thus greatly improving patient care.
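The reported sensitivity and specificity follow directly from the counts given in the abstract; a quick check:

```python
# Counts reported in the abstract: 67 of 75 bladder-cancer samples tested
# positive for BLCA-4, and 62 of 65 non-cancer samples tested negative.
tp, n_cancer = 67, 75
tn, n_non_cancer = 62, 65

sensitivity = tp / n_cancer        # true-positive rate among cancer patients
specificity = tn / n_non_cancer    # true-negative rate among controls

print(round(100 * sensitivity), round(100 * specificity))
```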
Design optimization of a high specific speed Francis turbine runner
NASA Astrophysics Data System (ADS)
Enomoto, Y.; Kurosawa, S.; Kawajiri, H.
2012-11-01
The Francis turbine is used in many hydroelectric power stations. This paper presents the development of the hydraulic performance of a high specific speed Francis turbine runner. In order to improve turbine efficiency throughout a wide operating range, a new runner design method, which combines the latest Computational Fluid Dynamics (CFD) and a multi-objective optimization method with an existing design system, was applied in this study. The validity of the new design system was evaluated by model performance tests, which confirmed that the optimized runner achieved higher efficiency than the originally designed runner. Besides the runner optimization, the instability vibration which occurred at high part-load operating conditions was investigated by model tests and gas-liquid two-phase flow analysis. These showed that the instability vibration was caused by an oval-cross-section whirl arising from recirculation flow near the runner cone wall.
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms in order to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work resulted in a hardware architecture, based on the best low complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs, and 2 DSP48Es.
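For readers unfamiliar with soft demapping, the classic low-complexity baseline the abstract alludes to is the max-log approximation: each per-bit LLR is the difference of squared distances to the nearest constellation points whose bit is 1 and 0. The sketch below uses one illustrative Gray mapping for 16-QAM (standards define their own mappings, and the paper's evaluated algorithms may differ):

```python
from itertools import product

# One common Gray mapping for 16-QAM (illustrative; each standard fixes its own):
# two Gray-coded bits select the I level, two select the Q level.
LEVELS = {0b00: -3, 0b01: -1, 0b11: 1, 0b10: 3}
CONST = {(bi << 2) | bq: complex(LEVELS[bi], LEVELS[bq])
         for bi, bq in product(LEVELS, LEVELS)}

def max_log_llr(y, noise_var):
    """Max-log LLRs for one received symbol: for each bit, the difference of the
    squared distances to the nearest constellation points with that bit = 1 and
    bit = 0. Positive LLR favours bit = 0 (the sign convention is a choice)."""
    llrs = []
    for bit in range(4):                                  # MSB first
        d0 = min(abs(y - s) ** 2 for b, s in CONST.items()
                 if not (b >> (3 - bit)) & 1)
        d1 = min(abs(y - s) ** 2 for b, s in CONST.items()
                 if (b >> (3 - bit)) & 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# A symbol transmitted as bits 0110 (I bits 01 -> -1, Q bits 10 -> +3),
# received with a small noise offset:
llrs = max_log_llr(-1 + 3j + (0.05 - 0.02j), noise_var=0.5)
```

In hardware, the distance searches collapse to piecewise-linear functions of the I and Q components for Gray mappings, which is what makes such demappers cheap at high symbol rates.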
Flexible and Lightweight Fuel Cell with High Specific Power Density.
Ning, Fandi; He, Xudong; Shen, Yangbin; Jin, Hehua; Li, Qingwen; Li, Da; Li, Shuping; Zhan, Yulu; Du, Ying; Jiang, Jingjing; Yang, Hui; Zhou, Xiaochun
2017-06-27
Flexible devices have been attracting great attention recently due to their numerous advantages, but the energy densities of current energy sources are still not high enough to support flexible devices for a satisfactory length of time. Although proton exchange membrane fuel cells (PEMFCs) have a high energy density, traditional PEMFCs are usually too heavy, rigid, and bulky to be used in flexible devices. In this research, we developed a light, flexible air-breathing PEMFC by using a new PEMFC design and a flexible composite electrode. The flexible air-breathing PEMFC with a 1 × 1 cm(2) working area can be as light as 0.065 g and as thin as 0.22 mm. This new PEMFC exhibits a specific volumetric power density as high as 5190 W L(-1), which is much higher than that of traditional (air-breathing) PEMFCs. Notably, the flexible PEMFC retains 89.1% of its original performance after being bent 600 times, and it retains its original performance after being dropped five times from a height of 30 m. Moreover, the research has demonstrated that, when stacked, the flexible PEMFCs are also useful in mobile applications such as mobile phones. Therefore, our research shows that PEMFCs can be made light, flexible, and suitable for applications in flexible devices. These innovative flexible PEMFCs may notably advance progress in the PEMFC field, because flexible PEMFCs can achieve high specific power density with small size, small volume, low weight, and much lower cost; they are also much easier to mass produce.
Efficiency Analysis of a High-Specific Impulse Hall Thruster
NASA Technical Reports Server (NTRS)
Jacobson, David (Technical Monitor); Hofer, Richard R.; Gallimore, Alec D.
2004-01-01
Performance and plasma measurements of the high-specific-impulse NASA-173Mv2 Hall thruster were analyzed using a phenomenological performance model that accounts for a partially ionized plasma containing multiply-charged ions. Between discharge voltages of 300 and 900 V, the results showed that although the net decrease of efficiency due to multiply-charged ions was only 1.5 to 3.0 percent, the effects of multiply-charged ions on the ion and electron currents could not be neglected. Between 300 and 900 V, the increase of the discharge current was attributed to the increasing fraction of multiply-charged ions, while the maximum deviation of the electron current from its average value was only +5/-14 percent. These findings revealed how efficient operation at high specific impulse was enabled through the regulation of the electron current with the applied magnetic field. Between 300 and 900 V, the voltage utilization ranged from 89 to 97 percent, the mass utilization from 86 to 90 percent, and the current utilization from 77 to 81 percent. Therefore, the anode efficiency was largely determined by the current utilization. The electron Hall parameter was nearly constant with voltage, decreasing from an average of 210 at 300 V to an average of 160 between 400 and 900 V. These results confirmed our claim that efficient operation can be achieved only over a limited range of Hall parameters.
Zhang, Jingjing; Friberg, Ida M; Kift-Morgan, Ann; Parekh, Gita; Morgan, Matt P; Liuzzi, Anna Rita; Lin, Chan-Yu; Donovan, Kieron L; Colmont, Chantal S; Morgan, Peter H; Davis, Paul; Weeks, Ian; Fraser, Donald J; Topley, Nicholas; Eberl, Matthias
2017-03-16
The immune system has evolved to sense invading pathogens, control infection, and restore tissue integrity. Despite symptomatic variability in patients, unequivocal evidence that an individual's immune system distinguishes between different organisms and mounts an appropriate response is lacking. Here, we used a systematic approach to characterize responses to microbiologically well-defined infection in a total of 83 peritoneal dialysis patients on the day of presentation with acute peritonitis. A broad range of cellular and soluble parameters was determined in peritoneal effluents, covering the majority of local immune cells, inflammatory and regulatory cytokines and chemokines, as well as tissue damage-related factors. Our analyses, utilizing machine-learning algorithms, demonstrate that different groups of bacteria induce qualitatively distinct local immune fingerprints, with specific biomarker signatures associated with Gram-negative and Gram-positive organisms, and with culture-negative episodes of unclear etiology. Moreover, within the Gram-positive group, unique immune biomarker combinations identified streptococcal and non-streptococcal species, including coagulase-negative Staphylococcus spp. These findings have diagnostic and prognostic implications by informing patient management and treatment choice at the point of care. Thus, our data establish the power of non-linear mathematical models to analyze complex biomedical datasets and highlight key pathways involved in pathogen-specific immune responses.
Lv, Yi; Zuo, Zhixiang; Xu, Xiao
2013-10-01
Pre-mRNA splicing is a crucial step for genetic regulation and accounts largely for downstream translational diversity. Biological research is currently characterized by advances in functional genomics, and understanding of the pre-mRNA splicing process has thus become a major portal for biologists seeking insights into complex gene regulatory mechanisms. The intranuclear alternative splicing process can form a variety of genomic transcripts that modulate the growth and development of an organism, particularly in the immune and neural systems. In the current study, we investigated and identified alternative splicing transcripts at different stages of embryonic mouse brain morphogenesis using a subtractive cross-screening algorithm. A total of 195 candidate transcripts were found during organogenesis, 1629 at the fetal stage, 116 in juveniles, and 148 in adulthood. To document our findings, we developed a database named DMBAS, which can be accessed through the link: http://173.234.48.5/DMBAS. We further investigated the alternative splicing products obtained in our experiment and noted the existence of chromosome preference between prenatal and postnatal transcripts. Additionally, the distribution of splicing sites and the splicing types were found to have distinct genomic features at varying stages of brain development. The majority of identified alternative splices (72.3%) at the fetal stage were confirmed later using separate RNA-seq data sets. This study is a comprehensive profiling of alternative splicing transcripts of mouse brain morphogenesis using an advanced computational algorithm. A series of developmental-stage-specific transcripts, as well as their splicing sites and chromosome preferences, were revealed in the current study. Our findings and the related online database form a solid foundation for studies of broader biological significance and pave the way for future investigations into relevant human brain diseases.
Tighe, Patrick J; Harle, Christopher A; Hurley, Robert W; Aytug, Haldun; Boezaart, Andre P; Fillingim, Roger B
2015-07-01
Given their ability to process highly dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8,071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor (k-NN), with logistic regression included for baseline comparison. In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy with an area under the receiver-operating curve (ROC) of 0.704. Next, the gradient-boosted decision tree had an ROC of 0.665 and the k-NN algorithm had an ROC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an ROC of 0.727. Logistic regression had a lower ROC of 0.5 for predicting pain outcomes on POD 1 and 3. Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. Wiley Periodicals, Inc.
A high-precision digital integrator based on the Romberg algorithm.
Li, Zhen-Hua; Hu, Wei-Zhong
2017-04-01
Integrators are widely used for measurement in power systems and are a key technology in signal processing. For digital integrators based on the traditional Newton-Cotes algorithm, the high-frequency response of the low-order Cotes formulas is usually poor, while the transfer-function design introduced by the high-order Cotes formulas is too complex. In this paper, we analyze the error between the composite Newton-Cotes algorithm and the ideal transfer function. The signal is sampled once at the normal sampling frequency and once at half that frequency, and the two resulting estimates are weighted according to the Romberg algorithm. Thus, the precision of the digital integrator is improved and the design difficulty is reduced compared with algorithms of the same order. The simulation and test results show that the proposed digital integrator has better transient and steady-state performance, as well as a lower error, which is less than 0.01%.
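The weighting of a full-rate and a half-rate estimate is the first step of the Romberg scheme (Richardson extrapolation). A minimal numerical sketch of the principle, on a batch integral rather than the paper's streaming transfer-function design:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n uniform intervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def romberg_combined(f, a, b, n):
    """Weight a full-rate and a half-rate estimate as in one Romberg step:
    the O(h^2) error terms cancel, leaving an O(h^4) result without designing
    a higher-order Cotes filter."""
    t_full = trapezoid(f, a, b, n)        # "normal" sampling frequency
    t_half = trapezoid(f, a, b, n // 2)   # half the sampling frequency
    return (4.0 * t_full - t_half) / 3.0

exact = 1.0 - np.cos(1.0)                 # integral of sin on [0, 1]
err_trap = abs(trapezoid(np.sin, 0.0, 1.0, 64) - exact)
err_romb = abs(romberg_combined(np.sin, 0.0, 1.0, 64) - exact)
```

The weights (4, -1)/3 are fixed by the order of the trapezoid error, which is why the same idea carries over to a low-order digital integrator: precision improves without the complexity of a high-order transfer function.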
Postley, John E; Luo, Yanting; Wong, Nathan D; Gardin, Julius M
2015-11-15
Atherosclerotic cardiovascular disease (ASCVD) events are the leading cause of death in the United States and globally. Traditional global risk algorithms may miss 50% of patients who experience ASCVD events. Noninvasive ultrasound evaluation of the carotid and femoral arteries can identify subjects at high risk for ASCVD events. We examined the ability of different global risk algorithms to identify subjects with femoral and/or carotid plaques found by ultrasound. The study population consisted of 1,464 asymptomatic adults (39.8% women) aged 23 to 87 years without previous evidence of ASCVD who had ultrasound evaluation of the carotid and femoral arteries. Three ASCVD risk algorithms (10-year Framingham Risk Score [FRS], 30-year FRS, and lifetime risk) were compared for the 939 subjects who met the algorithm age criteria. The frequency of femoral plaque as the only plaque was 18.3% in the total group and 14.8% in the risk algorithm groups (n = 939) without a significant difference between genders in frequency of femoral plaque as the only plaque. Those identified as high risk by the lifetime risk algorithm included the most men and women who had plaques either femoral or carotid (59% and 55%) but had lower specificity because the proportion of subjects who actually had plaques in the high-risk group was lower (50% and 35%) than in those at high risk defined by the FRS algorithms. In conclusion, ultrasound evaluation of the carotid and femoral arteries can identify subjects at risk of ASCVD events missed by traditional risk-predicting algorithms. The large proportion of subjects with femoral plaque only supports the use of including both femoral and carotid arteries in ultrasound evaluation. Copyright © 2015 Elsevier Inc. All rights reserved.
High-resolution climate data over conterminous US using random forest algorithm
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Nemani, R. R.; Wang, W.
2014-12-01
We developed a new methodology to create high-resolution precipitation data using the random forest algorithm. Two approaches are commonly used: physical downscaling from GCM data using a regional climate model, and interpolation from ground observation data. Physical downscaling can be applied only to a small region because it is computationally expensive and complex to deploy. On the other hand, interpolation schemes from ground observations do not consider physical processes. In this study, we utilized the random forest algorithm to integrate atmospheric reanalysis data, satellite data, topography data, and ground observation data. We first considered situations where precipitation is the same across the domain, largely dominated by storm-like systems, and then picked several points to train the random forest algorithm. The random forest algorithm estimates out-of-bag errors spatially and produces the relative importance of each input variable. This methodology has the following advantages. (1) It can ingest any spatial dataset to improve downscaling; even non-precipitation datasets, such as satellite cloud cover data, radar reflectivity images, or modeled convective available potential energy, can be used. (2) The methodology is purely statistical, so physical assumptions are not required, whereas most interpolation schemes assume an empirical relationship between precipitation and elevation for orographic precipitation. (3) Low-quality values in the ingested data do not cause critical bias in the results because of the ensemble nature of random forests, so users need not pay special attention to quality control of the input data compared with other interpolation methodologies. (4) The same methodology can be applied to produce other high-resolution climate datasets, such as wind and cloud cover, variables that are usually hard to interpolate with conventional algorithms. In conclusion, the proposed methodology can produce reasonable high-resolution climate data over the conterminous US.
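The two diagnostics the abstract highlights — out-of-bag error and per-variable importance — come for free from a random forest fit. A toy sketch with made-up predictors (the variable names stand in for the real reanalysis, satellite, and topography inputs):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Toy stand-in for the predictor stack: elevation, reanalysis precipitation,
# and satellite cloud fraction (names are illustrative, not the actual data).
n = 3000
elev = rng.uniform(0.0, 3000.0, n)
reanalysis = rng.gamma(2.0, 2.0, n)
cloud = rng.uniform(0.0, 1.0, n)
precip = reanalysis * (1.0 + elev / 3000.0) * cloud + rng.normal(0.0, 0.3, n)

X = np.column_stack([elev, reanalysis, cloud])
rf = RandomForestRegressor(n_estimators=200, oob_score=True,
                           random_state=0).fit(X, precip)
oob_r2 = rf.oob_score_                  # out-of-bag R^2: error estimate, no hold-out
importances = rf.feature_importances_   # relative importance of each input variable
```

Because each tree is trained on a bootstrap sample, the out-of-bag predictions give an internal error estimate at the training points, which is what allows the error to be mapped spatially without a separate validation network.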
Specific Heat of High Temperature Superconductors: a Review
NASA Astrophysics Data System (ADS)
Junod, Alain
The following sections are included: * INTRODUCTION * EXPERIMENTAL * LATTICE SPECIFIC HEAT * NORMAL-STATE ELECTRONIC SPECIFIC HEAT * SUPERCONDUCTING STATE * BEHAVIOR AT T→0 * CONCLUSION * ACKNOWLEDGEMENTS * APPENDIX * REFERENCES
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality; that is, their computational costs become intractable for problems involving a large number of uncertain parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via polynomial chaos expansion. The first algorithm, intended for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
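The second algorithm's control-variate idea can be demonstrated in isolation. The sketch below is a toy (the models, the latent direction, and the dimension are all made up): a nominally 50-dimensional QoI that really varies along one direction, with a cheap reduced model whose mean is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
# A nominally 50-dimensional QoI that really varies along one latent direction
# (a toy sufficient-dimension-reduction setting; all functions are made up).
d, n = 50, 10000
beta = np.ones(d) / np.sqrt(d)

def f(X):                         # "expensive" full model
    t = X @ beta
    return np.exp(0.5 * t) + 0.05 * np.sin(X[:, 0])

def g(X):                         # cheap reduced model along the latent direction
    return np.exp(0.5 * (X @ beta))

X = rng.normal(size=(n, d))
fx, gx = f(X), g(X)
mean_g = np.exp(0.125)            # E[exp(0.5 Z)] for Z ~ N(0, 1), known exactly

plain_mc = fx.mean()
c = np.cov(fx, gx)[0, 1] / gx.var()
control_variate = plain_mc - c * (gx.mean() - mean_g)

# Residual variance shows how much MC error the reduced model removes.
var_ratio = np.var(fx - c * gx) / np.var(fx)
```

The MC estimator still converges at O(n^(-1/2)), but its constant shrinks with the residual variance: the better the reduced model tracks the QoI, the larger the accuracy gain, matching the abstract's description.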
A highly specific coding system for structural chromosomal alterations.
Martínez-Frías, M L; Martínez-Fernández, M L
2013-04-01
The Spanish Collaborative Study of Congenital Malformations (ECEMC, from the name in Spanish) has developed a very simple and highly specific coding system for structural chromosomal alterations. Such a coding system would be of value at present due to the dramatic increase in the diagnosis of submicroscopic chromosomal deletions and duplications through molecular techniques. In summary, our new coding system allows the characterization of: (a) the type of structural anomaly; (b) the chromosome affected; (c) if the alteration affects the short or/and the long arm, and (d) if it is a non-pure dicentric, a non-pure isochromosome, or if it affects several chromosomes. We show the distribution of 276 newborn patients with these types of chromosomal alterations using their corresponding codes according to our system. We consider that our approach may be useful not only for other registries, but also for laboratories performing these studies to store their results on case series. Therefore, the aim of this article is to describe this coding system and to offer the opportunity for this coding to be applied by others. Moreover, as this is a SYSTEM, rather than a fixed code, it can be implemented with the necessary modifications to include the specific objectives of each program. Copyright © 2013 Wiley Periodicals, Inc.
A Very-High-Specific-Impulse Relativistic Laser Thruster
Horisawa, Hideyuki; Kimura, Itsuro
2008-04-28
Characteristics of compact laser plasma accelerators utilizing high-power laser and thin-target interactions were reviewed as potential candidates for future spacecraft thrusters capable of generating relativistic plasma beams for interstellar missions. Based on the special theory of relativity, the motion of the relativistic plasma beam exhausted from the thruster was formulated, and relationships between thrust, specific impulse, input power, and momentum coupling coefficient were derived for the relativistic plasma thruster. It was shown that under relativistic conditions the thrust can be extremely large even with a small propellant flow rate. Moreover, for a given input power, the thrust tends to approach the value of the photon rocket under relativistic conditions, regardless of the propellant flow rate.
Schuster, Tibor; Pang, Menglan; Platt, Robert W
2015-09-01
The high-dimensional propensity score algorithm attempts to improve control of confounding in typical treatment effect studies in pharmacoepidemiology and is increasingly being used for the analysis of large administrative databases. Within this multi-step variable selection algorithm, the marginal prevalence of non-zero covariate values is considered to be an indicator for a count variable's potential confounding impact. We investigate the role of the marginal prevalence of confounder variables on potentially caused bias magnitudes when estimating risk ratios in point exposure studies with binary outcomes. We apply the law of total probability in conjunction with an established bias formula to derive and illustrate relative bias boundaries with respect to marginal confounder prevalence. We show that maximum possible bias magnitudes can occur at any marginal prevalence level of a binary confounder variable. In particular, we demonstrate that, in case of rare or very common exposures, low and high prevalent confounder variables can still have large confounding impact on estimated risk ratios. Covariate pre-selection by prevalence may lead to sub-optimal confounder sampling within the high-dimensional propensity score algorithm. While we believe that the high-dimensional propensity score has important benefits in large-scale pharmacoepidemiologic studies, we recommend omitting the prevalence-based empirical identification of candidate covariates. Copyright © 2015 John Wiley & Sons, Ltd.
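The argument can be reproduced numerically with one common form of the bias formula for an unmeasured binary confounder (the abstract cites "an established bias formula"; the exact expression used there may differ). Combining it with the law of total probability for the confounder's marginal prevalence shows that large bias magnitudes are attainable at both low and high marginal prevalence when the exposure is rare:

```python
import numpy as np

def bias_factor(p1, p0, rr_cd):
    """Multiplicative bias in an exposure-outcome risk ratio caused by an
    unmeasured binary confounder (one standard bias formula): rr_cd is the
    confounder-outcome risk ratio, p1/p0 the confounder prevalence among the
    exposed/unexposed."""
    return (p1 * (rr_cd - 1.0) + 1.0) / (p0 * (rr_cd - 1.0) + 1.0)

p_exposed, rr_cd = 0.05, 4.0       # rare exposure, fairly strong confounder
grid = np.linspace(0.01, 0.99, 99)
marginal, magnitude = [], []
for p1 in grid:
    for p0 in grid:
        # Law of total probability: marginal prevalence of the confounder.
        marginal.append(p_exposed * p1 + (1.0 - p_exposed) * p0)
        b = bias_factor(p1, p0, rr_cd)
        magnitude.append(max(b, 1.0 / b))   # bias magnitude, either direction
marginal, magnitude = np.array(marginal), np.array(magnitude)

# Large bias is attainable both for rare and for very common confounders:
worst_rare = magnitude[marginal < 0.10].max()
worst_common = magnitude[marginal > 0.90].max()
```

Under these illustrative numbers both worst cases exceed a three-fold bias, which is the abstract's point: ranking candidate covariates by marginal prevalence alone can discard exactly the confounders that matter.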
A novel robust and efficient algorithm for charge particle tracking in high background flux
NASA Astrophysics Data System (ADS)
Fanelli, C.; Cisbani, E.; Del Dotto, A.
2015-05-01
The high luminosity that will be reached in the new generation of high energy particle and nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10³⁹ cm⁻² s⁻¹. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information (time and charge) is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for fast and efficient association of the hits measured by the GEM detector; (iii) the resolution of the associated hit measurements is further improved through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented, along with a discussion of the promising first results.
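Step (iii) — Kalman filtering followed by Rauch-Tung-Striebel (RTS) smoothing — can be illustrated on a 1-D constant-velocity track with noisy position hits. This is a generic textbook sketch, not the JLab implementation (the real tracker works on GEM detector planes with a full track model):

```python
import numpy as np

rng = np.random.default_rng(0)
# 1-D constant-velocity track with noisy position "hits".
dt, n = 1.0, 50
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: position, velocity
H = np.array([[1.0, 0.0]])                # only position is measured
Q = 1e-4 * np.eye(2)                      # process noise
R = np.array([[0.25]])                    # measurement noise (std 0.5)

truth = np.zeros((n, 2))
truth[0] = [0.0, 0.3]
for k in range(1, n):
    truth[k] = F @ truth[k - 1]
z = truth[:, 0] + rng.normal(0.0, 0.5, n)

# Forward Kalman filter, keeping the predictions for the backward pass.
x, P = np.array([z[0], 0.0]), np.eye(2)
xf, Pf, xp, Pp = [], [], [], []
for k in range(n):
    if k:
        x, P = F @ x, F @ P @ F.T + Q     # predict
    xp.append(x); Pp.append(P)
    S = H @ P @ H.T + R
    K = P @ H.T / S                       # Kalman gain
    x = x + (K * (z[k] - H @ x)).ravel()  # update with hit k
    P = (np.eye(2) - K @ H) @ P
    xf.append(x); Pf.append(P)

# Rauch-Tung-Striebel backward smoother.
xs = list(xf)
for k in range(n - 2, -1, -1):
    C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])

rmse_meas = np.sqrt(np.mean((z - truth[:, 0]) ** 2))
rmse_smooth = np.sqrt(np.mean((np.array([s[0] for s in xs])
                               - truth[:, 0]) ** 2))
```

The smoother runs backward over the filtered estimates, so every hit benefits from both earlier and later measurements, which is why the smoothed residuals are substantially tighter than the raw hit resolution.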
Low-complexity, high-speed, and high-dynamic range time-to-impact algorithm
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-10-01
We present a method suitable for a time-to-impact sensor. Inspired by the seemingly "low" complexity of small insects, we propose a new approach to optical flow estimation that is the key component in time-to-impact estimation. The approach is based on measuring time instead of the apparent motion of points in the image plane. The specific properties of the motion field in the time-to-impact application are used, such as measuring only along a one-dimensional (1-D) line and using simple feature points, which are tracked from frame to frame. The method lends itself readily to be implemented in a parallel processor with an analog front-end. Such a processing concept [near-sensor image processing (NSIP)] was described for the first time in 1983. In this device, an optical sensor array and a low-level processing unit are tightly integrated into a hybrid analog-digital device. The high dynamic range, which is a key feature of NSIP, is used to extract the feature points. The output from the device consists of a few parameters, which will give the time-to-impact as well as possible transversal speed for off-centered viewing. Performance and complexity aspects of the implementation are discussed, indicating that time-to-impact data can be achieved at a rate of 10 kHz with today's technology.
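The geometric identity underlying all such sensors is that time to impact equals the image coordinate of a tracked feature divided by its image-plane velocity; the absolute scene scale cancels. A toy verification with made-up camera numbers (not the NSIP hardware pipeline):

```python
import numpy as np

# Pinhole-camera toy: a feature at lateral offset Y approaches the camera at
# constant speed v; all numbers are illustrative.
f_px, Y = 500.0, 0.2          # focal length [px], lateral offset [m]
Z0, v, dt = 10.0, 2.0, 0.01   # initial distance [m], speed [m/s], frame time [s]
t = np.arange(0.0, 1.0, dt)
Z = Z0 - v * t
x_img = f_px * Y / Z          # 1-D image coordinate of the tracked feature [px]

# Time to impact from image measurements alone: tau = x / (dx/dt).
# f_px, Y, and Z all cancel, which is what makes the scheme sensor-friendly.
tau_est = x_img / np.gradient(x_img, dt)
tau_true = Z / v
err = np.max(np.abs(tau_est - tau_true))
```

Because only the ratio of a coordinate to its rate of change is needed, simple per-pixel timing of feature events, as in the NSIP concept, suffices; no calibrated depth or velocity estimate is required.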
Supercomputer implementation of finite element algorithms for high speed compressible flows
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.
1986-01-01
Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock-capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors with lengths of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed, and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extensions of the vectorization procedure to predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures to realistic problems that require hundreds of thousands of nodes.
Algorithm for Automatic Behavior Quantification of Laboratory Mice Using High-Frame-Rate Videos
NASA Astrophysics Data System (ADS)
Nie, Yuman; Takaki, Takeshi; Ishii, Idaku; Matsuda, Hiroshi
In this paper, we propose an algorithm for automatic behavior quantification in laboratory mice to quantify several model behaviors. The algorithm can detect repetitive motions of the fore- or hind-limbs at several or dozens of hertz, which are too rapid for the naked eye, from high-frame-rate video images. Multiple repetitive motions can always be identified from periodic frame-differential image features in four segmented regions — the head, left side, right side, and tail. Even when a mouse changes its posture and orientation relative to the camera, these features can still be extracted from the shift- and orientation-invariant shape of the mouse silhouette by using the polar coordinate system and adjusting the angle coordinate according to the head and tail positions. The effectiveness of the algorithm is evaluated by analyzing long-term 240-fps videos of four laboratory mice for six typical model behaviors: moving, rearing, immobility, head grooming, left-side scratching, and right-side scratching. The time durations for the model behaviors determined by the algorithm have detection/correction ratios greater than 80% for all the model behaviors. This shows good quantification results for actual animal testing.
A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory
NASA Technical Reports Server (NTRS)
Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl
2007-01-01
This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.
Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems
NASA Technical Reports Server (NTRS)
Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy
2007-01-01
High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10(exp -10) for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs by minimizing the energy in a predefined region in the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.
Parallel technology for numerical modeling of fluid dynamics problems by high-accuracy algorithms
NASA Astrophysics Data System (ADS)
Gorobets, A. V.
2015-04-01
A parallel computation technology for modeling fluid dynamics problems by finite-volume and finite-difference methods of high accuracy is presented. The development of an algorithm, the design of a software implementation, and the creation of parallel programs for computations on large-scale computing systems are considered. The presented parallel technology is based on a multilevel parallel model combining various types of parallelism: shared and distributed memory, and multiple- and single-instruction streams over multiple data streams.
Hasan, Laiq; Al-Ars, Zaid
2009-01-01
In this paper, we present an efficient and high performance linear recursive variable expansion (RVE) implementation of the Smith-Waterman (S-W) algorithm and compare it with a traditional linear systolic array implementation. The results demonstrate that the linear RVE implementation performs up to 2.33 times better than the traditional linear systolic array implementation, at the cost of utilizing 2 times more resources.
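As background to the comparison above, the scoring recurrence of the S-W algorithm can be sketched in a few lines of software. This is a plain reference sketch with illustrative match/mismatch/gap scores, not the paper's RVE or systolic-array hardware mapping:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Compute the optimal local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are floored at zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Each cell depends on its upper, left, and upper-left neighbours; it is exactly this dependency pattern that systolic-array and RVE implementations restructure for parallel hardware.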
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
[Abstract garbled by PDF extraction; only fragments are recoverable. Subject terms: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm. Recoverable fragment: "...smooth in time, the temporal estimation based on previous saved data can give a highly accurate result on a missing part of the solution."]
Nanoporous ultra-high specific surface inorganic fibres
NASA Astrophysics Data System (ADS)
Kanehata, Masaki; Ding, Bin; Shiratori, Seimei
2007-08-01
Nanoporous inorganic (silica) nanofibres with ultra-high specific surface have been fabricated by electrospinning blend solutions of poly(vinyl alcohol) (PVA) and colloidal silica nanoparticles, followed by selective removal of the PVA component. The configurations of the composite and inorganic nanofibres were investigated by changing the average silica particle diameters and the concentrations of colloidal silica particles in the polymer solutions. After the removal of PVA by calcination, the fibre shape of the pure silica particle assembly was maintained. The nanoporous silica fibres were assembled as a porous membrane with a high surface roughness. From the results of Brunauer-Emmett-Teller (BET) measurements, the BET surface area of the inorganic silica nanofibrous membranes increased as the particle diameter decreased. The membrane composed of silica particles with diameters of 15 nm showed the largest BET surface area of 270.3 m2 g-1 and total pore volume of 0.66 cm3 g-1. The physical absorption of methylene blue dye molecules by the nanoporous silica membranes was examined using UV-vis spectrometry. Additionally, the porous silica membranes modified with fluoroalkylsilane showed super-hydrophobicity due to their porous structures.
Plasmoid Thruster for High Specific-Impulse Propulsion
NASA Technical Reports Server (NTRS)
Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael
2007-01-01
A report discusses a new multi-turn, multi-lead design for the first generation PT-1 (Plasmoid Thruster) that produces thrust by expelling plasmas with embedded magnetic fields (plasmoids) at high velocities. This thruster is completely electrodeless, capable of using in-situ resources, and offers efficiencies as high as 70 percent at a specific impulse, I(sub sp), of up to 8,000 s. This unit consists of drive and bias coils wound around a ceramic form, and the capacitor bank and switches are an integral part of the assembly. Multiple thrusters may be ganged to inductively recapture unused energy to boost efficiency and to increase the repetition rate, which, in turn, increases the average thrust of the system. The thruster assembly can use storable propellants such as H2O, ammonia, and NO, among others. Any available propellant gases can be used to produce an I(sub sp) in the range of 2,000 to 8,000 s with a single-stage thruster. These capabilities will allow the transport of greater payloads to outer planets, especially in the case of an I(sub sp) greater than 6,000 s.
NASA Astrophysics Data System (ADS)
Schwenk, Kurt; Huber, Felix
2015-10-01
Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 × 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
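For reference, the classical two-pass CCL scheme that the FPGA module modifies can be sketched in software, using a union-find table to resolve label equivalences. This is a minimal 4-connectivity sketch, not the paper's memory-optimised pipelined variant:

```python
def label_components(img):
    """Two-pass 4-connected component labeling; img is a 2-D list of 0/1."""
    h, w = len(img), len(img[0])
    parent = [0]  # union-find table; label 0 is background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    labels = [[0] * w for _ in range(h)]
    # First pass: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                ru, rl = find(up), find(left)
                labels[y][x] = min(ru, rl)
                if ru != rl:
                    parent[max(ru, rl)] = min(ru, rl)  # merge equivalence classes
            elif up or left:
                labels[y][x] = up or left
            else:
                parent.append(len(parent))  # brand-new provisional label
                labels[y][x] = len(parent) - 1
    # Second pass: replace provisional labels with class representatives.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

The data-dependent merging in the first pass is what makes CCL hard to parallelise and motivates the stop-and-go pipeline described above.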
Design and algorithm research of high precision airborne infrared touch screen
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan
2016-10-01
Infrared touch screens suffer from low precision, touch jitter, and a sharp drop in accuracy when emitting or receiving tubes fail. A high precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, while the impeded state is recorded as 1. Then an oblique scan is used, in which the light of one emitting tube is received by five receiving tubes, and the impeded-state information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated by arithmetic averaging. The extended-axis positioning algorithm retains high precision even when individual infrared tubes fail, with only a slight loss of accuracy. Experimental results show that in 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. The algorithm based on the extended axis therefore offers high precision, little impact from the failure of individual infrared tubes, and ease of use.
MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping
2014-10-01
Migration through resolution cells (MTRC) arises in high-resolution inverse synthetic aperture radar (ISAR) imaging. A MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. Firstly, for a rigid-body target in stable flight, initial values of the rotation angle and center of the target are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution under the minimum entropy criterion. The procedure starts with the estimated initial rotation angle and center, and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches, one along the rotation angle and the other along the rotation center, each realized using the golden section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied to compensate the MTRC using the optimized rotation angle and center. After MTRC compensation, the ISAR image can be best focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
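The golden section search used for each 1-D parameter search is a standard bracketing method for unimodal functions; a minimal generic sketch (not specific to the ISAR entropy cost) is:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimise a unimodal function f on [a, b] by golden section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c           # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d           # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the constant golden ratio, so the cost function (here, image entropy after PFA compensation) needs only one new evaluation per step.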
New temporal high-pass filter non-uniformity correction algorithm based on change detection
NASA Astrophysics Data System (ADS)
Li, Hang; Zhou, Xiao; Hu, Ruo-lan; Jia, Jun-tao; Zhang, Gui-lin
2009-07-01
Spatial non-uniformity in the photo-response of the detectors in focal-plane array (FPA) infrared imaging systems limits infrared applications. In this paper, we improve the temporal high-pass filter method for complex real-scene sequences. First, a one-point non-uniformity correction is applied, calibrating the FPA at distinct temperatures using flat-field data generated from a black-body radiation source. This step is simple to implement and coarsely compensates for the spatial non-uniformity; it also reduces the convergence time of the temporal high-pass algorithm and alleviates ghosting. Then the pixels of the images are classified into two categories, changed pixels and still pixels, which are handled with different strategies: for changed pixels, the offset is estimated with the temporal high-pass filter algorithm; for still pixels, the offset is estimated iteratively. This strategy reduces scene vanishing when the scene is still, and ghosting when a target moves quickly after being still. Experiments on real infrared image sequences show that the method is very promising.
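A common minimal realization of a temporal high-pass offset estimate tracks each pixel's offset as an exponential moving average and subtracts it. The sketch below shows only this core idea; the paper's one-point calibration and changed/still pixel classification are omitted, and the parameter `alpha` is illustrative:

```python
def temporal_highpass_nuc(frames, alpha=0.05):
    """Temporal high-pass non-uniformity correction sketch: each pixel's
    fixed-pattern offset is tracked by a slow exponential moving average,
    and the corrected output is the input minus that estimate."""
    h, w = len(frames[0]), len(frames[0][0])
    offset = [[0.0] * w for _ in range(h)]
    out = []
    for frame in frames:
        corrected = []
        for y, row in enumerate(frame):
            crow = []
            for x, v in enumerate(row):
                offset[y][x] += alpha * (v - offset[y][x])  # slow low-pass
                crow.append(v - offset[y][x])               # high-pass residual
            corrected.append(crow)
        out.append(corrected)
    return out
```

A static scene is slowly absorbed into the offset estimate, which is exactly the scene-vanishing/ghosting behaviour that the pixel classification above is designed to suppress.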
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture as many as 1000 frames per second. To process the captured images on the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel onto the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system performance in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
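Zero-mean NCC template matching can be sketched compactly. A 1-D version is shown for brevity, with hypothetical signals and without the subpixel refinement or the paper's modified local search:

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    n = len(patch)
    mp, mt = sum(patch) / n, sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

def best_match(signal, template):
    """Return the offset in `signal` (a 1-D list) where NCC with template peaks."""
    w = len(template)
    scores = [ncc(signal[i:i + w], template) for i in range(len(signal) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Tracking the peak offset from frame to frame yields the displacement signal; a local search around the previous match, as proposed above, avoids scoring every candidate offset.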
[Lossless compression of high sampling rate ECG data based on BW algorithm].
Tian, Feng; Rao, Nini; Cheng, Yu; Xu, Shanglei
2008-08-01
Research on ECG data compression has mainly focused on data sampled at low rates. A BW-based lossless compression algorithm for high-sampling-rate ECG data is proposed in this paper. We first apply a difference operation to the original ECG data and represent part of each 16-bit differential value in 8-bit binary form. The differential results are then coded with the move-to-front coding method so that identical characters are clustered in a certain area. Finally, a high compression ratio is obtained by further applying arithmetic coding. Our experimental results indicate that this is an efficient lossless compression method suitable for body-surface ECG data as well as for heart ECG data; the average compression ratios reach 3.547 and 3.608, respectively. Compared with current ECG compression algorithms, our algorithm achieves a much better compression ratio, especially when applied to high-sampling-rate ECG data.
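The move-to-front stage can be sketched directly; the code below shows only this stage (the difference operation and arithmetic coding stages of the pipeline are omitted):

```python
def mtf_encode(data):
    """Move-to-front coding over byte values: a recently seen value gets a
    small index, so runs of similar differential values cluster near zero,
    which makes the subsequent arithmetic coder far more effective."""
    table = list(range(256))
    out = []
    for b in data:
        idx = table.index(b)
        out.append(idx)
        table.insert(0, table.pop(idx))  # move the value to the front
    return out
```

On differenced ECG data, repeated small values map to long runs of zeros, which is exactly the skewed symbol distribution that arithmetic coding compresses well.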
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
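Richardson extrapolation, used above to isolate truncation error, combines two approximations of known order to cancel the leading error term; a minimal generic sketch:

```python
def richardson(f_h, f_h2, p):
    """Given approximations f_h (step h) and f_h2 (step h/2) of a p-th order
    method, cancel the leading O(h^p) error term."""
    return f_h2 + (f_h2 - f_h) / (2 ** p - 1)
```

For example, applying it to a first-order forward difference of d(x^2)/dx at x = 1 with h = 0.1 and h = 0.05 recovers the exact derivative 2 to machine precision, whereas either raw difference is off by O(h).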
High voltage and high specific capacity dual intercalating electrode Li-ion batteries
NASA Technical Reports Server (NTRS)
West, William C. (Inventor); Blanco, Mario (Inventor)
2010-01-01
The present invention provides high capacity and high voltage Li-ion batteries that have a carbonaceous cathode and a nonaqueous electrolyte solution comprising LiF salt and an anion receptor that binds the fluoride ion. The batteries can comprise dual intercalating electrode Li ion batteries. Methods of the present invention use a cathode and electrode pair, wherein each of the electrodes reversibly intercalate ions provided by a LiF salt to make a high voltage and high specific capacity dual intercalating electrode Li-ion battery. The present methods and systems provide high-capacity batteries particularly useful in powering devices where minimizing battery mass is important.
Dhou, S; Williams, C; Lewis, J
2016-06-15
Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors; 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space; and 3) the Euclidean Model Norm (EMN), which is calculated by summing in quadrature the dot products of an eigenvector with the first three eigenvectors from the reference motion model. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm had smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported
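One possible reading of the EMN criterion is sketched below, assuming all eigenvectors are unit-normalised; the function name and normalisation assumption are illustrative, not taken from the abstract:

```python
import math

def emn(v, reference_eigenvectors):
    """Euclidean Model Norm sketch: the in-quadrature sum of v's dot products
    with the first three reference eigenvectors. Assuming unit vectors, a
    value of 1.0 means v lies entirely in their span (fully reconstructible),
    while 0.0 means v is orthogonal to the reference model."""
    return math.sqrt(sum(sum(a * b for a, b in zip(v, e)) ** 2
                         for e in reference_eigenvectors[:3]))
```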
Arnold, David T; Rowen, Donna; Versteegh, Matthijs M; Morley, Anna; Hooper, Clare E; Maskell, Nicholas A
2015-01-23
In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be used to estimate EQ-5D via existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy for patients in poor health states. The aim of this study was to test all existing mapping algorithms of the QLQ-C30 onto the EQ-5D in a dataset of patients with malignant pleural mesothelioma, an invariably fatal malignancy for which no previous mapping estimation has been published. Health-related quality of life (HRQoL) data in which both the EQ-5D and QLQ-C30 were used simultaneously were obtained from the UK-based prospective observational SWAMP (South West Area Mesothelioma and Pemetrexed) trial. In the original trial, 73 patients with pleural mesothelioma were offered palliative chemotherapy and their HRQoL was assessed across five time points. These data were used to test the nine available mapping algorithms found in the literature, comparing predicted against observed EQ-5D values. The ability of the algorithms to predict the mean, minimise error and detect clinically significant differences was assessed. The dataset had a total of 250 observations across the five time points. The linear regression mapping algorithms tested generally performed poorly, over-estimating the predicted compared to observed EQ-5D values, especially when observed EQ-5D was below 0.5. The best performing algorithm used a response mapping method and predicted the mean EQ-5D accurately, with an average root mean squared error of 0.17 (standard deviation: 0.22). This algorithm reliably discriminated between clinically distinct subgroups seen in the primary dataset. This study tested mapping algorithms in a population with poor health states, where they have previously been shown to perform poorly. Further research into EQ-5D estimation should be directed at response mapping methods, given their superior performance in this study.
A Grid Algorithm for High Throughput Fitting of Dose-Response Curve Data
Wang, Yuhong; Jadhav, Ajit; Southal, Noel; Huang, Ruili; Nguyen, Dac-Trung
2010-01-01
We describe a novel algorithm, Grid algorithm, and the corresponding computer program for high throughput fitting of dose-response curves that are described by the four-parameter symmetric logistic dose-response model. The Grid algorithm searches through all points in a grid of four dimensions (parameters) and finds the optimum one that corresponds to the best fit. Using simulated dose-response curves, we examined the Grid program's performance in reproducing the actual values that were used to generate the simulated data and compared it with the DRC package for the language and environment R and the XLfit add-in for Microsoft Excel. The Grid program was robust and consistently recovered the actual values for both complete and partial curves with or without noise. Both DRC and XLfit performed well on data without noise, but they were sensitive to and their performance degraded rapidly with increasing noise. The Grid program is automated and scalable to millions of dose-response curves, and it is able to process 100,000 dose-response curves from a high throughput screening experiment per CPU hour. The Grid program has the potential of greatly increasing the productivity of large-scale dose-response data analysis and early drug discovery processes, and it is also applicable to many other curve fitting problems in chemical, biological, and medical sciences. PMID:21331310
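The exhaustive search can be sketched directly. This is a minimal software sketch of grid fitting for the four-parameter logistic model; the grid ranges and step choices are illustrative, not the program's actual grids:

```python
import itertools
import math

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter symmetric logistic dose-response model (x > 0)."""
    return bottom + (top - bottom) / (1 + (ec50 / x) ** hill)

def grid_fit(xs, ys, grids):
    """Exhaustive grid search: evaluate every combination of the four
    parameter grids and keep the one minimising the sum of squared residuals."""
    best, best_err = None, math.inf
    for params in itertools.product(*grids):
        err = sum((logistic4(x, *params) - y) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best, best_err = params, err
    return best
```

Because every grid point is evaluated, the fit cannot be trapped in a local minimum the way iterative optimisers can, which is consistent with the robustness to noise reported above.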
A high precision position sensor design and its signal processing algorithm for a maglev train.
Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen
2012-01-01
High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. First, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performance and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is figured out and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by experiments on a test train during a long-term test run.
Using evolutionary algorithms for fitting high-dimensional models to neuronal data.
Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley
2012-04-01
In the study of neuroscience, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior to the gradient following methods in this particular application. This is likely to be the case in many further complex systems, as are often found in neuroscience.
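A minimal evolutionary search of the kind compared here can be sketched as a (mu+lambda) strategy with Gaussian mutation. The population size, mutation scale, bounds and loss function below are illustrative, not the study's settings:

```python
import random

def evolve(loss, dim, pop_size=40, gens=200, sigma=0.3, seed=0):
    """Minimal (mu+lambda) evolutionary search: mutate every individual,
    pool parents and children, keep the fittest, repeat. Keeping the parents
    makes the search elitist, so the best solution never gets worse, and the
    random initialisation makes it insensitive to starting parameters."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        children = [[g + rng.gauss(0, sigma) for g in ind] for ind in pop]
        pop = sorted(pop + children, key=loss)[:pop_size]
    return pop[0]
```

Unlike a gradient follower, this search never computes derivatives, which is why it tolerates rugged error surfaces at the cost of many more model evaluations.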
High-Precision Orbital-Mechanics Computation Using the Parker-Sochacki Algorithm
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
1997-04-01
This paper presents the first application of the Parker-Sochacki algorithm (G. Edgar Parker and James S. Sochacki, Neural, Parallel and Scientific Computations 4 (1997) 97-112) to the orbital mechanics of the solar system. The Parker-Sochacki algorithm is a method of solving simultaneous differential equations and is an extension of Picard iteration. It permits the generation of the coefficients of the Maclaurin series quickly and to arbitrarily high order, limited only by the numerical precision of the programming language. Each coefficient is calculated only once, with no later corrections being made. Taylor series coefficients can also be calculated by this method for the position and coordinates of the center of mass and for the energy and angular momentum components. These coefficients show conservation of energy to about one part in 10^18 for a time step of 30 days.
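The core idea, generating each Maclaurin coefficient exactly once from the coefficients already known, can be sketched on a toy scalar equation rather than on the full solar-system equations: for y' = y^2, y(0) = 1, the exact solution is 1/(1 - t), whose Maclaurin coefficients are all 1, and the Picard/Parker-Sochacki recurrence uses a Cauchy product of the known coefficients:

```python
def maclaurin_coeffs(order):
    """Generate Maclaurin coefficients for y' = y^2, y(0) = 1.
    From y' = y*y, the Cauchy product gives the recurrence
    (n+1) * c_{n+1} = sum_{k=0}^{n} c_k * c_{n-k}; each coefficient
    is computed once, with no later corrections."""
    c = [1.0]                       # c_0 = y(0)
    for n in range(order):
        cauchy = sum(c[k] * c[n - k] for k in range(n + 1))
        c.append(cauchy / (n + 1))
    return c
```

Summing the series at any |t| < 1 then reproduces 1/(1 - t) to the accuracy allowed by the truncation order.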
A high-storage capacity content-addressable memory and its learning algorithm
Verleysen, M.; Sirletti, B.; Vandemeulebroecke, A.; Jespers, P.G.A.
1989-05-01
Hopfield's neural networks show retrieval and speed capabilities that make them good candidates for content-addressable memories (CAMs) in problems such as pattern recognition and optimization. This paper presents a new implementation of a VLSI fully interconnected neural network with only two binary memory points per synapse (the connection weights are restricted to three values: +1, 0, and -1). The small area of single synaptic cells (about 10^4 µm^2) allows the implementation of neural networks with more than 500 neurons. Because of the poor storage capability of Hebb's learning rule, especially in VLSI neural networks where the range of the synapse weights is limited by the number of memory points contained in each connection, a new algorithm is proposed for programming a Hopfield neural network as a high-storage-capacity CAM. The results of the VLSI circuit programmed with this new algorithm are promising for pattern recognition applications.
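A software sketch of the hardware constraint, assuming plain Hebbian outer-product learning clipped to the three representable weight values; the paper's actual high-capacity programming algorithm is not reproduced here:

```python
def train_ternary(patterns):
    """Hebbian outer-product learning with each weight clipped to the
    three values {-1, 0, +1}, mimicking two memory bits per synapse."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] = max(-1, min(1, w[i][j] + p[i] * p[j]))
    return w

def recall(w, state, sweeps=5):
    """Synchronous sign updates of the +/-1 state vector."""
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s
```

Content-addressable retrieval then means that a corrupted pattern presented as the initial state settles onto the nearest stored pattern.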
Using MaxCompiler for the high level synthesis of trigger algorithms
NASA Astrophysics Data System (ADS)
Summers, S.; Rose, A.; Sanders, P.
2017-02-01
Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java-based tool for developing FPGA applications that uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman filter track-fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage and has a latency of 187.5 ns per iteration.
Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric
2013-06-01
Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph--a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.
Indirect defense in a highly specific ant-plant mutualism.
Grangier, Julien; Dejean, Alain; Malé, Pierre-Jean G; Orivel, Jérôme
2008-10-01
Although associations between myrmecophytes and their plant ants are recognized as a particularly effective form of protective mutualism, their functioning remains incompletely understood. This field study examined the ant-plant Hirtella physophora and its obligate ant associate Allomerus decemarticulatus. We formulated two hypotheses on the highly specific nature of this association: (1) Ant presence should be correlated with a marked reduction in the amount of herbivory on the plant foliage; (2) ant activity should be consistent with the "optimal defense" theory predicting that the most vulnerable and valuable parts of the plant are the best defended. We validated the first hypothesis by demonstrating that for ant-excluded plants, expanding leaves, but also newly matured ones in the long term, suffered significantly more herbivore damage than ant-inhabited plants. We showed that A. decemarticulatus workers represent both constitutive and inducible defenses for their host, by patrolling its foliage and rapidly recruiting nestmates to foliar wounds. On examining how these activities change according to the leaves' developmental stage, we found that the number of patrolling ants dramatically decreased as the leaves matured, while leaf wounds induced ant recruitment regardless of the leaf's age. The resulting level of these indirect defenses was roughly proportional to leaf vulnerability and value during its development, thus validating our second hypothesis predicting optimal protection. This led us to discuss the factors influencing ant activity on the plant's surface. Our study emphasizes the importance of studying both the constitutive and inducible components of indirect defense when evaluating its efficacy and optimality.
Bottomside Ionospheric Electron Density Specification using Passive High Frequency Signals
NASA Astrophysics Data System (ADS)
Kaeppler, S. R.; Cosgrove, R. B.; Mackay, C.; Varney, R. H.; Kendall, E. A.; Nicolls, M. J.
2016-12-01
The vertical bottomside electron density profile is influenced by a variety of natural sources, most especially traveling ionospheric disturbances (TIDs). These disturbances cause plasma to be moved up or down along the local geomagnetic field and can strongly impact the propagation of high frequency radio waves. While the basic physics of these perturbations has been well studied, practical bottomside models are not well developed. We present initial results from an assimilative bottomside ionosphere model. This model uses empirical orthogonal functions based on the International Reference Ionosphere (IRI) to develop a vertical electron density profile, and features a built-in HF ray-tracing function. This parameterized model is then perturbed to model electron density perturbations associated with TIDs or ionospheric gradients. Using the ray-tracing feature, the model assimilates angle-of-arrival measurements from passive HF transmitters. We demonstrate the effectiveness of the model using angle-of-arrival data. Modeling results of bottomside electron density specification are compared against suitable ancillary observations to quantify the accuracy of our model.
Specification and analysis of a high speed transport protocol
NASA Astrophysics Data System (ADS)
Tipici, Huseyin A.
1993-06-01
While networks have been getting faster, perceived throughput at the application has not always increased accordingly, and the bottleneck has moved to the communications-processing part of the system. This thesis discusses the issues that cause performance bottlenecks in current transport protocols and presents a further study of a high speed transport protocol that tries to overcome these difficulties with some unique features. Using the Systems of Communicating Machines (SCM) model as a framework, a refined and improved version of the formal protocol specification is built on the previous work, and it is analyzed to verify that the protocol is free from logical errors such as deadlock, unspecified reception, unexecuted transitions, and blocking loops. The analysis is conducted in two phases, consisting of the application of the associated system state analysis and the simulation of the protocol using the programming language Ada. The thesis also presents the difficulties encountered during the course of the analysis and suggests possible solutions to some of the problems.
High Performance Organ-Specific Nuclear Medicine Imagers.
NASA Astrophysics Data System (ADS)
Majewski, Stan
2006-04-01
One of the exciting applications of nuclear science is nuclear medicine. Well-known diagnostic imaging tools such as PET and SPECT (as well as MRI) were developed as spin-offs of basic scientific research in atomic and nuclear physics. Development of modern instrumentation for applications in particle physics experiments offers an opportunity to contribute to development of improved nuclear medicine (gamma and positron) imagers, complementing the present set of standard imaging tools (PET, SPECT, MRI, ultrasound, fMRI, MEG, etc). Several examples of new high performance imagers developed in national laboratories in collaboration with academia will be given to demonstrate this spin-off activity. These imagers are designed to specifically image organs such as breast, heart, head (brain), or prostate. The remaining and potentially most important challenging application field for dedicated nuclear medicine imagers is to assist with cancer radiation treatments. Better control of radiation dose delivery requires development of new compact in-situ imagers becoming integral parts of the radiation delivery systems using either external beams or based on radiation delivery by inserting or injecting radioactive sources (gamma, beta or alpha emitters) into tumors.
Mechanism of substrate selection by a highly specific CRISPR endoribonuclease.
Sternberg, Samuel H; Haurwitz, Rachel E; Doudna, Jennifer A
2012-04-01
Bacteria and archaea possess adaptive immune systems that rely on small RNAs for defense against invasive genetic elements. CRISPR (clustered regularly interspaced short palindromic repeats) genomic loci are transcribed as long precursor RNAs, which must be enzymatically cleaved to generate mature CRISPR-derived RNAs (crRNAs) that serve as guides for foreign nucleic acid targeting and degradation. This processing occurs within the repetitive sequence and is catalyzed by a dedicated Cas6 family member in many CRISPR systems. In Pseudomonas aeruginosa, crRNA biogenesis requires the endoribonuclease Csy4 (Cas6f), which binds and cleaves at the 3' side of a stable RNA stem-loop structure encoded by the CRISPR repeat. We show here that Csy4 recognizes its RNA substrate with an ~50 pM equilibrium dissociation constant, making it one of the highest-affinity protein:RNA interactions of this size reported to date. Tight binding is mediated exclusively by interactions upstream of the scissile phosphate that allow Csy4 to remain bound to its product and thereby sequester the crRNA for downstream targeting. Substrate specificity is achieved by RNA major groove contacts that are highly sensitive to helical geometry, as well as a strict preference for guanosine adjacent to the scissile phosphate in the active site. Collectively, our data highlight diverse modes of substrate recognition employed by Csy4 to enable accurate selection of CRISPR transcripts while avoiding spurious, off-target RNA binding and cleavage.
High-dynamic range tone-mapping algorithm for focal plane processors
NASA Astrophysics Data System (ADS)
Vargas-Sierra, S.; Liñán-Cembrano, G.; Roca, E.; Rodríguez-Vazquez, A.
2011-05-01
This paper presents a dynamic range improvement technique that is especially well suited to implementation in focal plane processors (FPPs) because of its very limited computing requirements: only local memories, a small amount of digital control, and a comparator are required at the pixel level. The presented algorithm employs measurements taken during exposure time to create a 4-bit non-linear image whose histogram determines the shape of the tone-mapping curve which is applied to create the final image. Simulation results on a highly bimodal 120 dB image are presented, showing that both the highly and poorly illuminated parts of the image keep a sufficient level of detail.
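The histogram-driven tone-mapping step can be sketched as a cumulative-histogram (equalization-style) curve over the 16 bins of the 4-bit capture; the chip's exact curve-shaping rules are an assumption here:

```python
def tone_curve(hist4bit, out_levels=256):
    """Build a tone-mapping curve from the histogram of a 4-bit
    (16-bin) capture: cumulative-histogram shaping assigns more of
    the output range to densely populated brightness ranges, which
    is what preserves detail in both modes of a bimodal scene."""
    total = sum(hist4bit)
    curve, cum = [], 0
    for count in hist4bit:
        cum += count
        curve.append(round((out_levels - 1) * cum / total))
    return curve  # curve[b] = output level for 4-bit bin b
```

Empty bins add nothing to the cumulative sum, so the curve is flat there and steep where pixels actually occur.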
A High-Order Statistical Tensor Based Algorithm for Anomaly Detection in Hyperspectral Imagery
Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao
2014-01-01
Recently, high-order statistics have received increasing interest in the field of hyperspectral anomaly detection. However, most of the existing high-order statistics based anomaly detection methods require stepwise iterations since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, called COSD (coskewness detector). COSD does not need iteration and produces a single detection map. The experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm. PMID:25366706
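A hedged sketch of a coskewness-style detector: the third-order moment tensor of band-standardized pixels is formed once, and each pixel is scored by projecting it onto that tensor, yielding a single detection map without iteration. This illustrates the concept only and is not the exact COSD formulation:

```python
def coskewness_scores(pixels):
    """Score each multiband pixel by projecting it onto the
    coskewness tensor S[i][j][k] = E[z_i z_j z_k] of the
    band-standardized data z; rare, skew-inducing pixels dominate
    the third-order moments and receive large scores."""
    n, bands = len(pixels), len(pixels[0])
    means = [sum(p[b] for p in pixels) / n for b in range(bands)]
    stds = [max((sum((p[b] - means[b]) ** 2 for p in pixels) / n) ** 0.5,
                1e-12) for b in range(bands)]
    z = [[(p[b] - means[b]) / stds[b] for b in range(bands)]
         for p in pixels]
    S = [[[sum(x[i] * x[j] * x[k] for x in z) / n
           for k in range(bands)] for j in range(bands)]
         for i in range(bands)]
    return [abs(sum(S[i][j][k] * x[i] * x[j] * x[k]
                    for i in range(bands)
                    for j in range(bands)
                    for k in range(bands)))
            for x in z]
```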
Enhanced high dynamic range 3D shape measurement based on generalized phase-shifting algorithm
NASA Astrophysics Data System (ADS)
Wang, Minmin; Du, Guangliang; Zhou, Canlin; Zhang, Chaorui; Si, Shuchun; Li, Hui; Lei, Zhenkun; Li, YanJie
2017-02-01
Measuring objects with large reflectivity variations across their surface is one of the open challenges in phase measurement profilometry (PMP). Saturated or dark pixels in the deformed fringe patterns captured by the camera will lead to phase fluctuations and errors. Jiang et al. proposed a high dynamic range real-time three-dimensional (3D) shape measurement method (Jiang et al., 2016) [17] that does not require changing camera exposures. Three inverted phase-shifted fringe patterns are used to complement three regular phase-shifted fringe patterns for phase retrieval whenever any of the regular fringe patterns are saturated. Nonetheless, Jiang's method has some drawbacks: (1) the phases of saturated pixels are estimated by different formulas on a case by case basis; in other words, the method lacks a universal formula; (2) it cannot be extended to the four-step phase-shifting algorithm, because inverted fringe patterns are the repetition of regular fringe patterns; (3) for every pixel in the fringe patterns, only three unsaturated intensity values can be chosen for phase demodulation, leaving the other unsaturated ones idle. We propose a method to enhance high dynamic range 3D shape measurement based on a generalized phase-shifting algorithm, which combines the complementary techniques of inverted and regular fringe patterns with a generalized phase-shifting algorithm. Firstly, two sets of complementary phase-shifted fringe patterns, namely the regular and the inverted fringe patterns, are projected and collected. Then, all unsaturated intensity values at the same camera pixel from two sets of fringe patterns are selected and employed to retrieve the phase using a generalized phase-shifting algorithm. Finally, simulations and experiments are conducted to prove the validity of the proposed method. The results are analyzed and compared with those of Jiang's method, demonstrating that our method not only expands the scope of Jiang's method, but also improves
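The generalized phase-shifting idea of using every unsaturated sample at a pixel reduces to linear least squares: each intensity I_n = A + B cos(phi + delta_n) is linear in the unknowns (A, B cos phi, B sin phi). A minimal sketch under that standard fringe model (the paper's exact formulation may differ):

```python
import math

def solve3(M, y):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    A = [row[:] + [yi] for row, yi in zip(M, y)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(3):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[r][3] / A[r][r] for r in range(3)]

def retrieve_phase(intensities, shifts, saturated):
    """Least-squares phase retrieval from any >= 3 unsaturated samples
    of I_n = A + B*cos(phi + delta_n): expand the cosine so I_n is
    linear in (A, u = B*cos(phi), v = B*sin(phi)), then solve the
    normal equations and take phi = atan2(v, u)."""
    rows, ys = [], []
    for I, d, sat in zip(intensities, shifts, saturated):
        if not sat:
            rows.append([1.0, math.cos(d), -math.sin(d)])
            ys.append(I)
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
         for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    _, u, v = solve3(M, b)
    return math.atan2(v, u)
```

Because any subset of three or more unsaturated samples (with distinct shifts) determines the three unknowns, no per-case formulas are needed and no unsaturated sample is left idle.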
High-resolution algorithms for the Navier-Stokes equations for generalized discretizations
NASA Astrophysics Data System (ADS)
Mitchell, Curtis Randall
Accurate finite volume solution algorithms for the two-dimensional Navier-Stokes equations and the three-dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two-dimensional quadrilateral and triangular elements and three-dimensional tetrahedral elements will be provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell-averaged data is used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind, and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringleb's flow, an inviscid shock reflection, a flat plate boundary layer, and a shock-induced separation over a flat plate. Three-dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three-dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error
New figure-fracturing algorithm for high-quality variable-shaped e-beam exposure data generation
NASA Astrophysics Data System (ADS)
Nakao, Hiroomi; Moriizumi, Koichi; Kamiyama, Kinya; Terai, Masayuki; Miwa, Hisaharu
1996-07-01
We present a new figure-fracturing algorithm that partitions each polygon in layout design data into trapezoids for variable-shaped EB exposure data generation. In order to improve the dimensional accuracy of fabricated mask patterns created using the figure-fracturing result, our algorithm has two new effective functions, one for suppressing narrow-figure generation and the other for suppressing the partitioning of critical parts. Furthermore, using a new graph-based approach, our algorithm efficiently chooses, from all the possible partitioning lines, an appropriate set of lines by which optimal figure fracturing is performed. The application results show that the algorithm produces high-quality results in a reasonable processing time.
Tedgren, Åsa Carlsson; Carlsson, Gudrun Alm
2013-04-21
Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hereto been reported as dose to water and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from (125)I, (169)Yb and (192)Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw, med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations
NASA Astrophysics Data System (ADS)
Jayaram, V.; Crain, K.; Keller, G. R.
2011-12-01
We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core and multi-core CPU and graphics processing unit (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. The large high-resolution data grids in our studies employ a pre-filtered mipmap-pyramid representation of the grid data known as the geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on the fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies, including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just a single core. Also, unlike traditional gravity computational algorithms, the line element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is that the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply the appropriate formulation to those data points. We will present and compare the computational performance of the traditional prism method versus the line element
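The line-element contribution of one cell to the vertical gravity at a station has a closed form; a sketch assuming each cell's mass is concentrated on a vertical line with mass per unit length lambda = density x cell area (a simplification of the paper's full scheme, and valid only when the station is not directly on the line element):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_line_element(station, cell, density, top, bottom, area):
    """Vertical gravity at `station` (x, y, z with z up) from one grid
    cell approximated as a vertical line element spanning elevations
    `top` to `bottom`.  Integrating G*lam*z/r^3 along the line gives
    the closed form G*lam*(1/r_top - 1/r_bottom)."""
    sx, sy, sz = station
    cx, cy = cell
    s2 = (cx - sx) ** 2 + (cy - sy) ** 2   # horizontal offset squared
    lam = density * area                   # mass per unit length
    z1, z2 = sz - top, sz - bottom         # depths below the station
    return G * lam * (1.0 / (s2 + z1 * z1) ** 0.5
                      - 1.0 / (s2 + z2 * z2) ** 0.5)
```

Summing this closed form over all cells avoids the more expensive prism integrals, which is the source of the speed advantage described above.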
Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken
2014-03-01
We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
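The chaos-game mechanism can be sketched as follows, assuming (hypothetically) one threshold per variable and consuming variables two at a time to pick a corner of the unit square; the repeated move-halfway-toward-a-corner update is what gives the memory-type behavior described above, with later variables dominating the final position:

```python
def chaos_game_point(values, threshold=0.0):
    """Map a subject's variable sequence to a 2-D point: binarize each
    variable against a threshold, take bits two at a time to choose a
    corner of the unit square, and repeatedly move halfway from the
    current position toward that corner (an iterated function system)."""
    corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
    x, y = 0.5, 0.5
    bits = [1 if v > threshold else 0 for v in values]
    if len(bits) % 2:
        bits.append(0)                     # pad odd-length sequences
    for b1, b2 in zip(bits[::2], bits[1::2]):
        cx, cy = corners[2 * b1 + b2]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
    return x, y
```

Subjects with similar variable profiles follow similar corner sequences and therefore land near each other, which is what makes subclusters visible in the 2-D map.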
Liu, Xiaozheng; Yuan, Zhenming; Guo, Zhongwei; Xu, Dongrong
2015-05-01
Diffusion tensor imaging is widely used for studying neural fiber trajectories in white matter and for quantifying changes in tissue using diffusion properties at each voxel in the brain. To better model the nature of crossing fibers within complex architectures, rather than using a simplified tensor model that assumes only a single fiber direction at each image voxel, a model mixing multiple diffusion tensors is used to profile diffusion signals from high angular resolution diffusion imaging (HARDI) data. Based on the HARDI signal and a multiple tensors model, spherical deconvolution methods have been developed to overcome the limitations of the diffusion tensor model when resolving crossing fibers. The Richardson-Lucy algorithm is a popular spherical deconvolution method used in previous work. However, it is based on a Gaussian distribution, while HARDI data are always very noisy, and the distribution of HARDI data follows a Rician distribution. The current work aims to present a novel solution to address these issues. By simultaneously considering both the Rician bias and neighbor correlation in HARDI data, the authors propose a localized Richardson-Lucy (LRL) algorithm to estimate fiber orientations for HARDI data. The proposed method can simultaneously reduce noise and correct the Rician bias. Mean angular error (MAE) between the estimated fiber orientation distribution (FOD) field and the reference FOD field was computed to examine whether the proposed LRL algorithm offered any advantage over the conventional RL algorithm at various levels of noise. Normalized mean squared error (NMSE) was also computed to measure the similarity between the true FOD field and the estimated FOD field. For MAE comparisons, the proposed LRL approach obtained the best results in most of the cases at different levels of SNR and b-values. For NMSE comparisons, the proposed LRL approach obtained the best results in most of the cases at b-value = 3000 s/mm^2, which is the
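For reference, the conventional Richardson-Lucy update that the LRL method builds on can be sketched in 1-D with circular boundaries; the proposed LRL additionally handles Rician bias and neighbor correlation, which this sketch omits:

```python
def conv_circ(a, b):
    """Circular (periodic) convolution of two equal-length sequences."""
    n = len(a)
    return [sum(a[(i - j) % n] * b[j] for j in range(n)) for i in range(n)]

def richardson_lucy(data, psf, iters=50):
    """Classic multiplicative Richardson-Lucy iteration,
    u <- u * correlate(data / conv(u, psf), psf): estimates stay
    non-negative and, for a normalized PSF, total flux is conserved."""
    n = len(data)
    u = [max(sum(data) / n, 1e-12)] * n        # flat initial estimate
    psf_m = [psf[-j % n] for j in range(n)]    # mirrored PSF
    for _ in range(iters):
        blur = conv_circ(u, psf)
        ratio = [d / max(b, 1e-12) for d, b in zip(data, blur)]
        corr = conv_circ(ratio, psf_m)
        u = [ui * ci for ui, ci in zip(u, corr)]
    return u
```

Applied to a blurred spike, the iteration progressively re-concentrates the signal at the true location, which is the sharpening behavior exploited in spherical deconvolution of FODs.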
Hwang, Hee Sang; Park, Chan-Sik; Yoon, Dok Hyun; Suh, Cheolwon; Huh, Jooryung
2014-08-01
Diffuse large B-cell lymphoma (DLBCL) is classified into prognostically distinct germinal center B-cell (GCB) and activated B-cell subtypes by gene expression profiling (GEP). Recent reports suggest the role of GEP subtypes in targeted therapy. Immunohistochemistry (IHC) algorithms have been proposed as surrogates of GEP, but their utility remains controversial. Using microarray, we examined the concordance of 4 GEP-correlated and 2 non-GEP-correlated IHC algorithms in 381 DLBCLs, not otherwise specified. Subtypes and variants of DLBCL were excluded to minimize the possible confounding effect on prognosis and phenotype. Survival was analyzed in 138 cyclophosphamide, adriamycin, vincristine, and prednisone (CHOP)-treated and 147 rituximab plus CHOP (R-CHOP)-treated patients. Of the GEP-correlated algorithms, high concordance was observed among Hans, Choi, and Visco-Young algorithms (total concordance, 87.1%; κ score: 0.726 to 0.889), whereas Tally algorithm exhibited slightly lower concordance (total concordance 77.4%; κ score: 0.502 to 0.643). Two non-GEP-correlated algorithms (Muris and Nyman) exhibited poor concordance. Compared with the Western data, incidence of the non-GCB subtype was higher in all algorithms. Univariate analysis showed prognostic significance for Hans, Choi, and Visco-Young algorithms and BCL6, GCET1, LMO2, and BCL2 in CHOP-treated patients. On multivariate analysis, Hans algorithm retained its prognostic significance. By contrast, neither the algorithms nor individual antigens predicted survival in R-CHOP treatment. The high concordance among GEP-correlated algorithms suggests their usefulness as reliable discriminators of molecular subtype in DLBCL, not otherwise specified. Our study also indicates that prognostic significance of IHC algorithms may be limited in R-CHOP-treated Asian patients because of the predominance of the non-GCB type.
Trajectory Specification for High-Capacity Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.
2004-01-01
In the current air traffic management system, the fundamental limitation on airspace capacity is the cognitive ability of human air traffic controllers to maintain safe separation with high reliability. The doubling or tripling of airspace capacity that will be needed over the next couple of decades will require that tactical separation be at least partially automated. Standardized conflict-free four-dimensional trajectory assignment will be needed to accomplish that objective. A trajectory specification format based on the Extensible Markup Language is proposed for that purpose. This format can be used to downlink a trajectory request, which can then be checked on the ground for conflicts and approved or modified, if necessary, then uplinked as the assigned trajectory. The horizontal path is specified as a series of geodetic waypoints connected by great circles, and the great-circle segments are connected by turns of specified radius. Vertical profiles for climb and descent are specified as low-order polynomial functions of along-track position, which is itself specified as a function of time. Flight technical error tolerances in the along-track, cross-track, and vertical axes define a bounding space around the reference trajectory, and conformance will guarantee the required separation for a period of time known as the conflict time horizon. An important safety benefit of this regimen is that the traffic will be able to fly free of conflicts for at least several minutes even if all ground systems and the entire communication infrastructure fail. Periodic updates in the along-track axis will adjust for errors in the predicted along-track winds.
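An XML-based trajectory request of the kind described above might be assembled as sketched below; all element and attribute names here are hypothetical illustrations, as the paper's actual schema is not reproduced in this abstract:

```python
import xml.etree.ElementTree as ET

def build_trajectory(waypoints, turn_radius_nmi, tolerances):
    """Assemble an illustrative XML trajectory request: a horizontal
    path as geodetic waypoints joined by great circles with a fixed
    turn radius, plus the along-track / cross-track / vertical
    tolerances that bound the conformance space around the reference
    trajectory.  Element names are hypothetical."""
    root = ET.Element("trajectory")
    path = ET.SubElement(root, "horizontalPath",
                         turnRadiusNmi=str(turn_radius_nmi))
    for lat, lon in waypoints:
        ET.SubElement(path, "waypoint", lat=str(lat), lon=str(lon))
    tol = ET.SubElement(root, "tolerances")
    for axis, value in tolerances.items():
        ET.SubElement(tol, "tolerance", axis=axis, value=str(value))
    return ET.tostring(root, encoding="unicode")
```

In the downlink/uplink cycle described above, such a document would be generated by the aircraft, conflict-checked on the ground, and returned (possibly modified) as the assigned trajectory.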
Indirect defense in a highly specific ant-plant mutualism
NASA Astrophysics Data System (ADS)
Grangier, Julien; Dejean, Alain; Malé, Pierre-Jean G.; Orivel, Jérôme
2008-10-01
Although associations between myrmecophytes and their plant ants are recognized as a particularly effective form of protective mutualism, their functioning remains incompletely understood. This field study examined the ant-plant Hirtella physophora and its obligate ant associate Allomerus decemarticulatus. We formulated two hypotheses on the highly specific nature of this association: (1) Ant presence should be correlated with a marked reduction in the amount of herbivory on the plant foliage; (2) ant activity should be consistent with the "optimal defense" theory predicting that the most vulnerable and valuable parts of the plant are the best defended. We validated the first hypothesis by demonstrating that for ant-excluded plants, expanding leaves, but also newly matured ones in the long term, suffered significantly more herbivore damage than ant-inhabited plants. We showed that A. decemarticulatus workers represent both constitutive and inducible defenses for their host, by patrolling its foliage and rapidly recruiting nestmates to foliar wounds. On examining how these activities change according to the leaves’ developmental stage, we found that the number of patrolling ants dramatically decreased as the leaves matured, while leaf wounds induced ant recruitment regardless of the leaf’s age. The resulting level of these indirect defenses was roughly proportional to leaf vulnerability and value during its development, thus validating our second hypothesis predicting optimal protection. This led us to discuss the factors influencing ant activity on the plant’s surface. Our study emphasizes the importance of studying both the constitutive and inducible components of indirect defense when evaluating its efficacy and optimality.
Improving the specificity of high-throughput ortholog prediction
Fulton, Debra L; Li, Yvonne Y; Laird, Matthew R; Horsman, Benjamin GS; Roche, Fiona M; Brinkman, Fiona SL
2006-01-01
Ortholuge software, which may be used to characterize other species' datasets, is available under the GNU General Public License. Conclusion: The Ortholuge method reported here appears to significantly improve the specificity (precision) of high-throughput ortholog prediction for both bacterial and eukaryotic species. This method, and its associated software, will aid those performing various comparative genomics-based analyses, such as the prediction of conserved regulatory elements upstream of orthologous genes. PMID:16729895
NASA Astrophysics Data System (ADS)
Liu, Yongzhi; Geng, Tie; Turng, Lih-Sheng (Tom); Liu, Chuntai; Cao, Wei; Shen, Changyu
2017-09-01
In the multiscale numerical simulation of polymer crystallization during processing, the flow and temperature of the polymer melt are simulated at the macroscale, while the nucleation and growth of spherulites are simulated at the mesoscale. As part of the multiscale simulation, the meso-simulation requires a fast solving speed because the meso-simulation software must be run several times in every macro-element at each macro-step. Meanwhile, the accuracy of the calculated results is also very important. The simulation geometry of crystallization can be either planar (2D) or three-dimensional (3D). The 3D calculations are more accurate but more expensive because of the long CPU time they consume; conversely, 2D calculations are much faster but less accurate. To reach a desirable speed and high accuracy at the same time, an algorithm is presented in which the Delesse law, coupled with the Monte Carlo method and the pixel method, is employed to simulate the nucleation, growth, and impingement of polymer spherulites at the mesoscale. Based on this algorithm, software was developed in Visual C++, and numerical examples show that the solving speed of this algorithm is as fast as the classical 2D simulation while the calculation accuracy is at the same level as the 3D simulation.
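The mesoscale step can be caricatured in a few lines: random nucleation sites and times (Monte Carlo), isotropic growth, and first-come pixel capture to handle impingement. The grid size, rates, and the omission of the Delesse 2D-to-3D conversion are simplifications for illustration:

```python
import random, math

def simulate_spherulites(n=64, nuclei=8, growth_rate=1.0, steps=60, seed=0):
    """Toy 2-D pixel-method simulation of spherulite nucleation and growth.
    Parameters are illustrative; the paper's algorithm additionally uses the
    Delesse law to relate 2-D sections to 3-D crystallinity."""
    rng = random.Random(seed)
    # Each nucleus: (x, y, nucleation time), drawn at random (Monte Carlo).
    sites = [(rng.uniform(0, n), rng.uniform(0, n), rng.uniform(0, steps // 2))
             for _ in range(nuclei)]
    owner = [[-1] * n for _ in range(n)]   # impingement: first-come pixel capture
    history = []
    for t in range(steps):
        for i in range(n):
            for j in range(n):
                if owner[i][j] >= 0:
                    continue
                for k, (x, y, t0) in enumerate(sites):
                    if t >= t0 and math.hypot(i - x, j - y) <= growth_rate * (t - t0):
                        owner[i][j] = k    # pixel captured by spherulite k
                        break
        frac = sum(v >= 0 for row in owner for v in row) / (n * n)
        history.append(frac)               # relative crystallinity over time
    return history

history = simulate_spherulites()
```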
Explicit high-order noncanonical symplectic algorithms for ideal two-fluid systems
NASA Astrophysics Data System (ADS)
Xiao, Jianyuan; Qin, Hong; Morrison, Philip J.; Liu, Jian; Yu, Zhi; Zhang, Ruili; He, Yang
2016-11-01
An explicit high-order noncanonical symplectic algorithm for ideal two-fluid systems is developed. The fluid is discretized as particles in the Lagrangian description, while the electromagnetic fields and internal energy are treated as discrete differential form fields on a fixed mesh. With the assistance of Whitney interpolating forms [H. Whitney, Geometric Integration Theory (Princeton University Press, 1957); M. Desbrun et al., Discrete Differential Geometry (Springer, 2008); J. Xiao et al., Phys. Plasmas 22, 112504 (2015)], this scheme preserves the gauge symmetry of the electromagnetic field, and the pressure field is naturally derived from the discrete internal energy. The whole system is solved using the Hamiltonian splitting method discovered by He et al. [Phys. Plasmas 22, 124503 (2015)], which has been successfully adopted in constructing symplectic particle-in-cell schemes [J. Xiao et al., Phys. Plasmas 22, 112504 (2015)]. Because of its structure-preserving and explicit nature, this algorithm is especially suitable for large-scale simulations of physics problems that are multi-scale and require long-term fidelity and accuracy. The algorithm is verified via two tests: studies of the dispersion relation of waves in a two-fluid plasma system and the oscillating two-stream instability.
Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S
2017-01-01
Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. A breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. Existing image databases often hold large sets of data whose processing requires a lot of time, so accelerating each of the processing stages in breast cancer detection is an important issue. In this paper, an implementation of an existing algorithm for region-of-interest-based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. Experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. Several DFE configurations were also tested, each giving a different acceleration of the algorithm's execution; those acceleration values are presented, and the experimental results show good acceleration.
MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm based on ICPF
NASA Astrophysics Data System (ADS)
Liu, Yang; Xu, Shiyou; Chen, Zengping; Yuan, Bin
2014-12-01
In this paper, we present a detailed analysis of the performance degradation of inverse synthetic aperture radar (ISAR) imagery formed with the polar format algorithm (PFA) due to an inaccurate rotation center, and we develop a novel algorithm that estimates the rotation center of ISAR targets to overcome the degradation. In real ISAR scenarios, the true rotation center usually does not coincide with the gravity center of the high-resolution range profile (HRRP), owing to the data-driven translational motion compensation. Because of this imprecise rotation-center information, the PFA image suffers from model errors and severe blurring in the cross-range direction. To tackle this problem, an improved PFA based on the integrated cubic phase function (ICPF) is proposed. In this method, the rotation center in slant range is first estimated by ICPF, and the signal is shifted accordingly; the standard PFA can then be carried out straightforwardly. With the proposed method, wide-angle ISAR imagery of non-cooperative targets can be achieved by PFA with improved focus quality. Simulation and real-data experiments confirm the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Zheng, Huanhuan; Xu, Zhaowen; Yu, Changyuan; Gurusamy, Mohan
2017-08-01
A novel indoor positioning system (IPS) with high positioning precision, based on visible light communication (VLC), is proposed and demonstrated in a test-bed with dimensions of 100 cm × 118.5 cm × 128.7 cm. The average positioning distance error is 1.72 cm using the original 2-D positioning algorithm. However, at the corners of the test-bed the positioning errors are larger than elsewhere, so an error correcting algorithm (ECA) is applied at the corners to improve the positioning accuracy. The average positioning error at the four corners decreases from 3.67 cm to 1.55 cm. A 3-D positioning algorithm is then developed, achieving an average positioning error of 1.90 cm in space. Four altitude levels are chosen, and on each receiver plane four points are picked to test the positioning error. The average positioning errors in 3-D space are all within 3 cm on these four levels, and the performance on each level is similar. A random track is also traced to show that, in 3-D space, the positioning error of a random point is within 3 cm.
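Indoor positioning systems of this kind typically reduce to lateration from known anchor positions. The abstract does not give the paper's algorithm, so the following is a generic linearized least-squares sketch with invented anchor coordinates on a tabletop-scale rig:

```python
import numpy as np

def lateration_2d(anchors, dists):
    """Estimate a 2-D receiver position from distances to known anchors
    (e.g. LED lamps in a VLC test-bed) by linearized least squares.
    Generic lateration sketch, not the paper's positioning algorithm."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    x1, y1 = anchors[0]
    d1 = dists[0]
    # Subtracting the first range equation removes the quadratic terms,
    # leaving a linear system A @ [x, y] = b.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d1**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x1**2 - y1**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (100, 0), (0, 118.5), (100, 118.5)]  # cm, illustrative
true = np.array([40.0, 60.0])
d = [np.hypot(*(true - a)) for a in anchors]
est = lateration_2d(anchors, d)
```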
Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel
2016-10-01
The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementation of the Smith-Waterman algorithm on the corresponding hardware architecture, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and up to a 2.58-fold performance gain compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
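For reference, the algorithm being dispatched can be stated in a few lines. This is a plain, unoptimized Smith-Waterman scorer with a linear gap penalty, not one of the tuned GPU/coprocessor implementations the application wraps:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local-alignment score with a linear gap penalty.
    The scoring parameters are illustrative defaults."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # The max with 0 is what makes the alignment local.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

score = smith_waterman("ACACACTA", "AGCACACA")
```

The quadratic dynamic-programming table is exactly what makes the exact algorithm expensive and worth distributing across heterogeneous hardware.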
Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.
Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W
2016-03-31
Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.
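The least-squares resolution step, expressing an observed spectrum as a linear combination of theoretical isotope distributions, can be sketched as follows; the two "species" patterns below are invented for illustration, and the chromatographic dimension the paper also handles is omitted:

```python
import numpy as np

def resolve_spectrum(patterns, observed):
    """Least-squares resolution of an observed spectrum against theoretical
    isotope distributions (one per candidate lipid species, stored as the
    columns of `patterns`). A minimal sketch of the approach."""
    A = np.asarray(patterns, float)      # shape (n_channels, n_species)
    y = np.asarray(observed, float)
    amounts, *_ = np.linalg.lstsq(A, y, rcond=None)
    return amounts

# Two hypothetical species with overlapping isotope patterns.
A = np.array([[0.60, 0.00],
              [0.30, 0.55],
              [0.08, 0.33],
              [0.02, 0.12]])
true = np.array([10.0, 4.0])
amounts = resolve_spectrum(A, A @ true)  # noise-free mixture for the demo
```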
Spectral deblurring: an algorithm for high-resolution, hybrid spectral CT
NASA Astrophysics Data System (ADS)
Clark, D. P.; Badea, C. T.
2015-03-01
We are developing a hybrid, dual-source micro-CT system based on the combined use of an energy integrating (EID) x-ray detector and a photon counting x-ray detector (PCXD). Due to their superior spectral resolving power, PCXDs have the potential to reduce radiation dose and to enable functional and molecular imaging with CT. In most current PCXDs, however, spatial resolution and field of view are limited by hardware development and charge sharing effects. To address these problems, we propose spectral deblurring—a relatively simple algorithm for increasing the spatial resolution of hybrid, spectral CT data. At the heart of the algorithm is the assumption that the underlying CT data is piecewise constant, enabling robust recovery in the presence of noise and spatial blur by enforcing gradient sparsity. After describing the proposed algorithm, we summarize simulation experiments which assess the trade-offs between spatial resolution, contrast, and material decomposition accuracy given realistic levels of noise. When the spatial resolution between imaging chains has a ratio of 5:1, spectral deblurring results in a 52% increase in the material decomposition accuracy of iodine, gadolinium, barium, and water vs. linear interpolation. For a ratio of 10:1, a realistic representation of our hybrid imaging system, a 52% improvement was also seen. Overall, we conclude that the performance breaks down around high frequency and low contrast structures. Following the simulation experiments, we apply the algorithm to ex vivo data acquired in a mouse injected with an iodinated contrast agent and surrounded by vials of iodine, gadolinium, barium, and water.
Smith, Edward M; Littrell, Jack; Olivier, Michael
2007-12-01
High-throughput SNP genotyping platforms use automated genotype calling algorithms to assign genotypes. While these algorithms work efficiently for individual platforms, they are not compatible with other platforms, and have individual biases that result in missed genotype calls. Here we present data on the use of a second complementary SNP genotype clustering algorithm. The algorithm was originally designed for individual fluorescent SNP genotyping assays, and has been optimized to permit the clustering of large datasets generated from custom-designed Affymetrix SNP panels. In an analysis of data from a 3K array genotyped on 1,560 samples, the additional analysis increased the overall number of genotypes by over 45,000, significantly improving the completeness of the experimental data. This analysis suggests that the use of multiple genotype calling algorithms may be advisable in high-throughput SNP genotyping experiments. The software is written in Perl and is available from the corresponding author.
Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy
Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean
2013-11-15
Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates and for template-free implants (such as robotic templates). Methods: Eight clinical cases were chosen randomly from a bank of patients previously treated in our clinic to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated by the algorithm for different numbers of catheters, and the best plan, chosen according to several dosimetry criteria, automatically provides the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the method was validated against prostate clinical cases using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested on breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters, whereas plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable, since no statistical difference was found compared with 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
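The core CVT step, moving each catheter candidate to the centroid of its Voronoi region, can be sketched with a Monte Carlo Lloyd iteration. Here the unit square stands in for the target-volume contour, and the IPSA-based plan scoring is omitted:

```python
import numpy as np

def cvt(n_gen, iters=30, n_samples=20000, seed=1):
    """Monte Carlo Lloyd iteration for a 2-D Centroidal Voronoi Tessellation:
    each generator moves to the centroid of its Voronoi region, estimated from
    uniform random samples. A generic CVT sketch of the catheter-spacing idea,
    not the clinical pipeline (which samples within the planning target
    volume's contour and scores the resulting plans)."""
    rng = np.random.default_rng(seed)
    gens = rng.random((n_gen, 2))
    for _ in range(iters):
        pts = rng.random((n_samples, 2))
        # Assign each random sample to its nearest generator.
        d = ((pts[:, None, :] - gens[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(axis=1)
        for k in range(n_gen):
            mine = pts[nearest == k]
            if len(mine):
                gens[k] = mine.mean(axis=0)  # move to the region's centroid
    return gens

gens = cvt(12)   # e.g. 12 catheter positions, per the clinical finding above
```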
Anisakis simplex recombinant allergens increase diagnosis specificity preserving high sensitivity.
Caballero, María Luisa; Umpierrez, Ana; Perez-Piñar, Teresa; Moneo, Ignacio; de Burgos, Carmen; Asturias, Juan A; Rodríguez-Pérez, Rosa
2012-01-01
So far, the frequency of Anisakis simplex-specific IgE antibodies has been determined by skin prick tests (SPTs) and the ImmunoCAP system. These commercial methods have good sensitivity, but their specificity is poor because they use complete parasite extracts. Our aim was to determine the frequency of sensitization to A. simplex using recombinant Ani s 1, Ani s 3, Ani s 5, Ani s 9 and Ani s 10 and to evaluate these allergens for diagnosis, comparing their performance with the commercial methods. We conducted a descriptive, cross-sectional validation study performed in an allergy outpatient hospital clinic. Patients without fish-related allergy (tolerant patients, n = 99), and A. simplex-allergic patients (n = 35) were studied by SPTs, ImmunoCAP assays and detection of specific IgE to A. simplex recombinant allergens by dot blotting. SPTs and ImmunoCAP assays were positive in 18 and 17% of tolerant patients, respectively. All A. simplex-allergic patients had positive SPTs and ImmunoCAP assays. Specific IgE against at least one of the A. simplex recombinant allergens tested was detected in 15% of sera from tolerant patients and in 100% of sera from A. simplex-allergic patients. Detection of at least one A. simplex recombinant allergen by dot blotting and ImmunoCAP assay using complete extract showed a diagnostic sensitivity of 100% with both methods. However, the specificity of dot blotting with A. simplex recombinant allergens was higher compared with ImmunoCAP (84.85 vs. 82.83%). There are 15% of tolerant patients with specific IgE against important A. simplex allergens. The recombinant allergens studied here increase the specificity of A. simplex diagnosis while keeping the highest sensitivity. A. simplex recombinant allergens should be included with A. simplex allergy diagnostic tests to improve their specificity. Copyright © 2012 S. Karger AG, Basel.
An Efficient Algorithm for Some Highly Nonlinear Fractional PDEs in Mathematical Physics
Ahmad, Jamshad; Mohyud-Din, Syed Tauseef
2014-01-01
In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs) and subsequently Reduced Differential Transform Method (RDTM) is applied on the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by making use of inverse transformation which yields it in terms of original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature. PMID:25525804
NASA Astrophysics Data System (ADS)
Shinbori, Eiji; Takagi, Mikio
1992-11-01
A new image magnification method, called 'IM-GPDCT' (image magnification applying the Gerchberg-Papoulis (GP) iterative algorithm with the discrete cosine transform (DCT)), is described and its performance evaluated. This method markedly improves the quality of a magnified image using a concept that restores the spatial high frequencies conventionally lost through low-pass filtering. These frequencies are restored using two known constraints applied during iterative DCT: (1) the correct information in the passband is known, and (2) the spatial extent of an image is finite. Simulation results show that the IM-GPDCT outperforms three conventional interpolation methods from both a restoration error and an image quality standpoint.
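The two alternating constraints can be demonstrated in one dimension. Since the abstract does not give the exact DCT formulation, the sketch below uses an FFT low-pass and a gap-filling task as a stand-in for the same Gerchberg-Papoulis idea:

```python
import numpy as np

def papoulis_gerchberg(known, mask, band, iters=500):
    """1-D Gerchberg-Papoulis iteration: alternately enforce the band-limit
    in the frequency domain and re-impose the known samples in the signal
    domain. Illustrated with an FFT low-pass; IM-GPDCT applies the same two
    constraints with the DCT for magnification."""
    x = np.where(mask, known, 0.0)
    for _ in range(iters):
        X = np.fft.rfft(x)
        X[band:] = 0.0                    # constraint 1: known finite passband
        x = np.fft.irfft(X, n=len(x))
        x[mask] = known[mask]             # constraint 2: known samples
    return x

rng = np.random.default_rng(0)
n, band = 128, 4
coeffs = np.zeros(n // 2 + 1, complex)
coeffs[1:band] = rng.standard_normal(band - 1) + 1j * rng.standard_normal(band - 1)
truth = np.fft.irfft(coeffs, n=n)         # a genuinely band-limited signal
mask = np.ones(n, bool)
mask[56:72] = False                       # a 16-sample gap is unobserved
recon = papoulis_gerchberg(truth, mask, band)
```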
Niche harmony search algorithm for detecting complex disease associated high-order SNP combinations.
Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; He, Zongzhen; Liu, Yajun; Liu, Zhaowen
2017-09-14
Genome-wide association studies are especially challenging for detecting high-order disease-causing models due to model diversity, the possibly low or even absent marginal effect of a model, and the extraordinary search and computation involved. In this paper, we propose a niche harmony search (HS) algorithm in which joint entropy is used as a heuristic factor to guide the search toward models with low or no marginal effect, and two computationally lightweight scores are selected to evaluate and adapt to the diversity of disease models. In order to obtain all possible suspected pathogenic models, a niche technique is merged with HS; it serves as a taboo region that prevents HS from being trapped in local search. From the resultant set of candidate SNP combinations, we use the G-test statistic to identify true positives. Experiments were performed on twenty typical simulated datasets, in which 12 models have marginal effects and eight have none. Our results indicate that the proposed algorithm has very high detection power for finding suspected disease models in the first stage and is superior to several typical existing approaches in both detection power and CPU runtime on all of these datasets. Application to age-related macular degeneration (AMD) demonstrates that our method is promising for detecting high-order disease-causing models.
Finsterle, S.; Kowalsky, M.B.
2010-10-15
We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.
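The combination of truncation and damping described above can be sketched on a toy problem; the exponential-fit model, tolerances, and parameter values below are illustrative, not the paper's multiphase-flow setup:

```python
import numpy as np

def tsvd_lm_step(J, r, lam, trunc):
    """One Levenberg-Marquardt update computed in the truncated-SVD basis of
    the Jacobian J: singular values below `trunc` are discarded (calibration
    null space), and the damping `lam` shortens steps along poorly determined
    directions while leaving well-determined ones nearly Gauss-Newton."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s >= trunc
    U, s, Vt = U[:, keep], s[keep], Vt[keep]
    # Damped pseudo-inverse restricted to the retained solution space.
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ r))

# Toy model: y = exp(-t * p0) + p1, calibrated by repeated TSVD-LM steps.
t = np.linspace(0.0, 2.0, 20)
p_true = np.array([1.5, 0.3])
y = np.exp(-t * p_true[0]) + p_true[1]

p = np.array([0.5, 0.0])
for _ in range(50):
    model = np.exp(-t * p[0]) + p[1]
    r = y - model                                  # residual vector
    J = np.column_stack([-t * np.exp(-t * p[0]),   # d(model)/d(p0)
                         np.ones_like(t)])         # d(model)/d(p1)
    p = p + tsvd_lm_step(J, r, lam=1e-3, trunc=1e-8)
```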
Surface contribution to high-order aberrations using the Aldis theorem and Andersen's algorithms
NASA Astrophysics Data System (ADS)
Ortiz-Estardante, A.; Cornejo-Rodriguez, Alejandro
1990-07-01
Formulae and computer programs were developed for the surface contributions to high-order aberration coefficients of a symmetrical optical system, using the Aldis theorem and Andersen's algorithms. Starting from the algorithms developed by T. B. Andersen, which allow the high-order aberration coefficients of an optical system to be calculated, we obtained a set of equations for the contribution of each surface of a centered optical system to those coefficients by combining Andersen's equations with the so-called Aldis theorem. The study of the case of an object at infinity has been completed, and the case of an object at a finite distance has more recently been finished as well; the equations have been programmed for both situations. Some typical optical system designs are presented, and the advantages and disadvantages of the developed formulae and method are discussed. In conclusion, Andersen's algorithm has a compact notation and structure well suited to computers, and combining Andersen's results with the Aldis theorem yielded a set of equations, now programmed, for the surface contributions of a centered optical system to high-order aberrations. References: T. B. Andersen, Appl. Opt., 3800 (1980); A. Cox, A System of Optical Design, Focal Press, 1964.
Algorithms for Low-Cost High Accuracy Geomagnetic Measurements in LEO
NASA Astrophysics Data System (ADS)
Beach, T. L.; Zesta, E.; Allen, L.; Chepko, A.; Bonalsky, T.; Wendel, D. E.; Clavier, O.
2013-12-01
Geomagnetic field measurements are a fundamental, key parameter measurement for any space weather application, particularly for tracking the electromagnetic energy input in the Ionosphere-Thermosphere system and for high latitude dynamics governed by the large-scale field-aligned currents. The full characterization of the Magnetosphere-Ionosphere-Thermosphere coupled system necessitates measurements with higher spatial/temporal resolution and from multiple locations simultaneously. This becomes extremely challenging in the current state of shrinking budgets. Traditionally, including a science-grade magnetometer in a mission necessitates very costly integration and design (sensor on long boom) and imposes magnetic cleanliness restrictions on all components of the bus and payload. This work presents an innovative algorithm approach that enables high quality magnetic field measurements by one or more high-quality magnetometers mounted on the spacecraft without booms. The algorithm estimates the background field using multiple magnetometers and current telemetry on board a spacecraft. Results of a hardware-in-the-loop simulation showed an order of magnitude reduction in the magnetic effects of spacecraft onboard time-varying currents--from 300 nT to an average residual of 15 nT.
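The stacking idea, several magnetometers sharing one ambient field plus sensor-specific current-driven contamination, can be sketched as a least-squares problem. The coupling matrices, current vector, and noise levels below are invented for illustration, and the couplings are assumed pre-calibrated:

```python
import numpy as np

def estimate_background(measurements, couplings, currents):
    """Estimate the ambient geomagnetic field from several boom-less
    magnetometers by removing spacecraft-current contamination. Each sensor m
    measures B0 + couplings[m] @ currents; with known coupling matrices, the
    common field B0 is the least-squares solution of the stacked system.
    A sketch of the idea only, not the flight algorithm."""
    M = len(measurements)
    A = np.tile(np.eye(3), (M, 1))        # B0 appears in every sensor equation
    b = np.concatenate([measurements[m] - couplings[m] @ currents
                        for m in range(M)])
    B0, *_ = np.linalg.lstsq(A, b, rcond=None)
    return B0

rng = np.random.default_rng(3)
B0_true = np.array([18000.0, -4000.0, 42000.0])   # nT, LEO-like magnitude
couplings = [rng.normal(scale=50.0, size=(3, 4)) for _ in range(2)]  # 2 sensors
currents = np.array([1.2, 0.4, 2.0, 0.7])         # A, from onboard telemetry
meas = [B0_true + C @ currents + rng.normal(scale=5.0, size=3) for C in couplings]
B0_est = estimate_background(meas, couplings, currents)
```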
NASA Technical Reports Server (NTRS)
Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.
1989-01-01
The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.
Development and Characterization of High-Efficiency, High-Specific Impulse Xenon Hall Thrusters
NASA Technical Reports Server (NTRS)
Hofer, Richard R.; Jacobson, David (Technical Monitor)
2004-01-01
This dissertation presents research aimed at extending the efficient operation of 1600 s specific impulse Hall thruster technology to the 2000 to 3000 s range. Motivated by previous industry efforts and mission studies, the aim of this research was to develop and characterize xenon Hall thrusters capable of both high-specific impulse and high-efficiency operation. During the development phase, the laboratory-model NASA-173M Hall thrusters were designed and their performance and plasma characteristics were evaluated. Experiments with the NASA-173M version 1 (v1) validated the plasma lens magnetic field design. Experiments with the NASA-173M version 2 (v2) showed there was a minimum current density and an optimum magnetic field topography at which efficiency monotonically increased with voltage. Comparison of the thrusters showed that efficiency can be optimized for specific impulse by varying the plasma lens. During the characterization phase, additional plasma properties of the NASA-173Mv2 were measured and a performance model was derived. Results from the model and experimental data showed how efficient operation at high specific impulse was enabled through regulation of the electron current with the magnetic field. The electron Hall parameter was approximately constant with voltage, which confirmed that efficient operation can be realized only over a limited range of Hall parameters.
Numerical algorithms for highly oscillatory dynamic system based on commutator-free method
NASA Astrophysics Data System (ADS)
Li, Wencheng; Deng, Zichen; Zhang, Suying
2007-04-01
In the present paper, an improved modified Magnus integrator algorithm based on the commutator-free method is proposed for second-order dynamic systems with time-dependent high frequencies. Firstly, the second-order dynamic system is recast in a new frame of reference by introducing a new variable, so that the highly oscillatory behaviour is carried by the coefficient matrix. The modified Magnus integrator method, based on local linearization, is then designed for solving this new form, and some optimized strategies for reducing the number of function evaluations and matrix operations are suggested. Finally, several numerical examples of highly oscillatory dynamic systems, such as the Airy equation, the Bessel equation, and the Mathieu equation, are presented to demonstrate the validity and effectiveness of the proposed method.
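A minimal first-order Magnus (exponential midpoint) integrator for the test problem y'' + q(t)y = 0 illustrates the freeze-and-exponentiate idea such methods refine; this generic sketch is not the paper's commutator-free scheme:

```python
import math

def magnus_midpoint(q, y0, v0, t0, t1, n):
    """Exponential-midpoint (first-order Magnus) integrator for the highly
    oscillatory problem y'' + q(t) y = 0 with q > 0: each step applies the
    exact propagator of the frozen-coefficient system evaluated at the
    interval midpoint. A minimal sketch of the Magnus idea."""
    h = (t1 - t0) / n
    y, v = y0, v0
    for k in range(n):
        w = math.sqrt(q(t0 + (k + 0.5) * h))   # frozen frequency at midpoint
        c, s = math.cos(w * h), math.sin(w * h)
        # Exact rotation-like propagator of y'' + w^2 y = 0 over one step.
        y, v = c * y + (s / w) * v, -w * s * y + c * v
    return y, v

# For constant q the frozen propagator is exact: y'' + y = 0, y(0)=1, y'(0)=0,
# integrated to t = pi, where the solution is cos(pi) = -1.
y, v = magnus_midpoint(lambda t: 1.0, 1.0, 0.0, 0.0, math.pi, 100)
```

For oscillatory problems such as the Airy equation, q(t) = t makes the midpoint frequency vary from step to step, which is exactly the regime where Magnus-type methods outperform standard explicit integrators.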
High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
2004-01-01
The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R(sub rs)(lambda), where R(sub rs)(lambda) is defined as the water-leaving radiance, L(sub w)(lambda), divided by the downwelling irradiance just above the sea surface, E(sub d)(lambda,0(+)). The R(sub rs)(lambda) model (Section 3) has two free variables, the absorption coefficient due to phytoplankton at 675 nm, a(sub phi)(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a(sub g)(400). The R(rs) model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene. These control the spectral shapes of the optical constituents of the model. R(sub rs)(lambda(sub i)) values from the MODIS data processing system are placed into the model, the model is inverted, and a(sub phi)(675), a(sub g)(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi
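A minimal sketch of this kind of two-parameter semi-analytical inversion follows. The wavelengths, pure-water absorptions, pigment shape and constant backscattering below are illustrative stand-ins, not the MODIS parameterization; the reflectance form r = 0.0949u + 0.0794u² with u = b_b/(a + b_b) follows the commonly used Gordon-style quadratic:

```python
import math

# Illustrative constants, not the MODIS parameterization.
WAVELENGTHS = [412, 443, 490, 555]
A_WATER     = {412: 0.005, 443: 0.007, 490: 0.015, 555: 0.060}  # aw(l), illustrative
APH_SHAPE   = {412: 0.8, 443: 1.0, 490: 0.65, 555: 0.07}        # normalized aph shape
BACKSCATTER = 0.005                                             # constant bb, illustrative

def forward_rrs(aph, ag400):
    """Forward model: r = 0.0949*u + 0.0794*u^2 with u = bb/(a + bb) and
    a(l) = aw(l) + aph*shape(l) + ag400*exp(-0.015*(l - 400))."""
    rrs = {}
    for l in WAVELENGTHS:
        a = A_WATER[l] + aph * APH_SHAPE[l] + ag400 * math.exp(-0.015 * (l - 400))
        u = BACKSCATTER / (a + BACKSCATTER)
        rrs[l] = 0.0949 * u + 0.0794 * u * u
    return rrs

def invert_rrs(rrs_obs):
    """Invert the two free parameters by brute-force least squares."""
    best = None
    for i in range(51):                 # aph over 0 .. 0.05 m^-1
        for j in range(51):             # ag(400) over 0 .. 0.5 m^-1
            aph, ag = i * 0.001, j * 0.01
            model = forward_rrs(aph, ag)
            err = sum((model[l] - rrs_obs[l]) ** 2 for l in WAVELENGTHS)
            if best is None or err < best[0]:
                best = (err, aph, ag)
    return best[1], best[2]
```

The operational algorithm inverts the model analytically rather than by grid search; the sketch only shows that two spectrally distinct absorbers can be separated from multi-band reflectance.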
Isotope specific resolution recovery image reconstruction in high resolution PET imaging
Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib
2014-05-15
Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution
TEA HF laser with a high specific radiation energy
NASA Astrophysics Data System (ADS)
Puchikin, A. V.; Andreev, M. V.; Losev, V. F.; Panchenko, Yu. N.
2017-01-01
Results of experimental studies of the chemical HF laser with a non-chain reaction are presented. The possibility of the total laser efficiency of 5 % is shown when a traditional C-to-C pumping circuit with the charging voltage of 20-24 kV is used. It is experimentally shown that the specific radiation output energy of 21 J/l is reached at the specific pump energy of 350 J/l in SF6/H2 = 14/1 mixture at the total pressure of 0.27 bar.
Defining and Evaluating Classification Algorithm for High-Dimensional Data Based on Latent Topics
Luo, Le; Li, Li
2014-01-01
Automatic text categorization is one of the key techniques in information retrieval and the data mining field. The classification is usually time-consuming when the training dataset is large and high-dimensional. Many methods have been proposed to solve this problem, but few can achieve satisfactory efficiency. In this paper, we present a method which combines the Latent Dirichlet Allocation (LDA) algorithm and the Support Vector Machine (SVM). LDA is first used to generate reduced dimensional representation of topics as feature in VSM. It is able to reduce features dramatically but keeps the necessary semantic information. The Support Vector Machine (SVM) is then employed to classify the data based on the generated features. We evaluate the algorithm on 20 Newsgroups and Reuters-21578 datasets, respectively. The experimental results show that the classification based on our proposed LDA+SVM model achieves high performance in terms of precision, recall and F1 measure. Further, it can achieve this within a much shorter time-frame. Our process improves greatly upon the previous work in this field and displays strong potential to achieve a streamlined classification process for a wide range of applications. PMID:24416136
An efficient realtime video compression algorithm with high feature preserving capability
NASA Astrophysics Data System (ADS)
Al-Jawad, Naseer; Ehlers, Johan; Jassim, Sabah
2006-05-01
Mobile phones and other hand-held devices are constrained in their memory and computational power, yet new generations of these devices provide access to web-based services and are equipped with digital cameras that make them more attractive to users. These added capabilities are expected to help incorporate such devices into the global communication system. To take advantage of these capabilities, there is a pressing need for highly efficient algorithms, including real-time image and video processing and transmission. This paper is concerned with high-quality video compression for constrained mobile devices. We attempt to tweak a wavelet-based, feature-preserving image compression technique that we developed recently so as to make it suitable for implementation on mobile phones and PDAs. The earlier version of the compression algorithm exploits the statistical properties of the multi-resolution wavelet-transformed images. The main modification is based on the observation that in many cases the statistical parameters of the wavelet subbands of adjacent video frames do not differ significantly. We investigate the possibility of re-using codebooks for a sequence of adjacent frames without adversely affecting image quality. Such an approach results in significant bandwidth and processing-time savings. The performance of this scheme will be tested in comparison to other video compression methods. Such a scheme is expected to be of use in security applications such as the transmission of biometric data for server-based verification.
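The codebook-reuse decision described above can be sketched as a comparison of subband statistics between adjacent frames; the relative threshold below is an illustrative assumption, not a value from the paper:

```python
def subband_stats(subband):
    """Mean and standard deviation of one wavelet subband (flat list)."""
    n = len(subband)
    mean = sum(subband) / n
    var = sum((x - mean) ** 2 for x in subband) / n
    return mean, var ** 0.5

def can_reuse_codebook(prev_subbands, curr_subbands, tol=0.05):
    """Reuse the previous frame's codebook when every subband's mean and
    standard deviation changed by less than `tol` (relative).  The
    threshold is illustrative; a codec would tune it against distortion."""
    for prev, curr in zip(prev_subbands, curr_subbands):
        pm, ps = subband_stats(prev)
        cm, cs = subband_stats(curr)
        scale = max(abs(pm), abs(ps), 1e-9)
        if abs(cm - pm) > tol * scale or abs(cs - ps) > tol * scale:
            return False
    return True
```

When the statistics drift past the threshold, the encoder would fall back to retraining the codebook for the new frame.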
Algorithm research of high-precision optical interferometric phase demodulation based on FPGA
NASA Astrophysics Data System (ADS)
Zhi, Chunxiao; Sun, Jinghua
2012-11-01
An optical interferometric phase demodulation algorithm based on the phase generated carrier (PGC) principle is presented; it enables high-precision signal demodulation for optical interference measurements and is applicable to fibre-optic displacement and vibration sensors. The modulated photodetector signal is sampled at eight times the carrier frequency. From these samples, the phase modulation depth and phase error are computed through a feedback loop to keep the system at its optimum working point, while the same samples yield the numerical value of the phase. The algorithm replaces the correlation filtering and other complex calculations of traditional PGC digital demodulation with additions and subtractions, making full use of the high-speed, parallel data processing of an FPGA. At this speed, the FPGA also maintains high phase demodulation precision and a wide dynamic range, completing data access within a single clock cycle.
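The arctangent form of PGC demodulation that such schemes implement can be sketched as follows. The snippet assumes a modulation depth near 2.63 rad (where J1 = J2, so the Bessel factors cancel in the ratio) and eight samples per carrier period as in the paper, but it is a floating-point sketch, not the FPGA add/subtract pipeline:

```python
import math

def pgc_signal(phi, c=2.63, spp=8, periods=1, a=1.0, b=0.5):
    """Synthesize s = A + B*cos(C*cos(w0*t) + phi), sampled at `spp`
    samples per carrier period (illustrative test signal)."""
    return [a + b * math.cos(c * math.cos(2 * math.pi * k / spp) + phi)
            for k in range(spp * periods)]

def pgc_demodulate(samples, spp=8):
    """Mix with the carrier and its second harmonic, average, and take
    atan2.  The averages are proportional to -J1(C)*sin(phi) and
    -J2(C)*cos(phi); with C ~= 2.63 rad, J1(C) = J2(C) and the Bessel
    factors cancel, so atan2 returns the interferometric phase."""
    n = len(samples)
    p = sum(s * math.cos(2 * math.pi * k / spp) for k, s in enumerate(samples)) / n
    q = sum(s * math.cos(4 * math.pi * k / spp) for k, s in enumerate(samples)) / n
    return math.atan2(-p, -q)
```

With 8 samples per period, harmonics above the 6th alias weakly into the mixing products, so the recovered phase carries a small (sub-percent) systematic error; a hardware implementation would also track the modulation depth in a feedback loop as the abstract describes.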
NASA Astrophysics Data System (ADS)
Zhang, J. X.; Yang, J. H.; Reinartz, P.
2016-06-01
Pan-sharpening of very high resolution remotely sensed imagery needs to enhance spatial details while preserving spectral characteristics, and to adjust the sharpened results to balance these two objectives. To meet these requirements, this paper provides an innovative solution. The block-regression-based (BR) algorithm, previously presented for the fusion of SAR and optical imagery, is first applied to sharpen very high resolution satellite imagery, and the key parameter for adjusting the fusion result, the block size, is optimized in two experiments on WorldView-2 and QuickBird datasets, in which the optimal block size is selected through quantitative comparison of the fusion results for different block sizes. Compared with five fusion algorithms (PC, CN, AWT, Ehlers, BDF) by means of quantitative analysis, BR is reliable for different data sources and maximizes the enhancement of spatial detail at the expense of minimal spectral distortion.
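The block-regression idea can be sketched per block as a least-squares injection gain. This 1-D toy version (regress the MS band on the degraded pan within each block, then inject the pan detail with that gain) is an illustration of the principle, not the authors' implementation:

```python
def block_regression_fuse(ms, pan, pan_low, block):
    """Per-block detail injection: fused = ms + gain * (pan - pan_low),
    where gain is the least-squares regression slope of the MS band on
    the degraded pan within the block.  1-D signals for brevity; `block`
    is the block length (the parameter the paper optimizes)."""
    fused = []
    for start in range(0, len(ms), block):
        m = ms[start:start + block]
        pl = pan_low[start:start + block]
        p = pan[start:start + block]
        mm, pm = sum(m) / len(m), sum(pl) / len(pl)
        cov = sum((a - mm) * (b - pm) for a, b in zip(m, pl))
        var = sum((b - pm) ** 2 for b in pl)
        gain = cov / var if var > 0 else 0.0
        fused.extend(mi + gain * (pi - pli) for mi, pi, pli in zip(m, p, pl))
    return fused
```

Larger blocks average the gain over more pixels (better spectral preservation), smaller blocks adapt it locally (sharper detail) — which is why block size is the natural tuning knob between the two objectives.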
Crystal Symmetry Algorithms in a High-Throughput Framework for Materials
NASA Astrophysics Data System (ADS)
Taylor, Richard
The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully-integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated in the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and examination of the algorithms scaling with cell size and symmetry is also reported.
Specification of High Activity Gamma-Ray Sources.
ERIC Educational Resources Information Center
International Commission on Radiation Units and Measurements, Washington, DC.
The report is concerned with making recommendations for the specifications of gamma ray sources, which relate to the quantity of radioactive material and the radiation emitted. Primary consideration is given to sources in teletherapy and to a lesser extent those used in industrial radiography and in irradiation units used in industry and research.…
NASA Astrophysics Data System (ADS)
Wen, Xianfei; Yang, Haori
2015-06-01
A major challenge in utilizing spectroscopy techniques for nuclear safeguards is to perform high-resolution measurements at an ultra-high throughput rate. Traditionally, piled-up pulses are rejected to ensure good energy resolution. To improve throughput rate, high-pass filters are normally implemented to shorten pulses. However, this reduces signal-to-noise ratio and causes degradation in energy resolution. In this work, a pulse pile-up recovery algorithm based on template-matching was proved to be an effective approach to achieve high-throughput gamma ray spectroscopy. First, a discussion of the algorithm was given in detail. Second, the algorithm was then successfully utilized to process simulated piled-up pulses from a scintillator detector. Third, the algorithm was implemented to analyze high rate data from a NaI detector, a silicon drift detector and a HPGe detector. The promising results demonstrated the capability of this algorithm to achieve high-throughput rate without significant sacrifice in energy resolution. The performance of the template-matching algorithm was also compared with traditional shaping methods.
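Template-matching pile-up recovery can be sketched as a least-squares fit of shifted copies of a known pulse template. The template shape, the two-pulse assumption and the known first arrival time below are illustrative simplifications, not the paper's full algorithm:

```python
import math

def template(k, length=64):
    """Illustrative detector pulse: fast rise, exponential decay."""
    if k < 0 or k >= length:
        return 0.0
    return math.exp(-k / 20.0) - math.exp(-k / 3.0)

def fit_pileup(waveform, t1=0):
    """Assume two overlapping pulses a1*T(k-t1) + a2*T(k-t2) with t1
    known; grid-search t2 and solve the 2x2 least-squares problem for
    the amplitudes at each candidate shift."""
    n = len(waveform)
    best = None
    for t2 in range(t1 + 1, n):
        T1 = [template(k - t1) for k in range(n)]
        T2 = [template(k - t2) for k in range(n)]
        s11 = sum(a * a for a in T1)
        s22 = sum(b * b for b in T2)
        s12 = sum(a * b for a, b in zip(T1, T2))
        b1 = sum(y * a for y, a in zip(waveform, T1))
        b2 = sum(y * b for y, b in zip(waveform, T2))
        det = s11 * s22 - s12 * s12
        if abs(det) < 1e-12:
            continue
        a1 = (b1 * s22 - b2 * s12) / det
        a2 = (b2 * s11 - b1 * s12) / det
        resid = sum((y - a1 * p - a2 * q) ** 2
                    for y, p, q in zip(waveform, T1, T2))
        if best is None or resid < best[0]:
            best = (resid, t2, a1, a2)
    return best[1], best[2], best[3]
```

Recovering both amplitudes rather than rejecting the piled-up event is what preserves throughput without sacrificing the energy estimate.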
Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D
2006-09-19
Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
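The core of the algorithm — choosing the linear combination of reference images that minimizes residual noise within a subsection — reduces to solving the normal equations of a least-squares problem. A minimal sketch with flattened 1-D subsections and no additional constraint terms:

```python
def solve_linear(M, v):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def optimal_coefficients(target, refs):
    """Least-squares coefficients c minimizing
    ||target - sum_i c_i * refs_i||^2 within one image subsection
    (flattened to 1-D): solve the normal equations (R^T R) c = R^T t."""
    k = len(refs)
    gram = [[sum(a * b for a, b in zip(refs[i], refs[j])) for j in range(k)]
            for i in range(k)]
    rhs = [sum(a * b for a, b in zip(refs[i], target)) for i in range(k)]
    return solve_linear(gram, rhs)
```

Subtracting `sum(c_i * refs_i)` from the target subsection then leaves the residual in which a faint companion is sought; optimizing each subsection independently is what lets the reference adapt to locally varying speckle behaviour.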
Hochberg, Alan M; Hauben, Manfred; Pearson, Ronald K; O'Hara, Donald J; Reisinger, Stephanie J; Goldsmith, David I; Gould, A Lawrence; Madigan, David
2009-01-01
Pharmacovigilance data-mining algorithms (DMAs) are known to generate significant numbers of false-positive signals of disproportionate reporting (SDRs), using various standards to define the terms 'true positive' and 'false positive'. To construct a highly inclusive reference event database of reported adverse events for a limited set of drugs, and to utilize that database to evaluate three DMAs for their overall yield of scientifically supported adverse drug effects, with an emphasis on ascertaining false-positive rates as defined by matching to the database, and to assess the overlap among SDRs detected by various DMAs. A sample of 35 drugs approved by the US FDA between 2000 and 2004 was selected, including three drugs added to cover therapeutic categories not included in the original sample. We compiled a reference event database of adverse event information for these drugs from historical and current US prescribing information, from peer-reviewed literature covering 1999 through March 2006, from regulatory actions announced by the FDA and from adverse event listings in the British National Formulary. Every adverse event mentioned in these sources was entered into the database, even those with minimal evidence for causality. To provide some selectivity regarding causality, each entry was assigned a level of evidence based on the source of the information, using rules developed by the authors. Using the FDA adverse event reporting system data for 2002 through 2005, SDRs were identified for each drug using three DMAs: an urn-model based algorithm, the Gamma Poisson Shrinker (GPS) and proportional reporting ratio (PRR), using previously published signalling thresholds. The absolute number and fraction of SDRs matching the reference event database at each level of evidence was determined for each report source and the data-mining method. Overlap of the SDR lists among the various methods and report sources was tabulated as well. The GPS algorithm had the lowest
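Of the three DMAs compared, the PRR is the simplest to state: for a 2x2 drug-event contingency table, a signal of disproportionate reporting is commonly declared when PRR >= 2, chi-square >= 4 and there are at least 3 reports (the criterion of Evans et al.). A sketch:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for a drug-event pair:
        a = reports with drug and event    b = drug, other events
        c = other drugs, event             d = other drugs, other events"""
    return (a / (a + b)) / (c / (c + d))

def chi2_yates(a, b, c, d):
    """Yates-corrected chi-square for the same 2x2 table."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2.0) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def prr_signal(a, b, c, d):
    """Common signalling criterion: PRR >= 2, chi2 >= 4, >= 3 reports."""
    return a >= 3 and prr(a, b, c, d) >= 2.0 and chi2_yates(a, b, c, d) >= 4.0
```

The GPS and urn-model algorithms evaluated in the paper replace this ratio with shrinkage estimates that damp spurious signals from small counts.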
Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min
2015-11-03
Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
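The fragment ion index underlying such unrestricted searches can be sketched as an inverted index from binned fragment masses to peptide ids, with the coarse score being the number of matching products. The bin width and the ±1-bin tolerance below are illustrative choices, not pFind's parameters:

```python
from collections import defaultdict

BIN = 0.02  # fragment m/z bin width (illustrative for high-resolution data)

def build_ion_index(peptides):
    """Inverted index: fragment-mass bin -> list of peptide ids.
    `peptides` maps peptide id -> list of fragment masses."""
    index = defaultdict(list)
    for pid, frags in peptides.items():
        for m in frags:
            index[round(m / BIN)].append(pid)
    return index

def score_spectrum(index, peaks):
    """Count matched fragments per candidate peptide; the coarse score is
    just the number of matching products, looked up bin by bin."""
    scores = defaultdict(int)
    for m in peaks:
        b = round(m / BIN)
        for db in (b - 1, b, b + 1):      # tolerate bin-boundary effects
            for pid in index.get(db, ()):
                scores[pid] += 1
    return max(scores, key=scores.get) if scores else None
```

Because the index is keyed by fragment mass rather than by digestion rule, candidates with unexpected cleavages or a shifted precursor can still accumulate matches, which is what makes the open search tractable.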
NASA Astrophysics Data System (ADS)
Ling, Xiang; Zhang, Yu
2017-07-01
The motion performance of a high-acceleration, high-precision air suspension platform is affected by the electromechanical characteristics of the drive system, the fluid characteristics of the air suspension guide rail, and load disturbances. A mathematical model of the system is established, and the controller is designed based on the optimal objective function value. The ideal controller performance is proposed as the evaluation benchmark, and the performance of the system is evaluated by the ratio of the optimal objective value to this benchmark. Based on the established evaluation index, the influence of noise disturbance and of the estimation vector on the stability and robustness of the system is evaluated, and the system performance is further optimized.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.
2015-08-01
Principal limitations of the standard THz-TDS method for detection and identification are demonstrated under real conditions (at a long distance of about 3.5 m and at a high relative humidity of more than 50%) using neutral substances: a thick paper bag, paper napkins and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even when the THz signals are measured under laboratory conditions (at a distance of 30-40 cm from the receiver and at a low relative humidity of less than 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and is practical to implement, especially for security applications and non-destructive testing.
Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD
NASA Astrophysics Data System (ADS)
Congiusta, Antonio; Talia, Domenico; Trunfio, Paolo
Data Mining is often a compute-intensive and time-consuming process. For this reason, several Data Mining systems have been implemented on parallel computing platforms to achieve high performance in the analysis of large data sets. Moreover, when large data repositories are coupled with geographical distribution of data, users and systems, more sophisticated technologies are needed to implement high-performance distributed KDD systems. Since computational Grids emerged as privileged platforms for distributed computing, a growing number of Grid-based KDD systems has been proposed. In this chapter we first discuss different ways to exploit parallelism in the main Data Mining techniques and algorithms, then we discuss Grid-based KDD systems. Finally, we introduce the Knowledge Grid, an environment which makes use of standard Grid middleware to support the development of parallel and distributed knowledge discovery applications.
Cosmo-SkyMed Di Seconda Generazione Innovative Algorithms and High Performance SAR Data Processors
NASA Astrophysics Data System (ADS)
Mari, S.; Porfilio, M.; Valentini, G.; Serva, S.; Fiorentino, C. A. M.
2016-08-01
In the frame of the COSMO-SkyMed di Seconda Generazione (CSG) programme, extensive research activities have been conducted on SAR data processing, with particular emphasis on high resolution processors, wide-field product noise and coregistration algorithms. As regards high resolution, it is essential to create a model for the management of all those elements that are usually considered negligible but alter the target phase response when it is "integrated" for several seconds. Concerning SAR wide-field product noise removal, one of the major problems is the ability to compensate for all the phenomena that affect the received signal intensity. Research activities are aimed at developing adaptive-iterative techniques for the compensation of inaccuracies in the knowledge of radar antenna pointing, achieving compensation of the order of thousandths of a degree. Moreover, several modifications of the image coregistration algorithm have been studied, aimed at improving performance and reducing the computational effort.
Toward an image compression algorithm for the high-resolution electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
Highly accurate prediction of specific activity using deep learning.
Sheinfeld, Mati; Levinson, Samuel; Orion, Itzhak
2017-09-20
Building materials can contain elevated levels of naturally occurring radioactive materials (NORM), in particular Ra-226, Th-232 and K-40. Safety standards, such as IAEA Safety Standards Series No. GSR Part 3, dictate particular activities that must be fulfilled to ensure adequate safety. Traditional methods include spectral analysis of material samples measured by an HPGe detector and then processed to calculate the specific activity of the NORM in Bq/kg with 1.96 σ uncertainty. This paper describes a new method that pre-processes the raw spectrum and then feeds the result into a set of pre-trained neural networks, thus generating the required specific radionuclide activity as well as the 1.96 σ uncertainty. Copyright © 2017 Elsevier Ltd. All rights reserved.
[Structural characteristics providing for high specificity of enteropeptidase].
Mikhaĭlova, A G; Rumsh, L D
1998-04-01
The effects of structural modification upon the specificity of enteropeptidase were studied. A variation in the unique specificity of the enzyme was shown to be the result of an autolysis caused by the enzyme's loss of calcium ions. The cleavage sites of the autolysis were determined. A truncated enzyme containing the C-terminal fragment of its heavy chain (466-800 residues) and the intact light chain were shown to be the products of autolysis. The kinetic parameters of the hydrolysis of trypsinogen, a recombinant protein, and a peptide substrate with both forms of enteropeptidase were determined. Conditions were found that can help regulate the transition of the native enzyme into the truncated form. A hypothesis was proposed concerning the autoactivational character of proenteropeptidase processing.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
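The key computational trick — projecting the damped normal equations (JᵀJ + λI)δ = Jᵀr onto a Krylov subspace once and recycling that subspace for every damping parameter λ — can be sketched with Lanczos tridiagonalization. This toy dense version illustrates the idea, not the MADS/Julia implementation:

```python
import math

def solve_linear(M, v):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def lanczos(matvec, b, m):
    """Lanczos tridiagonalization of a symmetric matrix (given as a matvec)
    with starting vector b: returns the orthonormal basis V and the
    tridiagonal entries (alpha on the diagonal, beta off the diagonal)."""
    beta0 = math.sqrt(sum(x * x for x in b))
    V = [[x / beta0 for x in b]]
    alpha, beta = [], []
    for j in range(m):
        w = matvec(V[j])
        if j > 0:
            w = [wi - beta[j - 1] * vi for wi, vi in zip(w, V[j - 1])]
        a = sum(wi * vi for wi, vi in zip(w, V[j]))
        alpha.append(a)
        w = [wi - a * vi for wi, vi in zip(w, V[j])]
        if j < m - 1:
            nb = math.sqrt(sum(x * x for x in w))
            beta.append(nb)
            V.append([x / nb for x in w])
    return V, alpha, beta, beta0

def lm_step(V, alpha, beta, beta0, lam):
    """Solve the damped system projected onto the Krylov subspace,
    (T + lam*I) y = beta0 * e1, and lift back: delta = V y.
    V, alpha, beta are computed once and recycled for every lam."""
    m = len(alpha)
    T = [[0.0] * m for _ in range(m)]
    for i in range(m):
        T[i][i] = alpha[i] + lam
        if i + 1 < m:
            T[i][i + 1] = T[i + 1][i] = beta[i]
    y = solve_linear(T, [beta0] + [0.0] * (m - 1))
    return [sum(y[j] * V[j][i] for j in range(m)) for i in range(len(V[0]))]
```

Because the shift λI leaves the Krylov space of (JᵀJ, Jᵀr) unchanged, trying a new damping parameter only costs a small tridiagonal solve instead of a fresh large linear solve.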
Chen, Yukun; Carroll, Robert J; Hinz, Eugenia R McPeek; Shah, Anushi; Eyler, Anne E; Denny, Joshua C; Xu, Hua
2013-12-01
Generalizable, high-throughput phenotyping methods based on supervised machine learning (ML) algorithms could significantly accelerate the use of electronic health records data for clinical and translational research. However, they often require large numbers of annotated samples, which are costly and time-consuming to review. We investigated the use of active learning (AL) in ML-based phenotyping algorithms. We integrated an uncertainty sampling AL approach with support vector machines-based phenotyping algorithms and evaluated its performance using three annotated disease cohorts including rheumatoid arthritis (RA), colorectal cancer (CRC), and venous thromboembolism (VTE). We investigated performance using two types of feature sets: unrefined features, which contained at least all clinical concepts extracted from notes and billing codes; and a smaller set of refined features selected by domain experts. The performance of the AL was compared with a passive learning (PL) approach based on random sampling. Our evaluation showed that AL outperformed PL on three phenotyping tasks. When unrefined features were used in the RA and CRC tasks, AL reduced the number of annotated samples required to achieve an area under the curve (AUC) score of 0.95 by 68% and 23%, respectively. AL also achieved a reduction of 68% for VTE with an optimal AUC of 0.70 using refined features. As expected, refined features improved the performance of phenotyping classifiers and required fewer annotated samples. This study demonstrated that AL can be useful in ML-based phenotyping methods. Moreover, AL and feature engineering based on domain knowledge could be combined to develop efficient and generalizable phenotyping methods. PMID:23851443
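Uncertainty sampling, as used in the study, can be illustrated with a minimal pool-based loop (plain-numpy logistic regression stands in for the SVM classifier; all names and data here are illustrative, not the study's pipeline):

```python
import numpy as np

def train_logreg(X, y, lr=0.5, iters=500):
    """Gradient-descent logistic regression (a stand-in for the SVM
    classifier used in the study)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def uncertainty(w, X):
    """|p - 0.5|: small values mean the classifier is least certain."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return np.abs(p - 0.5)

def active_learning(X_pool, y_pool, n_init=4, n_queries=10):
    """Pool-based uncertainty sampling: repeatedly 'annotate' the sample
    the current model is least sure about (passive learning would pick
    a random sample here instead)."""
    labeled = list(range(n_init))               # initial labeled seed
    pool = list(range(n_init, len(X_pool)))
    for _ in range(n_queries):
        w = train_logreg(X_pool[labeled], y_pool[labeled])
        u = uncertainty(w, X_pool[pool])
        pick = pool[int(np.argmin(u))]          # most uncertain sample
        labeled.append(pick)
        pool.remove(pick)                       # query its label
    return train_logreg(X_pool[labeled], y_pool[labeled]), labeled
```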
Coaxial plasma thrusters for high specific impulse propulsion
NASA Technical Reports Server (NTRS)
Schoenberg, Kurt F.; Gerwin, Richard A.; Barnes, Cris W.; Henins, Ivars; Mayo, Robert; Moses, Ronald, Jr.; Scarberry, Richard; Wurden, Glen
1991-01-01
A fundamental basis for coaxial plasma thruster performance is presented and the steady-state, ideal MHD properties of a coaxial thruster using an annular magnetic nozzle are discussed. Formulas for power usage, thrust, mass flow rate, and specific impulse are derived and used to assess thruster performance. The performance estimates are compared with the observed properties of an unoptimized coaxial plasma gun. These comparisons support the hypothesis that ideal MHD plays an important role in coaxial plasma thruster dynamics.
The wrapper: a surface optimization algorithm that preserves highly curved areas
NASA Astrophysics Data System (ADS)
Gueziec, Andre P.; Dean, David
1994-09-01
Software to construct polygonal models of anatomical structures embedded as isosurfaces in 3D medical images has been available since the mid-1970s. Such models are used for visualization, simulation, measurements (single and multi-modality image registration), and statistics. When working with standard MR- or CT-scans, the surface obtained can contain several million triangles. These models contain data an order of magnitude larger than that which can be efficiently handled by current workstations or transmitted through networks. These algorithms generally ignore efficient combinations that would produce fewer, well-shaped triangles. An efficient algorithm must not create a larger data structure than present in the raw data. Recently, much research has been done on the simplification and optimization of surfaces ([Moore and Warren, 1991]; [Schroeder et al., 1992]; [Turk, 1992]; [Hoppe et al., 1993]; [Kalvin and Taylor, 1994]). All of these algorithms satisfy two criteria, consistency and accuracy, to some degree. Consistent simplification occurs via predictable patterns. Accuracy is measured in terms of fidelity to the original surface, and is a prerequisite for collecting reliable measurements from the simplified surface. We describe the 'Wrapper' algorithm that simplifies triangulated surfaces while preserving the same topological characteristics. We employ the same simplification operation in all cases. However, simplification is restricted but not forbidden in high curvature areas. This hierarchy of operations results in homogeneous triangle aspect and size. Images undergoing compression ratios between 10:1 and 20:1 are visually identical to full resolution images. More importantly, the metric accuracy of the simplified surfaces appears to be unimpaired. Measurements based upon 'ridge curves' (sensu [Cutting et al., 1993]) extracted on polygonal models were recently introduced [Ayache et al., 1993]. We compared ridge curves digitized from full resolution
High-stability algorithm for the three-pattern decomposition of global atmospheric circulation
NASA Astrophysics Data System (ADS)
Cheng, Jianbo; Gao, Chenbin; Hu, Shujuan; Feng, Guolin
2017-07-01
In order to study the atmospheric circulation from a global perspective, the three-pattern decomposition of global atmospheric circulation (TPDGAC) has been proposed in our previous studies. In this work, to make the TPDGAC easy and accurate to apply in the diagnostic analysis of atmospheric circulation, we present a high-stability algorithm for the TPDGAC. By using the TPDGAC, the global atmospheric circulation is decomposed into the three-dimensional (3D) horizontal, meridional, and zonal circulations (three-pattern circulations). In particular, the global zonal mean meridional circulation is essentially the three-cell meridional circulation. To demonstrate the rationality and correctness of the proposed numerical algorithm, the climatology of the three-pattern circulations and the evolution characteristics of the strength and meridional width of the Hadley circulation during 1979-2015 have been investigated using five reanalysis datasets. Our findings reveal that the three-pattern circulations capture the main features of the Rossby, Hadley, and Walker circulations. The Hadley circulation shows a significant intensification during boreal winter in the Northern Hemisphere and shifts significantly poleward during boreal (austral) summer and autumn in the Northern (Southern) Hemisphere.
Align-m--a new algorithm for multiple alignment of highly divergent sequences.
Van Walle, Ivo; Lasters, Ignace; Wyns, Lode
2004-06-12
Multiple alignment of highly divergent sequences is a challenging problem for which available programs tend to show poor performance. Generally, this is due to a scoring function that does not describe biological reality accurately enough or a heuristic that cannot explore solution space efficiently enough. In this respect, we present a new program, Align-m, that uses a non-progressive local approach to guide a global alignment. Two large test sets were used that represent the entire SCOP classification and cover sequence similarities between 0 and 50% identity. Performance was compared with the publicly available algorithms ClustalW, T-Coffee and DiAlign. In general, Align-m has comparable or slightly higher accuracy in terms of correctly aligned residues, especially for distantly related sequences. Importantly, it aligns much fewer residues incorrectly, with average differences of over 15% compared with some of the other algorithms. Align-m and the test sets are available at http://bioinformatics.vub.ac.be
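The abstract does not spell out Align-m's local-to-global procedure; for reference, the "global alignment" ingredient that such programs build on is classically computed with Needleman-Wunsch dynamic programming, sketched here with assumed toy scoring parameters (not Align-m's scoring function):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score of sequences a and b
    (score-only Needleman-Wunsch; toy scoring parameters)."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap                 # leading gaps in b
    for j in range(1, m + 1):
        S[0][j] = j * gap                 # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + s,   # (mis)match
                          S[i - 1][j] + gap,     # gap in b
                          S[i][j - 1] + gap)     # gap in a
    return S[n][m]
```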
Snyder, Abigail C.; Jiao, Yu
2010-10-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
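The construction described, wrapping 1-D integration routines inside one another to evaluate a 4-D integral, can be sketched as follows (numpy's Gauss-Legendre rule stands in for the GNU Scientific Library solvers; this is the nesting idea, not the SNS code):

```python
import numpy as np

def nested_quad(f, bounds, n=16):
    """Integrate f over a hyper-rectangle by nesting 1-D Gauss-Legendre
    rules: each level of the recursion is 'one 1-D solver' whose integrand
    is itself an integral over the remaining dimensions."""
    nodes, weights = np.polynomial.legendre.leggauss(n)

    def integrate(dim, point):
        a, b = bounds[dim]
        x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map [-1, 1] -> [a, b]
        w = 0.5 * (b - a) * weights
        if dim == len(bounds) - 1:
            vals = np.array([f(*point, xi) for xi in x])          # innermost dim
        else:
            vals = np.array([integrate(dim + 1, point + (xi,)) for xi in x])
        return float(w @ vals)

    return integrate(0, ())
```

The cost grows as n^d function evaluations, which is exactly why the abstract contrasts this approach with quasi-Monte Carlo methods for higher dimensions.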
GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging
Pryor, Alan; Yang, Yongsoo; Rana, Arjun; ...
2017-09-05
Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
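GENFIRE's core loop, alternating between enforcing measured Fourier data and physical constraints in real space, can be caricatured in 2-D (a toy projection-onto-constraints iteration for illustration, not the released implementation, which works on a 3-D oversampled grid with angular refinement):

```python
import numpy as np

def fourier_iterative_recon(F_meas, mask, n_iter=300):
    """Toy Fourier-iterative reconstruction: alternate between
    (a) resetting the Fourier coefficients where data were measured
    (mask == True) and (b) enforcing positivity in real space."""
    est = np.zeros(mask.shape)
    for _ in range(n_iter):
        F = np.fft.fft2(est)
        F[mask] = F_meas[mask]      # reciprocal-space (data) constraint
        est = np.fft.ifft2(F).real
        est[est < 0] = 0.0          # real-space (positivity) constraint
    return est
```

Both steps are projections onto convex sets, so the iterate moves monotonically no farther from any image satisfying both constraints; with enough measured coefficients the loop recovers the object.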
Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric
2014-01-01
Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG)—a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer’s disease (AD) and reveal findings that could lead to advancements in AD research. PMID:22665720
Simulation of Trajectories for High Specific Impulse Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Difficulties in approximating flight times and deliverable masses for continuous thrust propulsion systems have complicated comparison and evaluation of proposed propulsion concepts. These continuous thrust propulsion systems are of interest to many groups, not the least of which are the electric propulsion and fusion communities. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. The analytical method derived in the companion paper was also used to simulate the trajectory. The accuracy of this method is discussed in the paper.
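The quantities tabulated in such charts are tied together by standard ideal relations for continuous-thrust systems; a minimal sketch of those relations (textbook identities, not the VARITOP/IPOST internals, and the Tsiolkovsky line is an impulsive approximation rather than the full trajectory integration the paper performs):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_s):
    """Effective exhaust velocity from specific impulse: ve = g0 * Isp."""
    return G0 * isp_s

def thrust_from_jet_power(power_w, isp_s):
    """Ideal jet power P = F * ve / 2, so F = 2 P / ve."""
    return 2.0 * power_w / exhaust_velocity(isp_s)

def mass_flow(thrust_n, isp_s):
    """F = mdot * ve  =>  mdot = F / (g0 * Isp)."""
    return thrust_n / exhaust_velocity(isp_s)

def final_mass_fraction(delta_v, isp_s):
    """Tsiolkovsky: m_f / m_0 = exp(-dv / ve)."""
    return math.exp(-delta_v / exhaust_velocity(isp_s))
```

For example, a 1 MW jet at Isp = 3000 s gives roughly 68 N of thrust and a mass flow of about 2.3 g/s, which is the kind of trade the charts make visible.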
Shift and Mean Algorithm for Functional Imaging with High Spatio-Temporal Resolution
Rama, Sylvain
2015-01-01
Understanding neuronal physiology requires recording electrical activity in many small and remote compartments such as dendrites, axons or dendritic spines. To do so, electrophysiology has long been the tool of choice, as it allows recording very subtle and fast changes in electrical activity. However, electrophysiological measurements are mostly limited to large neuronal compartments such as the neuronal soma. To overcome these limitations, optical methods have been developed, allowing the monitoring of changes in fluorescence of fluorescent reporter dyes inserted into the neuron, with a spatial resolution theoretically only limited by the dye wavelength and optical devices. However, the temporal and spatial resolutive power of functional fluorescence imaging of live neurons is often limited by a necessary trade-off between image resolution, signal to noise ratio (SNR) and speed of acquisition. Here, I propose to use a Super-Resolution Shift and Mean (S&M) algorithm previously used in image computing to improve the SNR, time sampling and spatial resolution of acquired fluorescent signals. I demonstrate the benefits of this methodology using two examples: voltage imaging of action potentials (APs) in soma and dendrites of CA3 pyramidal cells and calcium imaging in the dendritic shaft and spines of CA3 pyramidal cells. I show that this algorithm allows recording a broad area at low speed to achieve a high SNR, and then picking the signal in any small compartment and resampling it at high speed. This method preserves both the SNR and the temporal resolution of the signal, while acquiring the original images at high spatial resolution. PMID:26635526
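The core averaging step can be illustrated with a 1-D toy: given copies of a signal acquired with known integer shifts, realigning and averaging them reduces the noise by roughly the square root of the number of copies (the paper's actual sub-pixel, 2-D procedure is more involved; this only shows the shift-and-mean principle):

```python
import numpy as np

def shift_and_mean(frames, shifts):
    """Undo each frame's known (circular, integer) shift, then average.
    Averaging N aligned copies leaves the signal intact while shrinking
    independent noise by ~ 1/sqrt(N)."""
    aligned = [np.roll(f, -s) for f, s in zip(frames, shifts)]
    return np.mean(aligned, axis=0)
```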
2014-01-01
Background Databases of medical claims can be valuable resources for cardiovascular research, such as comparative effectiveness and pharmacovigilance studies of cardiovascular medications. However, claims data do not include all of the factors used for risk stratification in clinical care. We sought to develop claims-based algorithms to identify individuals at high estimated risk for coronary heart disease (CHD) events, and to identify uncontrolled low-density lipoprotein (LDL) cholesterol among statin users at high risk for CHD events. Methods We conducted a cross-sectional analysis of 6,615 participants ≥66 years old using data from the REasons for Geographic And Racial Differences in Stroke (REGARDS) study baseline visit in 2003–2007 linked to Medicare claims data. Using REGARDS data we defined high risk for CHD events as having a history of CHD, at least 1 risk equivalent, or Framingham CHD risk score >20%. Among statin users at high risk for CHD events we defined uncontrolled LDL cholesterol as LDL cholesterol ≥100 mg/dL. Using Medicare claims-based variables for diagnoses, procedures, and healthcare utilization, we developed algorithms for high CHD event risk and uncontrolled LDL cholesterol. Results REGARDS data indicated that 49% of participants were at high risk for CHD events. A claims-based algorithm identified high risk for CHD events with a positive predictive value of 87% (95% CI: 85%, 88%), sensitivity of 69% (95% CI: 67%, 70%), and specificity of 90% (95% CI: 89%, 91%). Among statin users at high risk for CHD events, 30% had LDL cholesterol ≥100 mg/dL. A claims-based algorithm identified LDL cholesterol ≥100 mg/dL with a positive predictive value of 43% (95% CI: 38%, 49%), sensitivity of 19% (95% CI: 15%, 22%), and specificity of 89% (95% CI: 86%, 90%). Conclusions Although the sensitivity was low, the high positive predictive value of our algorithm for high risk for CHD events supports the use of claims to identify Medicare
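The three reported figures of merit come straight from the 2x2 confusion matrix of the claims-based algorithm against the REGARDS reference; a small helper makes the definitions explicit:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Positive predictive value, sensitivity, and specificity
    from confusion-matrix counts (tp/fp/fn/tn)."""
    return {
        "ppv": tp / (tp + fp),           # P(truly high risk | flagged)
        "sensitivity": tp / (tp + fn),   # P(flagged | truly high risk)
        "specificity": tn / (tn + fp),   # P(not flagged | not high risk)
    }
```

For example, hypothetical counts of TP=870, FP=130, FN=390, TN=1170 (chosen only for illustration) give PPV 0.87, sensitivity about 0.69, and specificity 0.90, close to the operating point reported above.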
Yun, Yejin Esther; Cotton, Cecilia A; Edginton, Andrea N
2014-02-01
Physiologically based pharmacokinetic (PBPK) modeling is a tool used in drug discovery and human health risk assessment. PBPK models are mathematical representations of the anatomy, physiology and biochemistry of an organism and are used to predict a drug's pharmacokinetics in various situations. Tissue to plasma partition coefficients (Kp), key PBPK model parameters, define the steady-state concentration differential between tissue and plasma and are used to predict the volume of distribution. The experimental determination of these parameters once limited the development of PBPK models; however, in silico prediction methods were introduced to overcome this issue. The developed algorithms vary in input parameters and prediction accuracy, and none are considered standard, warranting further research. In this study, a novel decision-tree-based Kp prediction method was developed using six previously published algorithms. The aim of the developed classifier was to identify the most accurate tissue-specific Kp prediction algorithm for a new drug. A dataset consisting of 122 drugs was used to train the classifier and identify the most accurate Kp prediction algorithm for a certain physicochemical space. Three versions of tissue-specific classifiers were developed and were dependent on the necessary inputs. The use of the classifier resulted in a better prediction accuracy than that of any single Kp prediction algorithm for all tissues, the current mode of use in PBPK model building. Because built-in estimation equations for those input parameters are not necessarily available, this Kp prediction tool will provide Kp prediction when only limited input parameters are available. The presented innovative method will improve tissue distribution prediction accuracy, thus enhancing the confidence in PBPK modeling outputs.
Kim, Chang Kug; Kikuchi, Shoshi; Hahn, Jang Ho; Park, Soo Chul; Kim, Yong Hwan; Lee, Byun Woo
2010-01-01
This study identifies 2,617 candidate genes related to anthocyanin biosynthesis in rice using microarray analysis and a newly developed maximum boundary range algorithm. Three seed developmental stages were examined in a white cultivar and two black Dissociation insertion mutants. The resultant 235 transcription factor genes found to be associated with anthocyanin were classified into nine groups. We compared these 235 genes, identified by transcription factor analysis, with 593 genes from clusters of COGs related to anthocyanin functions. A total of 32 genes were found to be commonly expressed. Among these, 9 unknown and hypothetical genes were revealed to be expressed at each developmental stage and were verified by RT-PCR. These genes most likely play regulatory roles in either anthocyanin production or metabolism during flavonoid biosynthesis. While these genes require further validation, our results underline the potential usefulness of the newly developed algorithm. PMID:21079756
Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms
Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg
2013-01-01
Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387
Micro-channel-based high specific power lithium target
NASA Astrophysics Data System (ADS)
Mastinu, P.; Martın-Hernández, G.; Praena, J.; Gramegna, F.; Prete, G.; Agostini, P.; Aiello, A.; Phoenix, B.
2016-11-01
A micro-channel-based heat sink has been produced and tested. The device has been developed to be used as a lithium target for the LENOS (Legnaro Neutron Source) facility and for the production of radioisotopes. Nevertheless, applications of such a device span many areas: cooling of electronic devices, diode laser arrays, automotive applications, etc. The target has been tested using a proton beam of 2.8 MeV energy delivering total power shots from 100 W to 1500 W with beam spots varying from 5 mm^2 to 19 mm^2. Since the target has been designed to be used with a thin deposit of lithium, and since lithium is a low-melting-point material, we have measured that, for such an application, a specific power of about 3 kW/cm^2 can be delivered to the target while keeping the maximum surface temperature below 150 °C.
NASA Astrophysics Data System (ADS)
Long, Tang; Hu, Wang; Yong, Cai; Lichen, Mao; Guangyao, Li
2011-08-01
Springback is related to multiple factors in the process of metal forming. In order to construct an accurate metamodel between technical parameters and springback, a general set of quantitative model assessment and analysis tools, termed high dimensional model representation (HDMR), is applied to building the metamodel. A genetic algorithm is also integrated for optimization based on the metamodel. Compared with widely used metamodeling techniques, the most remarkable advantage of this method is its capacity to dramatically reduce the sampling effort for learning the input-output behavior, from exponential growth to polynomial level. In this work, the blank holding forces (BHFs) and their corresponding key times are the design variables. The final springback is well controlled by the HDMR-based metamodeling technique.
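A first-order cut-HDMR surrogate, one common concrete form of HDMR, shows where the sampling saving comes from: component functions are tabulated along lines through a cut point, so the cost grows linearly rather than exponentially with dimension (a generic sketch, not the authors' springback metamodel):

```python
import numpy as np

def cut_hdmr_first_order(f, cut_point, grids):
    """First-order cut-HDMR: f(x) ~ f0 + sum_i f_i(x_i), where
    f_i(x_i) = f(c with component i set to x_i) - f(c).
    Sampling cost: 1 + sum of grid sizes, i.e. linear in dimension."""
    c = np.asarray(cut_point, dtype=float)
    f0 = f(c)
    tables = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = c.copy()
            x[i] = xi
            vals.append(f(x) - f0)     # tabulate f_i along the i-th line
        tables.append((np.asarray(grid, dtype=float), np.asarray(vals)))

    def surrogate(x):
        s = f0
        for i, (grid, vals) in enumerate(tables):
            s += np.interp(x[i], grid, vals)   # 1-D interpolation per input
        return s

    return surrogate
```

The first-order form is exact for additive functions; interaction effects require the higher-order HDMR terms.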
Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report
Lin, Freddie
1999-06-01
In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.
NASA Astrophysics Data System (ADS)
Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey
2017-02-01
Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions were included. Lesion measurements were divided into training cohort (n = 518) and testing cohort (n = 127) according to the measurement time. Results: The area under the receiver operating characteristic curve (ROC) improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.
High Specific Energy Pulsed Electric Discharge Laser Research.
1975-12-01
drop out excess water, filtered, dried, filtered again, and then pumped up to the storage bottle pressure (Fig. 47). At the exit of the high-pressure pump, an oil filter was used to remove any oil that may have been introduced by the compressor. Bottles were pumped up to 2000 psig.
Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel
2014-05-01
Terrestrial laser scanning (TLS) is becoming a common tool in Geosciences, with applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters interact with scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error of a single laser pulse by modelling the influence of range and incidence angle on single point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and point of view on Iterative Closest Point (ICP) alignment and on deformation tracking algorithms applied to the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes on different environments
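The single-pulse error model described in the first step — accuracy degrading with range and incidence angle — can be sketched with a simple synthetic scan. This is an illustrative stand-in for the simulator, not its actual error model: the function name, the linear range term, the 1/cos(incidence) factor, and the numeric values of `sigma0` and `k_range` are all assumptions.

```python
import numpy as np

def simulate_scan(points, normals, scanner_pos, sigma0=0.005, k_range=1e-5, seed=0):
    """Add range noise along each laser beam to an ideal point cloud.

    Noise grows linearly with range and with 1/cos(incidence angle);
    sigma0 and k_range are illustrative values in metres.
    """
    rng = np.random.default_rng(seed)
    rays = points - scanner_pos
    ranges = np.linalg.norm(rays, axis=1)
    dirs = rays / ranges[:, None]
    # incidence angle between the beam and the local surface normal
    cos_inc = np.abs(np.sum(dirs * normals, axis=1)).clip(1e-3, 1.0)
    sigma = (sigma0 + k_range * ranges) / cos_inc
    noise = rng.normal(0.0, sigma)          # error acts along the beam
    return points + dirs * noise[:, None]

# Flat wall at z = 10 m scanned from the origin
grid = np.linspace(-1.0, 1.0, 20)
xx, yy = np.meshgrid(grid, grid)
wall = np.column_stack([xx.ravel(), yy.ravel(), np.full(xx.size, 10.0)])
wall_normals = np.tile([0.0, 0.0, 1.0], (wall.shape[0], 1))
noisy = simulate_scan(wall, wall_normals, scanner_pos=np.zeros(3))
print(np.std(noisy[:, 2] - wall[:, 2]))
```

Feeding such clouds to ICP or change detection then lets one measure how the recovered alignment degrades as these instrumental parameters are varied.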
Omelyan, I P; Mryglod, I M; Folk, R
2002-08-01
A consequent approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially with increasing order of the generated schemes. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes allow, at considerably smaller computational cost, unphysical deviations in the total energy to be reduced by up to a factor of 100 000 with respect to those of the standard fourth-order-based iteration approach.
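The paper's force-gradient decompositions are not reproduced here, but the underlying composition idea — raising the order of a symplectic base integrator by composing substeps — can be illustrated with the standard Yoshida triple-jump, which builds a fourth-order scheme from three velocity-Verlet substeps. This is a generic sketch under that assumption, not the authors' scheme; the harmonic-oscillator test problem and step size are arbitrary choices.

```python
import math

def verlet(x, v, h, force):
    """One step of the second-order velocity-Verlet integrator."""
    v = v + 0.5 * h * force(x)
    x = x + h * v
    v = v + 0.5 * h * force(x)
    return x, v

# Yoshida triple-jump: a 4th-order symplectic scheme from three
# Verlet substeps with weights w1, w0, w1 (w0 is negative).
W1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
W0 = 1.0 - 2.0 * W1

def yoshida4(x, v, h, force):
    for w in (W1, W0, W1):
        x, v = verlet(x, v, w * h, force)
    return x, v

# Harmonic oscillator: force(x) = -x, energy E = (v^2 + x^2) / 2
force = lambda x: -x
x, v, h = 1.0, 0.0, 0.05
e0 = 0.5 * (v * v + x * x)
for _ in range(2000):
    x, v = yoshida4(x, v, h, force)
print(abs(0.5 * (v * v + x * x) - e0))
```

Because the scheme is symplectic, the energy error stays bounded (here at the O(h^4) level) rather than drifting, which is the property the higher-order compositions in the abstract push to much smaller magnitudes.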
A binned clustering algorithm to detect high-Z material using cosmic muons
NASA Astrophysics Data System (ADS)
Thomay, C.; Velthuis, J. J.; Baesso, P.; Cussans, D.; Morris, P. A. W.; Steer, C.; Burns, J.; Quillin, S.; Stapleton, M.
2013-10-01
We present a novel approach to the detection of special nuclear material using cosmic rays. Muon Scattering Tomography (MST) is a method that uses cosmic muons to scan cargo containers and vehicles for special nuclear material. Cosmic muons are abundant, highly penetrating, not harmful to organic tissue, cannot be screened against, and can easily be detected, which makes them highly suited to cargo scanning. Muons undergo multiple Coulomb scattering when passing through material, and the amount of scattering is roughly proportional to the square of the atomic number Z of the material. By reconstructing incoming and outgoing tracks, we can obtain variables to identify high-Z material. In a real-life application, this has to happen on a timescale of 1 min and thus with small numbers of muons. We have built a detector system using resistive plate chambers (RPCs): 12 layers of RPCs allow for the readout of 6 x and 6 y positions, by which we can reconstruct incoming and outgoing tracks. In this work we detail the performance of an algorithm by which we separate high-Z targets from low-Z background, both for real data from our prototype setup and for MC simulation of a cargo container-sized setup. (c) British Crown Owned Copyright 2013/AWE
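The basic observable behind this method — the angle between the reconstructed incoming and outgoing tracks, whose spread grows with the Z of the traversed material — can be sketched as follows. This toy is not the paper's binned clustering algorithm; the RMS widths used for the low-Z and high-Z samples are invented for illustration.

```python
import numpy as np

def scattering_angles(dirs_in, dirs_out):
    """Angle (rad) between incoming and outgoing track directions."""
    d_in = dirs_in / np.linalg.norm(dirs_in, axis=1, keepdims=True)
    d_out = dirs_out / np.linalg.norm(dirs_out, axis=1, keepdims=True)
    cosang = np.clip(np.sum(d_in * d_out, axis=1), -1.0, 1.0)
    return np.arccos(cosang)

# Toy discriminator: high-Z material scatters more, so the RMS of
# the scattering-angle distribution is larger.
rng = np.random.default_rng(1)
n = 2000
d_in = np.tile([0.0, 0.0, 1.0], (n, 1))
low_z = d_in + rng.normal(0.0, 0.002, size=(n, 3))   # ~2 mrad per axis
high_z = d_in + rng.normal(0.0, 0.020, size=(n, 3))  # ~20 mrad per axis
rms_low = np.sqrt(np.mean(scattering_angles(d_in, low_z) ** 2))
rms_high = np.sqrt(np.mean(scattering_angles(d_in, high_z) ** 2))
print(rms_low, rms_high)
```

With only ~1 min of muons per voxel, the practical difficulty the paper addresses is making this separation reliable at small sample sizes.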
Range-Specific High-resolution Mesoscale Model Setup
NASA Technical Reports Server (NTRS)
Watson, Leela R.
2013-01-01
This report summarizes the findings from an AMU task to determine the model configuration that best predicts winds, precipitation, and temperature for operational use at the ER and WFF. The AMU ran test cases in the warm and cool seasons at the ER and for the spring and fall seasons at WFF. For both the ER and WFF, the ARW core outperformed the NMM core. Results for the ER indicate that the Lin microphysical scheme combined with the YSU PBL scheme is the optimal model configuration for the ER. It consistently produced the best surface and upper air forecasts, while performing fairly well for the precipitation forecasts. Both the Ferrier and Lin microphysical schemes in combination with the YSU PBL scheme performed well for WFF in the spring and fall seasons. The AMU has been tasked with a follow-on modeling effort to recommend a local DA and numerical forecast model design optimized for both the ER and WFF to support space launch activities. The AMU will determine the best software and type of assimilation to use, as well as the best grid resolution for the initialization, based on the spatial and temporal availability of data and the wall clock run-time of the initialization. The AMU will transition from the WRF EMS to NU-WRF, a NASA-specific version of the WRF that takes advantage of unique NASA software and datasets.
Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.
2000-01-01
We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
Leipsic, Jonathon; Labounty, Troy M; Hague, Cameron J; Mancini, G B John; O'Brien, Julie M; Wood, David A; Taylor, Carolyn M; Cury, Ricardo C; Earls, James P; Heilbron, Brett G; Ajlan, Amr M; Feuchtner, Gudrun; Min, James K
2012-01-01
Although coronary CT angiography (CTA) shows high diagnostic performance for detection and exclusion of obstructive coronary artery disease, limited temporal resolution of current-generation CT scanners may allow for motion artifacts, which may result in nonevaluable coronary segments. We assessed a novel vendor-specific motion-correction algorithm for its effect on image quality and diagnostic accuracy. Thirty-six consecutive patients with severe aortic stenosis underwent coronary CTA without rate control and invasive coronary angiography as part of an evaluation for transcatheter aortic valve replacement. We compared image quality and diagnostic accuracy between standard (STD) and motion-corrected (MC) reconstructions. Coronary CTAs were interpreted in an intent-to-diagnose fashion by 2 experienced readers; a third reader provided consensus for interpretability and obstructive coronary stenosis (≥50% stenosis). All studies were interpreted with and without motion correction using both 45% and 75% of the R-R interval for reconstructions. Quantitative coronary angiography was performed by a core laboratory. Mean age was 83.0 ± 6.4 years; 47% were men. Overall image quality (graded 1-4) was higher with the use of MC versus STD reconstructions (2.9 ± 0.9 vs 2.4 ± 1.0; P < 0.001). MC reconstructions showed higher interpretability on a per-segment [97% (392/406) vs 88% (357/406); P < 0.001] and per-artery [96% (128/134) vs 84% (112/134); P = 0.002] basis, with no difference on a per-patient level [92% (33/36) vs 89% (32/36); P = 1.0]. Diagnostic accuracy by MC reconstruction was higher than STD reconstruction on a per-segment [91% (370/406) vs 78% (317/406); P < 0.001] and per-artery level [86% (115/134) vs 72% (96/134); P = 0.007] basis, with no significant difference on a per-patient level [86% (31/36) vs 69% (25/36); P = 0.16]. The use of a novel MC algorithm improves image quality, interpretability, and diagnostic accuracy in persons undergoing coronary CTA
Chen, Qiang; Chen, Yun-hao; Jiang, Wei-guo
2015-06-01
High spatial resolution remotely sensed imagery contains abundant detailed information about the earth's surface, and multi-temporal change detection on such imagery can reveal the variations of geographical units. For high spatial resolution imagery, traditional remote sensing change detection algorithms have obvious defects. In this paper, borrowing from the object-based image analysis idea, we propose a semi-automatic threshold selection algorithm named OB-HMAD (object-based-hybrid-MAD), built on object-based image analysis and the multivariate alteration detection (MAD) algorithm, which brings the spectral features of remotely sensed imagery into object-based change detection. Additionally, OB-HMAD has been compared with other threshold segmentation algorithms in a change detection experiment. Firstly, we obtained image objects by multi-resolution segmentation. Secondly, we derived the object-based difference image using MAD and minimum noise fraction (MNF) rotation to improve the SNR of the image objects. Then, the changed objects or areas were classified using the histogram curvature analysis (HCA) method for semi-automatic threshold selection, which determines the threshold by calculating the maximum curvature of the histogram, so the HCA algorithm offers better automation than other threshold segmentation algorithms. Finally, the change detection results were validated using a confusion matrix with field sample data. Worldview-2 imagery of 2012 and 2013 in a case study of Beijing was used to validate the proposed OB-HMAD algorithm. The experimental results indicated that OB-HMAD, which integrates multi-channel spectral information, can be effectively used in multi-temporal high resolution remotely sensed imagery change detection, and that it has basically solved the "salt and pepper" problem which always exists in pixel-based change detection
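The HCA thresholding step — picking the threshold at the point of maximum curvature of the difference-image histogram — can be sketched as below. This is a simplified reading of the method, not the authors' implementation: the curvature formula applied to a normalised histogram, the bin count, and the synthetic "difference image" are all assumptions.

```python
import numpy as np

def hca_threshold(values, bins=64):
    """Threshold at the maximum-curvature point of the histogram.

    Curvature kappa = h'' / (1 + h'^2)^(3/2), evaluated on the
    normalised histogram of the difference magnitudes.
    """
    hist, edges = np.histogram(values, bins=bins, density=True)
    h = hist / hist.max()            # normalise so slopes are comparable
    d1 = np.gradient(h)
    d2 = np.gradient(d1)
    kappa = d2 / (1.0 + d1 ** 2) ** 1.5
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(kappa)]

# Toy "difference image": a large no-change population near 0 plus a
# small changed population near 4; the threshold should land between.
rng = np.random.default_rng(2)
diff = np.concatenate([rng.normal(0.0, 0.5, 9000), rng.normal(4.0, 0.5, 500)])
print(hca_threshold(np.abs(diff)))
```

Pixels (or objects) whose difference magnitude exceeds the returned threshold would then be labelled as changed.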
High Specific Power Motors in LN2 and LH2
NASA Technical Reports Server (NTRS)
Brown, Gerald V.; Jansen, Ralph H.; Trudell, Jeffrey J.
2007-01-01
A switched reluctance motor has been operated in liquid nitrogen (LN2) with a power density as high as that reported for any motor or generator. The high performance stems from the low resistivity of Cu at LN2 temperature and from the geometry of the windings, the combination of which permits steady-state rms current density up to 7000 A/cm2, about 10 times that possible in coils cooled by natural convection at room temperature. The Joule heating in the coils is conducted to the end turns for rejection to the LN2 bath. Minimal heat rejection occurs in the motor slots, preserving that region for conductor. In the end turns, the conductor layers are spaced to form a heat-exchanger-like structure that permits nucleate boiling over a large surface area. Although tests were performed in LN2 for convenience, this motor was designed as a prototype for use with liquid hydrogen (LH2) as the coolant. End-cooled coils would perform even better in LH2 because of further increases in copper electrical and thermal conductivities. Thermal analyses comparing LN2 and LH2 cooling are presented verifying that end-cooled coils in LH2 could be either much longer or could operate at higher current density without thermal runaway than in LN2.
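The abstract's current-density claim can be sanity-checked with the volumetric Joule heating q = ρJ². This is a back-of-envelope sketch: the copper resistivity values are textbook approximations, not figures from the report.

```python
# Volumetric Joule heating q = rho * J^2 in the copper windings.
# Resistivity values are textbook approximations (assumptions):
RHO_CU_293K = 1.7e-8   # ohm*m at room temperature
RHO_CU_77K = 2.0e-9    # ohm*m at LN2 temperature (~1/8 of room temp)

J = 7000e4             # 7000 A/cm^2 expressed in A/m^2

q_rt = RHO_CU_293K * J ** 2    # W/m^3 if run warm
q_ln2 = RHO_CU_77K * J ** 2    # W/m^3 at 77 K
print(f"{q_ln2 / 1e6:.1f} MW/m^3 at 77 K vs {q_rt / 1e6:.1f} MW/m^3 at 293 K")
```

Even at 77 K the heat load is on the order of 10 MW/m³, which is why the design conducts the heat to the end turns for nucleate boiling rather than relying on convection in the slots.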
Kays, David W.; Islam, Saleem; Larson, Shawn D.; Perkins, Joy; Talbert, James L.
2015-01-01
Objective To assess the impact of varying approaches to CDH repair timing on survival and need for ECMO when controlled for anatomic and physiologic disease severity in a large consecutive series of CDH patients. Summary Background Data Our publication of 60 consecutive CDH patients in 1999 showed that survival is significantly improved by limiting lung inflation pressures and eliminating hyperventilation. Methods We retrospectively reviewed 268 consecutive CDH patients, combining 208 new patients with the 60 previously reported. Management and ventilator strategy were highly consistent throughout. Varying approaches to surgical timing were applied as the series matured. Results Patients with anatomically less-severe left liver-down CDH had significantly increased need for ECMO if repaired in the first 48 hours, while patients with more-severe left liver-up CDH survived at a higher rate when repair was performed before ECMO. Overall survival of 268 patients was 78%. For those without lethal associated anomalies, survival was 88%. Of these, 99% of left liver-down CDH survived, 91% of right CDH survived, and 76% of left liver-up CDH survived. Conclusions This study shows that patients with anatomically less severe CDH benefit from delayed surgery while patients with anatomically more severe CDH may benefit from a more aggressive surgical approach. These findings show that patients respond differently across the CDH anatomic severity spectrum, and lay the foundation for the development of risk specific treatment protocols for patients with CDH. PMID:23989050
NASA Astrophysics Data System (ADS)
Jun, Xie Cheng; Su, Yan; Wei, Zhang
2006-08-01
In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and research on image compression with the CDF (2,2) wavelet lifting scheme is presented. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG and RAR, and slightly better than (or equal to) the well-known SPIHT. The lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7% and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%; the decoder is about 12.3 times faster, raising its time efficiency by about 148%. This algorithm, instead of using the largest number of wavelet transform levels, achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
High Performance Computing - Power Application Programming Interface Specification.
Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David
2014-08-01
Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
Preparation of tritium-labeled tetrahydropteroylpolyglutamates of high specific radioactivity
Paquin, J.; Baugh, C.M.; MacKenzie, R.E.
1985-04-01
Tritium-labeled (6S)-tetrahydropteroylpolyglutamates of high radiospecific activity were prepared from the corresponding pteroylpolyglutamates. Malic enzyme and D,L-(2-³H)malate were used as a generating system to produce (4A-³H)NADPH, which was coupled to the dihydrofolate reductase-catalyzed reduction of chemically prepared dihydropteroylpolyglutamate derivatives. Passage of the reaction mixtures through a column of immobilized boronate effectively removed NADPH, and the tetrahydropteroylpolyglutamates were subsequently purified by chromatography on DEAE-cellulose. Overall yields of the (6S)-tetrahydro derivatives were 18-48% and the radiospecific activities were 3-4.5 mCi·µmol⁻¹.
Treier, Katrin; Berg, Annette; Diederich, Patrick; Lang, Katharina; Osberghaus, Anna; Dismer, Florian; Hubbuch, Jürgen
2012-10-01
Compared to traditional strategies, application of high-throughput experiments combined with optimization methods can potentially speed up downstream process development and increase our understanding of processes. In contrast to the method of Design of Experiments in combination with response surface analysis (RSA), optimization approaches like genetic algorithms (GAs) can be applied to identify optimal parameter settings in multidimensional optimization tasks. In this article, the performance of a GA was investigated using parameter settings applicable to high-throughput downstream process development. The influence of population size, the design of the initial generation, and selection pressure on the optimization results was studied. To mimic typical experimental data, four mathematical functions were used for an in silico evaluation. The influence of GA parameters was minor on landscapes with only one optimum. On landscapes with several optima, parameters had a significant impact on GA performance and success in finding the global optimum. Premature convergence increased as the number of parameters and noise increased. RSA was shown to be comparable or superior for simple systems and low to moderate noise. For complex systems or high noise levels, RSA failed, while GA optimization represented a robust tool for process optimization. Finally, the effect of different objective functions is shown exemplarily for a refolding optimization of lysozyme. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
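The GA parameters studied above — population size, initial generation, and selection pressure — map directly onto knobs of a minimal real-coded GA. The sketch below is a generic illustration, not the authors' implementation: the tournament size stands in for selection pressure, and the operators, rates, and multimodal test function are all assumptions.

```python
import math
import random

def ga_maximize(f, bounds, pop_size=30, gens=60, tourn=3, mut=0.2, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. `tourn` controls the selection pressure."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        scored = [(f(x), x) for x in pop]
        def pick():
            return max(rng.sample(scored, tourn))[1]
        nxt = [max(scored)[1]]                 # elitism: keep the best
        while len(nxt) < pop_size:
            child = 0.5 * (pick() + pick())    # blend crossover
            if rng.random() < mut:
                child += rng.gauss(0.0, 0.1 * (hi - lo))
            nxt.append(min(max(child, lo), hi))
        pop = nxt
    return max((f(x), x) for x in pop)

# Multimodal test landscape with its global maximum at x = 0
best_val, best_x = ga_maximize(lambda x: -x * x + 0.5 * math.cos(5 * x), (-4, 4))
print(best_val, best_x)
```

Raising `tourn` sharpens selection (faster convergence, more risk of premature convergence on multimodal landscapes), which is exactly the trade-off the article quantifies.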
NASA Astrophysics Data System (ADS)
Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong
2016-09-01
With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased enormously. Meanwhile, high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction from high-resolution remote sensing imagery has the advantages of high resolution and wide coverage. This is of great guiding significance for urban planning, transportation management, travel route choice and so on. Firstly, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, in order to obtain the optimal threshold for image segmentation, histogram equalization and linear enhancement technologies were applied to the preprocessing results. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. Then, the above two processing results were combined. Finally, geometric characteristics were used to complete the road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright vehicle extraction and dark vehicle extraction. Eventually, the extraction results of the two kinds of vehicles were combined to get the final results. The experimental results demonstrated that the proposed algorithm achieves high precision in vehicle information extraction for different high resolution remote sensing images. Among these results, the average false detection rate was about 5.36%, the average residual rate was about 13.60% and the average accuracy was approximately 91.26%.
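The NDVI/NDWI suppression step described above can be sketched as a per-pixel mask. The index formulas are the standard definitions; the function name, threshold values, and toy reflectance values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def suppress_vegetation_water(nir, red, green, ndvi_max=0.3, ndwi_max=0.0):
    """Mask of candidate road/vehicle pixels: suppress vegetation
    (high NDVI) and water (high NDWI). Thresholds are illustrative."""
    eps = 1e-9                                  # avoid division by zero
    ndvi = (nir - red) / (nir + red + eps)      # vegetation index
    ndwi = (green - nir) / (green + nir + eps)  # water index (McFeeters form)
    return (ndvi < ndvi_max) & (ndwi < ndwi_max)

# 2x2 toy scene of band reflectances: [road, vegetation; water, road]
nir = np.array([[0.30, 0.60], [0.05, 0.28]])
red = np.array([[0.25, 0.10], [0.04, 0.24]])
green = np.array([[0.26, 0.15], [0.10, 0.25]])
mask = suppress_vegetation_water(nir, red, green)
print(mask)
```

Only the two "road" pixels survive the mask; the surviving area would then be intersected with the extracted road vector before bright/dark vehicle extraction.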
NASA Astrophysics Data System (ADS)
Zheng, Jun-Xi; Zhang, Ping; Li, Fang; Du, Guang-Long
2016-09-01
Although the sequence-dependent setup times flowshop problem with the total weighted tardiness minimization objective exists widely in industry, work on the problem has been scant in the existing literature. To the authors' best knowledge, the NEH-EWDD heuristic and the Iterated Greedy (IG) algorithm with descent local search have been regarded as the high-performing heuristic and the state-of-the-art algorithm for the problem, both of which are based on insertion search. In this article, firstly, an efficient backtracking algorithm and a novel heuristic (HPIS) are presented for insertion search. Accordingly, two heuristics are introduced: one is NEH-EWDD with HPIS for insertion search, and the other combines NEH-EWDD with both proposed methods. Furthermore, the authors improve the IG algorithm with the proposed methods. Finally, experimental results show that both the proposed heuristics and the improved IG (IG*) significantly outperform the original ones.
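The insertion move at the core of both NEH-style heuristics and IG — evaluate every insertion position for a removed job and keep the best — can be sketched on a single-machine simplification with sequence-dependent setups. This is a generic illustration, not the paper's HPIS or backtracking algorithm, and the job data are invented.

```python
def total_weighted_tardiness(seq, p, d, w, setup):
    """Evaluate a job sequence with sequence-dependent setup times
    (setup[i][j] is the setup incurred when job i precedes job j)."""
    t, prev, twt = 0, None, 0
    for j in seq:
        t += (setup[prev][j] if prev is not None else 0) + p[j]
        twt += w[j] * max(0, t - d[j])
        prev = j
    return twt

def best_insertion(seq, job, p, d, w, setup):
    """Try every insertion position for `job` and keep the cheapest."""
    best = None
    for pos in range(len(seq) + 1):
        cand = seq[:pos] + [job] + seq[pos:]
        cost = total_weighted_tardiness(cand, p, d, w, setup)
        if best is None or cost < best[0]:
            best = (cost, cand)
    return best

p = [4, 3, 5]                          # processing times
d = [5, 6, 11]                         # due dates
w = [2, 1, 3]                          # tardiness weights
setup = [[0, 1, 2], [1, 0, 1], [2, 2, 0]]
cost, seq = best_insertion([0, 2], 1, p, d, w, setup)
print(cost, seq)
```

IG repeats this move after destroying part of the incumbent sequence; the paper's contribution is making the per-insertion evaluation cheaper than this naive full re-evaluation.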
NASA Astrophysics Data System (ADS)
Liu, Rengli; Wang, Yanfei
2016-04-01
An extended nonlinear chirp scaling (NLCS) algorithm is proposed to process data of highly squinted, high-resolution, missile-borne synthetic aperture radar (SAR) diving with a constant acceleration. Due to the complex diving movement, the traditional signal model and focusing algorithm are no longer suited for missile-borne SAR signal processing. Therefore, an accurate range equation is presented, named as the equivalent hyperbolic range model (EHRM), which is more accurate and concise compared with the conventional fourth-order polynomial range equation. Based on the EHRM, a two-dimensional point target reference spectrum is derived, and an extended NLCS algorithm for missile-borne SAR image formation is developed. In the algorithm, a linear range walk correction is used to significantly remove the range-azimuth cross coupling, and an azimuth NLCS processing is adopted to solve the azimuth space variant focusing problem. Moreover, the operations of the proposed algorithm are carried out without any interpolation, thus having small computational loads. Finally, the simulation results and real-data processing results validate the proposed focusing algorithm.
Wang, Xueyi
2011-01-01
The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10⁶ records and 10⁴ dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces. PMID:22247818
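The two stages can be sketched for the 1-NN case: cluster the training set with k-means, then visit clusters nearest-first while using the triangle inequality to skip clusters that cannot contain a closer point. This is a simplified sketch of the idea, not the authors' code; the cluster-radius pruning bound and all parameter values are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns cluster centers and labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers, labels

def nn_query(q, X, centers, labels):
    """Exact 1-NN: visit clusters nearest-first; by the triangle
    inequality, d(q, x) >= d(q, c) - radius(c) for any x in cluster c,
    so a whole cluster can be skipped once that bound exceeds best_d."""
    d_qc = np.linalg.norm(centers - q, axis=1)
    radii = np.array([np.max(np.linalg.norm(X[labels == c] - centers[c], axis=1))
                      if np.any(labels == c) else 0.0 for c in range(len(centers))])
    best_d, best_i = np.inf, -1
    for c in np.argsort(d_qc):
        if d_qc[c] - radii[c] >= best_d:      # no member can be closer
            continue
        idx = np.where(labels == c)[0]
        dists = np.linalg.norm(X[idx] - q, axis=1)
        j = np.argmin(dists)
        if dists[j] < best_d:
            best_d, best_i = dists[j], idx[j]
    return best_i, best_d

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))
centers, labels = kmeans(X, 10)
q = rng.normal(size=8)
i, dist = nn_query(q, X, centers, labels)
brute = np.argmin(np.linalg.norm(X - q, axis=1))
print(i == brute)
```

The pruning changes only how much work is done, not the answer, so the result always matches a brute-force scan.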
Unallocated Off-Specification Highly Enriched Uranium: Recommendations for Disposition
Bridges, D. N.; Boeke, S. G.; Tousley, D. R.; Bickford, W.; Goergen, C.; Williams, W.; Hassler, M.; Nelson, T.; Keck, R.; Arbital, J.
2002-02-27
The U.S. Department of Energy (DOE) has made significant progress with regard to disposition planning for 174 metric tons (MTU) of surplus Highly Enriched Uranium (HEU). Approximately 55 MTU of this 174 MTU are ''off-spec'' HEU. (''Off-spec'' signifies that the isotopic or chemical content of the material does not meet the American Society for Testing and Materials standards for commercial nuclear reactor fuel.) Approximately 33 of the 55 MTU have been allocated to off-spec commercial reactor fuel per an Interagency Agreement between DOE and the Tennessee Valley Authority (1). To determine disposition plans for the remaining approximately 22 MTU, the DOE National Nuclear Security Administration (NNSA) Office of Fissile Materials Disposition (OFMD) and the DOE Office of Environmental Management (EM) co-sponsored this technical study. This paper represents a synopsis of the formal technical report (NNSA/NN-0014). The approximately 22 MTU of off-spec HEU inventory in this study were divided into two main groupings: one grouping with plutonium (Pu) contamination and one grouping without plutonium. This study identified and evaluated 26 potential paths for the disposition of this HEU using proven decision analysis tools. This selection process resulted in recommended and alternative disposition paths for each group of HEU. The evaluation and selection of these paths considered criteria such as technical maturity, programmatic issues, cost, schedule, and environment, safety and health compliance. The primary recommendations from the analysis are comprised of 7 different disposition paths. The study recommendations will serve as a technical basis for subsequent programmatic decisions as disposition of this HEU moves into the implementation phase.
Roth, Andreas; Reischl, Udo; Streubel, Anna; Naumann, Ludmila; Kroppenstedt, Reiner M.; Habicht, Marion; Fischer, Marga; Mauch, Harald
2000-01-01
A novel genus-specific PCR for mycobacteria with simple identification to the species level by restriction fragment length polymorphism (RFLP) was established using the 16S-23S ribosomal RNA gene (rDNA) spacer as a target. Panspecificity of primers was demonstrated on the genus level by testing 811 bacterial strains (122 species in 37 genera from 286 reference strains and 525 clinical isolates). All mycobacterial isolates (678 strains among 48 defined species and 5 indeterminate taxons) were amplified by the new primers. Among nonmycobacterial isolates, only Gordonia terrae was amplified. The RFLP scheme devised involves estimation of variable PCR product sizes together with HaeIII and CfoI restriction analysis. It yielded 58 HaeIII patterns, of which 49 (84%) were unique on the species level. Hence, HaeIII digestion together with CfoI results was sufficient for correct identification of 39 of 54 mycobacterial taxons and one of three or four of seven RFLP genotypes found in Mycobacterium intracellulare and Mycobacterium kansasii, respectively. Following a clearly laid out diagnostic algorithm, the remaining unidentified organisms fell into five clusters of closely related species (i.e., the Mycobacterium avium complex or Mycobacterium chelonae-Mycobacterium abscessus) that were successfully separated using additional enzymes (TaqI, MspI, DdeI, or AvaII). Thus, next to slowly growing mycobacteria, all rapidly growing species studied, including M. abscessus, M. chelonae, Mycobacterium farcinogenes, Mycobacterium fortuitum, Mycobacterium peregrinum, and Mycobacterium senegalense (with a very high 16S rDNA sequence similarity) were correctly identified. A high intraspecies sequence stability and the good discriminative power of patterns indicate that this method is very suitable for rapid and cost-effective identification of a wide variety of mycobacterial species without the need for sequencing. Phylogenetically, spacer sequence data stand in good agreement with 16S r
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
NASA Astrophysics Data System (ADS)
Xiao, Jianyuan; Qin, Hong; Liu, Jian; He, Yang; Zhang, Ruili; Sun, Yajuan
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms on a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by composition. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems, namely nonlinear Landau damping and the electron Bernstein wave.
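The payoff of splitting a system into exactly soluble pieces whose composition remains symplectic can be illustrated on a toy problem far simpler than the Vlasov-Maxwell system (this is not the paper's PIC scheme): a harmonic oscillator H = p²/2 + q²/2, split into a drift and a kick, composed with second-order Strang splitting.

```python
import numpy as np

def drift(q, p, h):          # exact flow of H1 = p^2 / 2
    return q + h * p, p

def kick(q, p, h):           # exact flow of H2 = q^2 / 2
    return q, p - h * q

def strang_step(q, p, h):    # symplectic second-order composition
    q, p = drift(q, p, h / 2)
    q, p = kick(q, p, h)
    q, p = drift(q, p, h / 2)
    return q, p

q, p, h = 1.0, 0.0, 0.05
max_err = 0.0
for _ in range(50000):       # hundreds of oscillation periods
    q, p = strang_step(q, p, h)
    # energy error oscillates but never drifts secularly,
    # the hallmark of structure-preserving integrators
    max_err = max(max_err, abs(0.5 * (q * q + p * p) - 0.5))
```

For this scheme the energy error is bounded by roughly h²/8 for all time, which is why such methods suit the long-term simulations the abstract targets.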
Ratcliffe, Julie; Huynh, Elisabeth; Chen, Gang; Stevens, Katherine; Swait, Joffre; Brazier, John; Sawyer, Michael; Roberts, Rachel; Flynn, Terry
2016-05-01
In contrast to the recent proliferation of studies incorporating ordinal methods to generate health state values from adults, to date relatively few studies have utilised ordinal methods to generate health state values from adolescents. This paper reports on a study applying profile case best worst scaling methods to derive a new adolescent-specific scoring algorithm for the Child Health Utility 9D (CHU9D), a generic preference-based instrument that has been specifically designed for the estimation of quality adjusted life years for the economic evaluation of health care treatment and preventive programs targeted at young people. A survey was developed for administration in an on-line format in which consenting community based Australian adolescents aged 11-17 years (N = 1982) indicated the best and worst features of a series of 10 health states derived from the CHU9D descriptive system. The data were analyzed using latent class conditional logit models to estimate values (part worth utilities) for each level of the nine attributes relating to the CHU9D. A marginal utility matrix was then estimated to generate an adolescent-specific scoring algorithm on the full health = 1 and dead = 0 scale required for the calculation of QALYs. It was evident that different decision processes were being used in the best and worst choices. Whilst respondents appeared readily able to choose 'best' attribute levels for the CHU9D health states, a large amount of random variability and indeed different decision rules were evident for the choice of 'worst' attribute levels, to the extent that the best and worst data should not be pooled from the statistical perspective. The optimal adolescent-specific scoring algorithm was therefore derived using data obtained from the best choices only. The study provides important insights into the use of profile case best worst scaling methods to generate health state values with adolescent populations.
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Peleg, Eran; Herblum, Ryan; Beek, Maarten; Joskowicz, Leo; Liebergall, Meir; Mosheiff, Rami; Whyne, Cari
2014-01-01
The reliability of patient-specific finite element (FE) modelling is dependent on the ability to provide repeatable analyses. Differences between inter-operator generated grids can produce variability in strain and stress readings at a desired location, which are magnified at the surface of the model as a result of partial volume edge effects (PVEEs). In this study, a new approach is introduced based on an in-house developed algorithm which adjusts the location of the model's surface nodes to a consistent predefined threshold Hounsfield unit value. Three cadaveric human femora specimens were CT scanned, and surface models were created after semi-automatic segmentation by three different experienced operators. A FE analysis was conducted for each model, with and without applying the surface-adjustment algorithm (a total of 18 models), implementing identical boundary conditions. Maximum principal strain and stress and spatial coordinates were probed at six equivalent surface nodes from the six generated models for each of the three specimens, at locations commonly utilised for experimental strain gauge measurement validation. A Wilcoxon signed-ranks test was conducted to determine inter-operator variability and the impact of the PVEE-adjustment algorithm. The average inter-operator difference in stress values was significantly reduced after applying the adjustment algorithm (before: 3.32 ± 4.35 MPa, after: 1.47 ± 1.77 MPa, p = 0.025). Strain values were found to be less sensitive to inter-operator variability (p = 0.286). In summary, the new approach presented in this study may provide a means to improve the repeatability of subject-specific FE models of bone obtained from CT data.
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
Genetic algorithm-based feature selection in high-resolution NMR spectra.
Cho, Hyun-Woo; Kim, Seoung Bum; Jeong, Myong K; Park, Youngja; Ziegler, Thomas R; Jones, Dean P
2008-10-01
High-resolution nuclear magnetic resonance (NMR) spectroscopy has provided a new means for detection and recognition of metabolic changes in biological systems in response to pathophysiological stimuli and to the intake of toxins or nutrition. To identify meaningful patterns from NMR spectra, various statistical pattern recognition methods have been applied to reduce their complexity and uncover implicit metabolic patterns. In this paper, we present a genetic algorithm (GA)-based feature selection method to determine major metabolite features that play a significant role in the discrimination of samples among different conditions in high-resolution NMR spectra. In addition, an orthogonal signal filter was employed as a preprocessor of the NMR spectra in order to remove any unwanted variation in the data that is unrelated to the discrimination of different conditions. The results of k-nearest neighbors and partial least squares discriminant analysis of experimental NMR spectra from human plasma showed the potential advantage of the features obtained from GA-based feature selection combined with an orthogonal signal filter.
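A GA selecting feature subsets against a classifier-based fitness can be sketched as follows. Everything here is illustrative (synthetic "spectra", a 1-NN leave-one-out fitness, made-up population sizes and rates); the paper's orthogonal signal filter preprocessing is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "spectra": 100 samples x 20 features, where only features
# 2, 7 and 11 carry class information (all sizes here are made up)
n, d, informative = 100, 20, [2, 7, 11]
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, informative] += 3.0 * y[:, None]

def fitness(mask):
    """Leave-one-out 1-NN accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    Z = X[:, mask]
    D = ((Z[:, None] - Z[None]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)
    return float((y[D.argmin(axis=1)] == y).mean())

pop = rng.integers(0, 2, (30, d)).astype(bool)   # binary chromosomes
best, best_fit = pop[0].copy(), -1.0
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    i = int(scores.argmax())
    if scores[i] > best_fit:                     # track the best mask seen
        best, best_fit = pop[i].copy(), float(scores[i])
    # tournament selection
    idx = [max(rng.choice(len(pop), 3, replace=False), key=lambda j: scores[j])
           for _ in range(len(pop))]
    parents = pop[idx]
    # uniform crossover against a reversed copy of the pool, then mutation
    cross = rng.random((len(pop), d)) < 0.5
    pop = np.where(cross, parents, parents[::-1])
    pop ^= rng.random((len(pop), d)) < 0.02      # bit-flip mutation
```

On this synthetic task the surviving masks concentrate on the informative features, which is the behaviour the abstract reports for metabolite peaks.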
An inverse kinematics algorithm for a highly redundant variable-geometry-truss manipulator
NASA Technical Reports Server (NTRS)
Naccarato, Frank; Hughes, Peter
1989-01-01
A new class of robotic arm consists of a periodic sequence of truss substructures, each of which has several variable-length members. Such variable-geometry-truss manipulators (VGTMs) are inherently highly redundant and promise a significant increase in dexterity over conventional anthropomorphic manipulators. This dexterity may be exploited for both obstacle avoidance and controlled deployment in complex workspaces. The inverse kinematics problem for such unorthodox manipulators, however, becomes complex because of the large number of degrees of freedom, and conventional solutions to the inverse kinematics problem become inefficient because of the high degree of redundancy. A solution to this problem is presented based on a spline-like reference curve for the manipulator's shape. Such an approach has a number of advantages: (1) direct, intuitive manipulation of shape; (2) reduced calculation time; and (3) direct control over the effective degree of redundancy of the manipulator. Furthermore, although the algorithm was developed primarily for variable-geometry-truss manipulators, it is general enough for application to a number of manipulator designs.
NASA Astrophysics Data System (ADS)
Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso
2017-09-01
This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
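The Bézier-net bound that makes this kind of limiting possible is elementary: Bernstein basis functions are nonnegative and sum to one, so a Bernstein-Bézier polynomial is everywhere a convex combination of its control coefficients, and bounding the coefficients bounds the function. A minimal numerical check of that fact (not the paper's FCT scheme; the coefficients are made up):

```python
import numpy as np
from math import comb

def bernstein_eval(c, x):
    """Evaluate sum_j c_j B_{j,p}(x) with the Bernstein basis on [0, 1]."""
    p = len(c) - 1
    return sum(c[j] * comb(p, j) * x**j * (1 - x)**(p - j) for j in range(p + 1))

# maximum principle of the Bezier net: min(c) <= value <= max(c) on [0, 1]
c = np.array([0.2, 1.3, -0.4, 0.9])
xs = np.linspace(0.0, 1.0, 1001)
vals = [bernstein_eval(c, x) for x in xs]
```

The polynomial also interpolates the first and last coefficients at the endpoints, which is why constraining the Bézier net yields local discrete maximum principles.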
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on an extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results show the effectiveness of the proposed approach and confirm that it has a fast convergence rate and robust characteristics, which increase the efficiency of the proposed model and identification approach. For instance, a FIT of 92% is achieved for the CSTR process, where about 400 data points are used.
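The core RLS update that ERLS-type schemes extend can be sketched generically; this is a textbook exponentially weighted RLS, not the paper's Wiener-model algorithm (the inner iteration estimating the intermediate signal is omitted, and all names and constants are illustrative).

```python
import numpy as np

class RLS:
    """Exponentially weighted recursive least squares for a model
    y = w^T phi + noise, updated one sample at a time."""
    def __init__(self, n_params, forgetting=0.99, delta=100.0):
        self.w = np.zeros(n_params)
        self.P = delta * np.eye(n_params)   # inverse-covariance estimate
        self.lam = forgetting

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)            # gain vector
        err = y - self.w @ phi                        # a priori error
        self.w += k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err
```

A forgetting factor below 1 discounts old samples, which is what lets the estimate track parameters that vary over time.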
Lahanas, M; Baltas, D; Giannouli, S
2003-03-07
We consider the problem of the global convergence of gradient-based optimization algorithms for interstitial high-dose-rate (HDR) brachytherapy dose optimization using variance-based objectives. Possible local minima could lead to only sub-optimal solutions. We perform a configuration space analysis using a representative set of the entire non-dominated solution space. A set of three prostate implants is used in this study. We compare the results obtained by conjugate gradient algorithms, two variable metric algorithms and fast simulated annealing. For the variable metric algorithm BFGS from Numerical Recipes, large fluctuations are observed. The limited-memory L-BFGS algorithm and the conjugate gradient algorithm FRPR are globally convergent. Local minima or degenerate states are not observed. We study the possibility of obtaining a representative set of non-dominated solutions using optimal solution rearrangement and a warm start mechanism. For the surface and volume dose variance and their derivatives, a method is proposed which significantly reduces the number of required operations. The optimization time, ignoring a preprocessing step, is independent of the number of sampling points in the planning target volume. Multiobjective dose optimization in HDR brachytherapy using L-BFGS and a new modified computation method for the objectives and derivatives has been accelerated, depending on the number of sampling points, by a factor in the range 10-100.
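The shape of such a gradient-based dose optimization can be illustrated with a toy stand-in: a random "dose-rate kernel", a quadratic deviation-from-prescription objective with its analytic gradient, and SciPy's L-BFGS-B (which also handles the nonnegativity of dwell times). The kernel, prescription, and sizes below are entirely made up; the paper's surface/volume variance objectives and fast evaluation scheme are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_dwell, n_points = 10, 200
A = rng.random((n_points, n_dwell))      # hypothetical dose-rate kernel
t0 = np.ones(n_dwell)                    # initial dwell times

def objective(t):
    """Mean squared deviation of dose from a unit prescription, with its
    analytic gradient (L-BFGS needs gradients, not Hessians)."""
    r = A @ t - 1.0
    return (r ** 2).mean(), 2.0 / n_points * (A.T @ r)

res = minimize(objective, t0, jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * n_dwell)   # dwell times are nonnegative
```

Because the objective and gradient dominate the runtime, speeding up their evaluation (as the paper does for the variance objectives) accelerates the whole optimization.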
Xia, Peng; Shimozato, Yuki; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu
2013-01-01
We propose an image reconstruction algorithm for recovering high-frequency information in parallel phase-shifting digital holography. The proposed algorithm applies three kinds of interpolation and generates three different object waves. A Fourier transform is applied to each object wave, and the spatial-frequency domain of each Fourier-transformed object wave is divided into 3×3 segments. For each segment address, the segment with the least interpolation error among the three object waves is then extracted. The extracted segments are combined to form an information-enhanced spatial-frequency spectrum of the object wave, which is then inversely Fourier transformed. In this way the high-frequency information of the reconstructed image is recovered. The effectiveness of the proposed algorithm was verified by a numerical simulation and an experiment.
NASA Astrophysics Data System (ADS)
Schütze, Niels; Wöhling, Thomas; de Play, Michael
2010-05-01
Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO, when solving three real-world multi-objective optimization problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems, with formulations ranging from 40 up to 120 decision variables and 2 to 4 objectives. The computational effort required by each algorithm to reach the true Pareto front is also analyzed.
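All three algorithms rank candidate solutions by Pareto dominance; the basic dominance filter they build on can be written in a few lines (a generic sketch, not any of the three algorithms):

```python
import numpy as np

def pareto_front(F):
    """Indices of the non-dominated rows of objective matrix F (all
    objectives minimized). Row j dominates row i if it is <= everywhere
    and strictly < in at least one objective."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated_by = (F <= F[i]).all(axis=1) & (F < F[i]).any(axis=1)
        if dominated_by.any():
            keep[i] = False
    return np.where(keep)[0]
```

NSGA-II, for instance, extends this filter into a full non-dominated sort of the population plus a crowding-distance tie-breaker.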
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming
2013-05-01
An orthogonal experiment was conducted, and a multivariate nonlinear regression equation was fitted to capture the influence of an external transverse magnetic field and the Ar flow rate on weld quality in high-speed argon tungsten-arc welding (TIG for short) of condenser pipe. The magnetic induction and the Ar flow rate were taken as the optimization variables and the tensile strength of the weld as the objective function, and an optimal design was carried out on the basis of genetic algorithm theory. In line with production requirements, the optimization variables were constrained, and the computation was performed with the genetic algorithm facilities in MATLAB. A comparison between the optimized results and the experimental parameters showed that the genetic algorithm can select optimum process parameters in high-speed welding when there are too many variables to optimize experimentally, and that the optimized welding parameters agreed with the experimental results.
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can resolve the competing frequency requirements of wireless energy transmission and data communication. To achieve reliable wireless data communication based on high-order modulation for a visual prosthesis, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, because the traditional division-based DAPSK soft demodulation algorithm is complex to implement in hardware, an improved phase soft demodulation algorithm is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the Chase algorithm is combined with hard decoding to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method for computing symbol-level reliability as the product of bit reliabilities is derived, which reduces the number of test vectors required by the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments; a biological channel attenuation model is included in the MATLAB simulation of the ECC circuit, and the data rate is 8 Mbps in both. MATLAB simulation results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experiments show that when demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER. Then we use a bit error rate analyzer to
Li, Hongwei; Li, Jinzhong; Helm, Gregory A.; Pan, Dongfeng
2005-09-09
The utility of the PSA promoter for tissue-specific toxic gene therapy has been demonstrated in prostate cancer models. Characterization of the foreign gene overexpression elicited by the PSA promoter in normal animals should help evaluate therapy safety. Here we constructed an adenovirus vector (AdPSA-Luc) containing the firefly luciferase gene under the control of the 5837 bp prostate-specific antigen promoter. A charge-coupled device video camera was used to non-invasively image expression of firefly luciferase in nude mice on days 3, 7, and 11 after injection of 2 × 10^9 PFU of AdPSA-Luc virus via the tail vein. The result showed highly specific expression of the luciferase gene in the lungs of mice from day 7. The finding indicates potential limitations of suicide gene therapy of prostate cancer based on the selectivity of the PSA promoter. By contrast, it has encouraging implications for further development of PSA promoter vectors to enable gene therapy for pulmonary diseases.
The Importance of Specific Skills to High School Social Studies Teachers.
ERIC Educational Resources Information Center
Guenther, John
This study determines those specific social studies skills that high school social studies teachers believe students should have developed as a result of their instruction in a high school social studies program, and differences in the importance attached to specific skills between high school social studies teachers classified as having a…
Luu, Van T; Goujon, Jean-Yves; Meisterhans, Christian; Frommherz, Matthias; Bauer, Carsten
2015-05-15
The synthesis of a triple-tritiated isotopologue of the CRTh2 antagonist NVP-QAW039 (fevipiprant) with a specific activity >3 TBq/mmol is described. Key to the high specific activity is the methylation of a bench-stable dimeric disulfide precursor that is reduced in situ to the corresponding thiol monomer and methylated with [(3)H3]MeONos, which itself has a high specific activity. The high specific activity of the tritiated active pharmaceutical ingredient obtained by this build-up approach is discussed in the light of the specific activity usually to be expected when hydrogen-tritium exchange methods are applied.
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-01-01
The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784
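The flavour of subpixel motion extraction can be shown in a simplified 1-D form (this is a generic sketch, not the authors' two algorithms): take the integer argmax of the circular cross-correlation, then refine it with a parabola through the peak's three samples.

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimated shift of signal b relative to a: FFT-based circular
    cross-correlation peak, refined by a three-point parabola fit."""
    n = len(a)
    xc = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real
    k = int(np.argmax(xc))
    y0, y1, y2 = xc[(k - 1) % n], xc[k], xc[(k + 1) % n]
    denom = y0 - 2.0 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    shift = k + frac
    return shift - n if shift > n / 2 else shift   # map to signed shift
```

The parabola fit only touches three samples, which is why such local refinements are much faster than upsampling the whole cross-correlation.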
An adaptive sampling algorithm for Doppler-shift fluorescence velocimetry in high-speed flows
NASA Astrophysics Data System (ADS)
Le Page, Laurent M.; O'Byrne, Sean
2017-03-01
We present an approach to improving the efficiency of obtaining samples over a given domain for the peak location of Gaussian line-shapes. The method uses parameter estimates obtained from previous measurements to determine subsequent sampling locations. The method may be applied to determine the location of a spectral peak, where the monetary or time cost is too high to allow a less efficient search method, such as sampling at uniformly distributed domain locations, to be used. We demonstrate the algorithm using linear least-squares fitting of log-scaled planar laser-induced fluorescence data combined with Monte-Carlo simulation of measurements, to accurately determine the Doppler-shifted fluorescence peak frequency for each pixel of a fluorescence image. A simulated comparison between this approach and a uniformly spaced sampling approach is carried out using fits both for a single pixel and for a collection of pixels representing the fluorescence images that would be obtained in a hypersonic flow facility. In all cases, the peak location of Doppler-shifted line-shapes were determined to a similar precision with fewer samples than could be achieved using the more typical uniformly distributed sampling approach.
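The linear least-squares fit of log-scaled data works because a Gaussian line-shape becomes an exact parabola after taking logarithms, so the peak frequency falls out of the fitted coefficients. A minimal sketch with made-up numbers (the adaptive placement of subsequent samples is not shown):

```python
import numpy as np

def gaussian_peak(freq, intensity):
    """log(A exp(-(f - f0)^2 / (2 s^2))) = a f^2 + b f + c, so a linear
    least-squares parabola fit in log space gives the peak at -b / (2a)."""
    a, b, _ = np.polyfit(freq, np.log(intensity), 2)
    return -b / (2.0 * a)
```

With noisy data the same fit yields parameter estimates whose uncertainty can guide where the next sample should be placed, which is the idea behind the adaptive scheme.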
A high reliability detection algorithm for wireless ECG systems based on compressed sensing theory.
Balouchestani, Mohammadreza; Raahemifar, Kaamran; Krishnan, Sridhar
2013-01-01
Wireless Body Area Networks (WBANs) consist of small intelligent biomedical wireless sensors attached on or implanted in the body to collect vital biomedical data, providing Continuous Health Monitoring Systems (CHMS). WBANs promise to be a key element in next-generation wireless electrocardiogram (ECG) systems. ECG signals are widely used in health care as a noninvasive technique for the diagnosis of heart conditions. However, the use of conventional ECG systems is restricted by the patient's mobility, transmission capacity, and physical size. These limitations highlight the need for, and the advantages of, wireless ECG systems with low sampling rates and low power consumption. With this in mind, the Compressed Sensing (CS) procedure as a new sampling approach, combined with Shannon Energy Transformation (SET) and Peak Finding Schemes (PFS), is used to provide a robust low-complexity detection algorithm for gateways and access points in hospitals and medical centers with high probability and sufficient accuracy. Advanced wireless ECG systems based on our approach will be able to deliver healthcare not only to patients in hospitals and medical centers but also at their homes and workplaces, thus offering cost savings and improving quality of life. Our simulation results show an improvement of 0.1% in sensitivity as well as 1.5% in prediction level and detection accuracy.
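The SET/PFS stage can be sketched on a synthetic signal. Everything numeric here is an assumption (the 80 ms smoothing window, the 0.4 threshold, the 0.3 s refractory period, and the toy "ECG"); the paper's CS reconstruction front-end is not reproduced.

```python
import numpy as np

def shannon_energy_peaks(sig, fs, min_distance=0.3):
    """R-peak candidates via a Shannon energy envelope plus simple
    thresholded local-maximum peak finding."""
    s = sig / np.max(np.abs(sig))                # normalize to [-1, 1]
    s2 = np.clip(s * s, 1e-12, None)
    se = -s2 * np.log(s2)                        # Shannon energy
    w = int(0.08 * fs)                           # ~80 ms smoothing window
    env = np.convolve(se, np.ones(w) / w, mode="same")
    thresh = 0.4 * env.max()
    gap = int(min_distance * fs)                 # refractory period
    peaks, last = [], -gap
    for i in range(1, len(env) - 1):
        if env[i] > thresh and env[i] >= env[i - 1] and env[i] >= env[i + 1]:
            if i - last >= gap:
                peaks.append(i)
                last = i
    return peaks
```

Shannon energy deliberately emphasizes mid-amplitude samples over both the baseline and the tallest spikes, which flattens amplitude variation between beats before peak finding.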
Estimating the dimension of high-dimensional attractors: A comparison between two algorithms
NASA Astrophysics Data System (ADS)
Galka, A.; Maaß, T.; Pfister, G.
1998-10-01
We compare two algorithms for the numerical estimation of the correlation dimension from a finite set of vectors: the “classical” algorithm of Grassberger and Procaccia (GPA) and the recently proposed algorithm of Judd (JA). Data set size requirements and their relation to systematic and statistical errors of the estimates are investigated. It is demonstrated that correlation dimensions of the order of 6 can be correctly resolved on the basis of about 100,000 data points in the case of a continuous trajectory on a strange attractor; the minimum data set size is, however, noticeably dependent on the geometrical structure of the system from which the vectors were sampled.
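The GPA estimator referenced above can be sketched as follows (a naive O(N²) version on a toy data set whose true correlation dimension is 2; the radii and point count are illustrative):

```python
import numpy as np

def correlation_sum(pts, r):
    # GPA correlation sum C(r): fraction of distinct point pairs closer than r
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    close = np.sum(d[np.triu_indices(n, k=1)] < r)
    return 2.0 * close / (n * (n - 1))

def correlation_dimension(pts, r1, r2):
    # D2 estimated as the local slope of log C(r) between two radii
    c1, c2 = correlation_sum(pts, r1), correlation_sum(pts, r2)
    return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(1500, 2))  # filled unit square, D2 = 2
d2 = correlation_dimension(pts, 0.05, 0.1)
```

Edge effects and finite sample size bias the slope slightly below 2 here, a small-scale version of the systematic errors the paper quantifies for high-dimensional attractors.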
NASA Astrophysics Data System (ADS)
O'Malley, Peter; Babbush, Ryan; Kivlichan, Ian; Romero, Jhonathan; McClean, Jarrod; Tranter, Andrew; Barends, Rami; Kelly, Julian; Chen, Yu; Chen, Zijun; Jeffrey, Evan; Fowler, Austin; Megrant, Anthony; Mutus, Josh; Neill, Charles; Quintana, Christopher; Roushan, Pedram; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Theodore; Love, Peter; Aspuru-Guzik, Alan; Neven, Hartmut; Martinis, John
Quantum simulations of molecules have the potential to calculate industrially important chemical parameters beyond the reach of classical methods with relatively modest quantum resources. Recent years have seen dramatic progress in both superconducting qubits and quantum chemistry algorithms. Here, we present experimental demonstrations of two fully scalable algorithms for finding the dissociation energy of hydrogen: the variational quantum eigensolver and iterative phase estimation. This represents the first calculation of a dissociation energy to chemical accuracy with a non-precompiled algorithm. These results show the promise of chemistry as the ``killer app'' for quantum computers, even before the advent of full error-correction.
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Kanaev, A. A.
2012-03-01
The CABARET computational algorithm is generalized to one-dimensional scalar quasilinear hyperbolic partial differential equations with allowance for inequality constraints on the solution. This generalization can be used to analyze seepage of liquid radioactive wastes through the unsaturated zone.
A rain pixel recovery algorithm for videos with highly dynamic scenes.
Jie Chen; Lap-Pui Chau
2014-03-01
Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, in which photometric, chromatic, and probabilistic properties of rain have been exploited to detect and remove the rainy effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, they give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After applying photometric and chromatic constraints for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion cues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm performs much better on rainy scenes with large motion than existing algorithms.
Optimal band selection for high dimensional remote sensing data using genetic algorithm
NASA Astrophysics Data System (ADS)
Zhang, Xianfeng; Sun, Quan; Li, Jonathan
2009-06-01
A 'fused' method may not be suitable for reducing the dimensionality of data, and a band/feature selection method needs to be used for selecting an optimal subset of original data bands. This study examined the efficiency of a genetic algorithm (GA) in band selection for remote sensing classification. A GA-based algorithm for band selection was designed in which a Bhattacharyya distance index that indicates separability between classes of interest is used as the fitness function. A binary string chromosome is designed in which each gene location has a value of 1, indicating that the corresponding band is included, or 0, indicating that it is excluded. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
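A minimal sketch of such a GA, in Python rather than MATLAB, with a diagonal-covariance Bhattacharyya distance as the fitness function (the toy class statistics and all GA parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def bhattacharyya(m1, v1, m2, v2):
    # Bhattacharyya distance between two diagonal-covariance Gaussian classes
    v = (v1 + v2) / 2.0
    return np.sum((m1 - m2) ** 2 / (8.0 * v) + 0.5 * np.log(v / np.sqrt(v1 * v2)))

def fitness(chrom, m1, v1, m2, v2):
    # class separability over the selected bands (genes equal to 1)
    sel = chrom.astype(bool)
    return bhattacharyya(m1[sel], v1[sel], m2[sel], v2[sel]) if sel.any() else 0.0

def ga_band_select(m1, v1, m2, v2, pop=30, gens=50, pmut=0.02):
    n = len(m1)
    P = rng.integers(0, 2, size=(pop, n))  # binary string chromosomes
    for _ in range(gens):
        f = np.array([fitness(c, m1, v1, m2, v2) for c in P])
        a, b = rng.integers(0, pop, size=(2, pop))      # binary tournament
        parents = P[np.where(f[a] >= f[b], a, b)]
        child = parents.copy()
        for i in range(0, pop - 1, 2):                  # one-point crossover
            cut = rng.integers(1, n)
            child[i, cut:], child[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
        flip = rng.random(child.shape) < pmut           # bit-flip mutation
        P = np.where(flip, 1 - child, child)
    f = np.array([fitness(c, m1, v1, m2, v2) for c in P])
    return P[np.argmax(f)]

# toy problem: 10 bands, but only bands 2, 5 and 7 separate the classes
m1, m2 = np.zeros(10), np.zeros(10)
m2[[2, 5, 7]] = 3.0
v1, v2 = np.ones(10), np.ones(10)
best = ga_band_select(m1, v1, m2, v2)
```

The uninformative bands contribute zero separability, so the GA is driven to include the three informative bands.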
Correlation algorithm for computing the velocity fields in microchannel flows with high resolution
NASA Astrophysics Data System (ADS)
Karchevskiy, M. N.; Tokarev, M. P.; Yagodnitsyna, A. A.; Kozinkin, L. A.
2015-11-01
A cross-correlation algorithm that yields the velocity field of a flow with a spatial resolution of up to a single pixel per vector has been realized in this work. It provides new information about the structure of microflows and considerably increases the accuracy of flow velocity field measurements. In addition, the algorithm yields information about velocity fluctuations in the flow structure. The algorithm was tested on synthetic data with different numbers of test images, on which the velocity distribution was specified by a Siemens star. Experimental validation was done on the data provided within the international project "4th International PIV Challenge". In addition, a detailed comparison with a previously realized Particle Image Velocimetry algorithm was carried out.
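The core of any such cross-correlation algorithm is locating the peak of the inter-frame correlation map; for one interrogation window this can be sketched as follows (synthetic data; integer-pixel accuracy only, whereas real PIV implementations add sub-pixel peak interpolation):

```python
import numpy as np

def displacement(win_a, win_b):
    # FFT-based circular cross-correlation of two interrogation windows;
    # the correlation peak gives the integer-pixel displacement
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.real(np.fft.ifft2(fa.conj() * fb))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = corr.shape
    if dy > n // 2:  # map wrapped indices to signed shifts
        dy -= n
    if dx > m // 2:
        dx -= m
    return int(dy), int(dx)

# synthetic "particle image" shifted by (3, -2) pixels
rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.roll(a, (3, -2), axis=(0, 1))
shift = displacement(a, b)
```

Single-pixel-per-vector resolution, as in the paper, comes from evaluating such correlations densely with overlapping windows rather than from this basic building block alone.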
A high efficient and fast kNN algorithm based on CUDA
NASA Astrophysics Data System (ADS)
Pei, Tong; Zhang, Yanxia; Zhao, Yongheng
2010-07-01
The k Nearest Neighbor (kNN) algorithm is an effective classification approach in the statistical methods of pattern recognition. But it can be a rather time-consuming approach when applied to massive data, especially facing large survey projects in astronomy. NVIDIA CUDA is a general-purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. In this paper, we implement a CUDA-based kNN algorithm and compare its performance with a CPU-only kNN algorithm, using single-precision and double-precision datatypes, on classifying celestial objects. The results demonstrate that CUDA can speed up the kNN algorithm effectively and could be useful in astronomical applications.
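For reference, the brute-force computation that a CUDA kernel parallelizes (one thread per query-training distance) looks like this on the CPU (toy two-class data; vectorized with NumPy rather than CUDA):

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=3):
    # brute-force kNN: the full query-to-training distance matrix below is
    # exactly the computation a GPU would evaluate in parallel
    d = np.linalg.norm(train_x[None, :, :] - query[:, None, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    return np.array([np.bincount(row).argmax() for row in votes])  # majority vote

# two well-separated toy classes of "celestial objects"
rng = np.random.default_rng(3)
train_x = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
train_y = np.array([0] * 50 + [1] * 50)
pred = knn_classify(train_x, train_y, np.array([[0.1, 0.0], [2.9, 3.1]]), k=5)
```

The distance matrix grows as O(queries × training points), which is why the GPU's massive parallelism pays off on survey-scale catalogs.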
Stefanescu, Horia; Radu, Corina; Procopet, Bogdan; Lupsor-Platon, Monica; Habic, Alina; Tantau, Marcel; Grigorescu, Mircea
2015-02-01
Liver stiffness (LS), spleen stiffness (SS) and serum markers have been proposed to non-invasively assess portal hypertension or oesophageal varices (EV) in cirrhotic patients. We aimed to evaluate the performance of a stepwise algorithm that combines the Lok score with LS and SS for diagnosing high-risk EV (HREV) and to compare it with other already-validated non-invasive methods. We performed a cross-sectional study including 136 consecutive compensated cirrhotic patients with various aetiologies, divided into training (90) and validation (46) sets. Endoscopy was performed within 6 months of inclusion for EV screening. Spleen diameter was assessed by ultrasonography. LS and SS were measured using Fibroscan. The Lok score, platelet count/spleen diameter ratio, liver stiffness measurement-spleen diameter to platelet ratio score (LSPS) and oesophageal varices risk score (EVRS) were calculated and their diagnostic accuracy for HREV was assessed. The algorithm classified patients as having/not having HREV. Its performance was tested and compared in both groups. In the training set, all variables could select patients with HREV with moderate accuracy, the best being LSPS (AUROC = 0.818; 0.93 sensitivity; 0.63 specificity). EVRS, however, was the only independent predictor of HREV (OR = 1.521; P = 0.032). The algorithm correctly classified 69 (76.66%) patients in the training set (P < 0.0001) and 36 (78.26%) in the validation one. In the validation group, the algorithm performed slightly better than LSPS and EVRS, showing 100% sensitivity and negative predictive value. The stepwise algorithm combining the Lok score, LS and SS could be used to select patients at low risk of having HREV, who may benefit from less frequent endoscopic evaluation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix
2017-01-01
Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989
Methods for High-Order Multi-Scale and Stochastic Problems: Analysis, Algorithms, and Applications
2016-10-17
...finite element building blocks, including the assembly of the element load vectors and element stiffness matrices. Hierarchic bases have been prevalent... most efficient hierarchic bases currently in use. An algorithm for the construction of the element matrices in optimal complexity for uniform order... constructs the non-uniform order element matrices in optimal complexity. In the final part of this work, we extend the algorithm of [2] to the non-uniform...
Hallen, Mark A; Donald, Bruce R
2016-05-01
Practical protein design problems require designing sequences with a combination of affinity, stability, and specificity requirements. Multistate protein design algorithms model multiple structural or binding "states" of a protein to address these requirements. comets provides a new level of versatile, efficient, and provable multistate design. It provably returns the minimum with respect to sequence of any desired linear combination of the energies of multiple protein states, subject to constraints on other linear combinations. Thus, it can target nearly any combination of affinity (to one or multiple ligands), specificity, and stability (for multiple states if needed). Empirical calculations on 52 protein design problems showed comets is far more efficient than the previous state of the art for provable multistate design (exhaustive search over sequences). comets can handle a very wide range of protein flexibility and can enumerate a gap-free list of the best constraint-satisfying sequences in order of objective function value.
Ban, Hiroshi; Yamamoto, Hiroki
2013-05-31
In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free.
White, Nicole; Benton, Miles; Kennedy, Daniel; Fox, Andrew; Griffiths, Lyn; Lea, Rodney; Mengersen, Kerrie
2017-01-01
Cell- and sex-specific differences in DNA methylation are major sources of epigenetic variation in whole blood. Heterogeneity attributable to cell type has motivated the identification of cell-specific methylation at the CpG level; however, statistical methods for this purpose have been limited to pairwise comparisons between cell types or between the cell type of interest and whole blood. We developed a Bayesian model selection algorithm for the identification of cell-specific methylation profiles that incorporates knowledge of shared cell lineage and allows for the identification of differential methylation profiles in one or more cell types simultaneously. Under the proposed methodology, sex-specific differences in methylation by cell type are also assessed. Using publicly available, cell-sorted methylation data, we show that 51.3% of female CpG markers and 61.4% of male CpG markers identified were associated with differential methylation in more than one cell type. The impact of cell lineage on differential methylation was also highlighted. An evaluation of sex-specific differences revealed differences in CD56+NK methylation, within both single- and multi-cell dependent methylation patterns. Our findings demonstrate the need to account for cell lineage in studies of differential methylation and associated sex effects.
Design and Implementation of High-Speed Input-Queued Switches Based on a Fair Scheduling Algorithm
NASA Astrophysics Data System (ADS)
Hu, Qingsheng; Zhao, Hua-An
To increase both the capacity and the processing speed of input-queued (IQ) switches, we proposed a fair scalable scheduling architecture (FSSA). By employing an FSSA comprised of several cascaded sub-schedulers, large-scale high-performance switches or routers can be realized without the capacity limitation of a monolithic device. In this paper, we present a fair scheduling algorithm named FSSA_DI based on an improved FSSA in which a distributed iteration scheme is employed; the scheduler performance can be improved and the processing time reduced as well. Simulation results show that FSSA_DI achieves better performance on average delay and throughput under heavy loads compared to other existing algorithms. Moreover, a practical 64 × 64 FSSA using the FSSA_DI algorithm was implemented on four Xilinx Virtex-4 FPGAs. Measurement results show that the data rate of our solution can reach 800 Mbps, and that the tradeoff between performance and hardware complexity has been resolved effectively.
Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo
2012-01-01
Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.
Shekar, Venkateswaran; Shah, Abhi; Shadid, Mohammad; Wu, Jing-Tao; Bolleddula, Jayaprakasam; Chowdhury, Swapan
2016-08-01
Metabolite identification without radiolabeled compound is often challenging because of interference from matrix-related components. A novel and effective background subtraction algorithm (A-BgS) has been developed to process high-resolution mass spectral data that can selectively remove matrix-related components. The use of a graphics processing unit with a multicore central processing unit enhanced processing speed several thousand-fold compared with a single central processing unit. The A-BgS algorithm effectively removes background peaks from the mass spectra of biological matrices, as demonstrated by the identification of metabolites of delavirdine and metoclopramide. The A-BgS algorithm is fast, user friendly, and provides reliable removal of matrix-related ions from biological samples, and thus can be very helpful in the detection and identification of in vivo and in vitro metabolites.
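The essence of such background subtraction is removing sample peaks that match a control-matrix peak within an m/z tolerance; a highly simplified single-spectrum sketch (the peak lists and tolerance below are invented for illustration, and the real algorithm also uses retention time and intensity information):

```python
import bisect

def background_subtract(sample, control, tol=0.005):
    # drop every sample peak whose m/z matches a control-matrix peak within
    # tol Da; the survivors are candidate drug-related components
    control_mz = sorted(mz for mz, _ in control)
    kept = []
    for mz, inten in sample:
        i = bisect.bisect_left(control_mz, mz)
        neighbors = control_mz[max(0, i - 1):i + 1]  # nearest control peaks
        if not any(abs(mz - c) <= tol for c in neighbors):
            kept.append((mz, inten))
    return kept

# matrix peaks at m/z 149.023 and 301.141; the sample adds a peak at 432.201
control = [(149.023, 5e4), (301.141, 2e4)]
sample = [(149.024, 4.8e4), (301.140, 2.1e4), (432.201, 9e3)]
clean = background_subtract(sample, control)
```

The per-peak matching is independent across peaks and spectra, which is what makes the GPU parallelization reported in the abstract effective.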
Matson, Charles L; Borelli, Kathy; Jefferies, Stuart; Beckner, Charles C; Hege, E Keith; Lloyd-Hart, Michael
2009-01-01
We report a multiframe blind deconvolution algorithm that we have developed for imaging through the atmosphere. The algorithm has been parallelized to a significant degree for execution on high-performance computers, with an emphasis on distributed-memory systems so that it can be hosted on commodity clusters. As a result, image restorations can be obtained in seconds to minutes. We have compared and quantified the quality of its image restorations relative to the associated Cramér-Rao lower bounds (when they can be calculated). We describe the algorithm and its parallelization in detail, demonstrate the scalability of its parallelization across distributed-memory computer nodes, discuss the results of comparing sample variances of its output to the associated Cramér-Rao lower bounds, and present image restorations obtained by using data collected with ground-based telescopes.
NASA Astrophysics Data System (ADS)
Ghaffarian, Saman; Ghaffarian, Salar
2014-11-01
This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. The ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings; and (3) masking out the final building detection results from the PFICA outputs utilizing the K-means clustering algorithm with two clusters and conducting simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision performances, respectively.
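The masking step relies on plain two-cluster K-means; a minimal 1-D sketch of that sub-step (toy data invented for illustration; the real input would be a PFICA output channel):

```python
import numpy as np

def kmeans2(x, iters=20, seed=4):
    # plain two-cluster K-means on a 1-D feature
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), 2, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(2)])
    return labels, centers

# toy per-pixel "building likelihood": background near 0.1, buildings near 0.9
rng = np.random.default_rng(5)
x = np.concatenate([np.full(80, 0.1), np.full(20, 0.9)]) + rng.normal(0.0, 0.02, 100)
labels, centers = kmeans2(x)
```

The cluster with the higher center would then be kept as the building mask and cleaned with morphological operations, as in step (3).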
NASA Astrophysics Data System (ADS)
Quan, Yu; He, Dawei; Wang, Yongsheng; Wang, Pengfei
2014-08-01
Owing to their electrical isolation, corrosion resistance, and capacity for quasi-distributed sensing, fiber Bragg grating sensors have been progressively studied for high-speed railway applications. Existing axle counter systems based on fiber Bragg grating sensors are not appropriate for high-speed railways because of shortcomings in sensor emplacement, low sampling rates, and unoptimized peak-searching algorithms. We propose a new design for a high-speed railway axle counter based on a high-speed fiber Bragg grating demodulation system. We also optimized the peak-searching algorithm by synthesizing the data of the three sensors, introducing a time axis, Gaussian fitting, and finite element analysis. The feasibility was verified by field experiments.
Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric. E-mail: Eric.Vigneault@chuq.qc.ca
2007-02-01
Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate-specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast simulated annealing inverse planning algorithm with high-activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low-risk group (initial PSA ≤10, Gleason ≤6, and stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5%, with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs) and 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free and 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: Inverse planning with fast simulated annealing and high-activity seeds gives a 5-year bFFS comparable with the best published series, with a low toxicity profile.
Sreevatsan, Srinand; Bookout, Jack B.; Ringpis, Fidel M.; Pottathil, Mridula R.; Marshall, David J.; De Arruda, Monika; Murvine, Christopher; Fors, Lance; Pottathil, Raveendran M.; Barathur, Raj R.
1998-01-01
This study was designed to analyze the feasibility and validity of using Cleavase Fragment Length Polymorphism (CFLP) analysis as an alternative to DNA sequencing for high-throughput screening of hepatitis C virus (HCV) genotypes in a high-volume molecular pathology laboratory setting. By using a 244-bp amplicon from the 5′ untranslated region of the HCV genome, 61 clinical samples received for HCV reverse transcription-PCR (RT-PCR) were genotyped by this method. The genotype frequencies assigned by the CFLP method were 44.3% for type 1a, 26.2% for type 1b, 13.1% for type 2b, and 5% for type 3a. The results obtained by nucleotide sequence analysis provided 100% concordance with those obtained by CFLP analysis at the major genotype level, with resolvable differences as to subtype designations for five samples. CFLP analysis-derived HCV genotype frequencies also concurred with the national estimates (N. N. Zein et al., Ann. Intern. Med. 125:634-639, 1996). Reanalysis of 42 of these samples in parallel in a different research laboratory reproduced the CFLP fingerprints for 100% of the samples. Similarly, the major subtype designations for 19 samples subjected to different incubation temperature-time conditions were also 100% reproducible. Comparative cost analysis for genotyping of HCV by line probe assay, CFLP analysis, and automated DNA sequencing indicated that the average cost per amplicon was lowest for CFLP analysis, at $20 (direct costs). On the basis of these findings, we propose that CFLP analysis is a robust, sensitive, specific, and economical method for large-scale screening of HCV-infected patients for alpha interferon-resistant HCV genotypes. The paper describes an algorithm that uses RT-PCR-based qualitative screening of samples for HCV detection as a reflex test and also addresses ambiguous genotypes. PMID:9650932
An evaluation of SEBAL algorithm using high resolution aircraft data acquired during BEAREX07
NASA Astrophysics Data System (ADS)
Paul, G.; Gowda, P. H.; Prasad, V. P.; Howell, T. A.; Staggenborg, S.
2010-12-01
Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade, SEBAL has been tested over various regions and has found application in solving water resources and irrigation problems. This research combines high-resolution remote sensing data and field measurements of surface radiation and agro-meteorological variables to review the various SEBAL steps for mapping ET in the Texas High Plains (THP). High-resolution aircraft images (0.5-1.8 m) acquired during the Bushland Evapotranspiration and Agricultural Remote Sensing Experiment 2007 (BEAREX07), conducted at the USDA-ARS Conservation and Production Research Laboratory in Bushland, Texas, were utilized to evaluate SEBAL. The accuracy of individual relationships and predicted ET was investigated using observed hourly ET rates from 4 large weighing lysimeters, each located at the center of a 4.7 ha field. The uniqueness and strength of this study come from the fact that it evaluates SEBAL for irrigated and dryland conditions simultaneously, with the lysimeter fields planted to irrigated forage sorghum, irrigated forage corn, dryland clumped grain sorghum, and dryland row sorghum. Improved coefficients for local conditions were developed for the computation of the roughness length for momentum transport. The decision involved in the selection of dry and wet pixels, which essentially determines the partitioning of the available energy between sensible (H) and latent (LE) heat fluxes, is discussed. The difference in roughness lengths, referred to as the kB-1 parameter, was modified in the current study. Performance of SEBAL was evaluated using mean bias error (MBE) and root mean square error (RMSE). An RMSE of ±37.68 W m-2 and ±0.11 mm h-1 was observed for the net radiation and hourly actual ET, respectively.
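SEBAL ultimately obtains LE as the residual of the surface energy balance and converts it to an ET depth; a minimal sketch of that final step, with invented flux values (the hard part of SEBAL, estimating H from the dry/wet pixel calibration, is not shown):

```python
def latent_heat_flux(rn, g, h):
    # surface energy balance residual, all fluxes in W m-2: LE = Rn - G - H
    return rn - g - h

def et_mm_per_hour(le, lam=2.45e6):
    # latent heat flux (W m-2) to hourly ET depth (mm); lam is the latent
    # heat of vaporization in J kg-1, and 1 kg of water per m2 is a 1 mm layer
    return le * 3600.0 / lam

le = latent_heat_flux(rn=600.0, g=80.0, h=150.0)
et = et_mm_per_hour(le)
```

With these illustrative midday fluxes the residual is 370 W m-2, or roughly half a millimeter of water per hour, the same order as the lysimeter-observed rates used for validation.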
Dam, Tarun K; Cavada, Benildo S; Nagano, Celso S; Rocha, Bruno Am; Benevides, Raquel G; Nascimento, Kyria S; de Sousa, Luiz Ag; Oscarson, Stefan; Brewer, C Fred
2011-07-01
The legume species Cymbosema roseum of the Diocleinae subtribe produces at least two different seed lectins. The present study demonstrates that C. roseum lectin I (CRL I) binds with high affinity to the "core" trimannoside of N-linked oligosaccharides. Cymbosema roseum lectin II (CRL II), on the other hand, binds with high affinity to the blood group H trisaccharide (Fucα1,2Galβ1,4GlcNAc-). Thermodynamic and hemagglutination inhibition studies reveal the fine binding specificities of the two lectins. Data obtained with a complete set of monodeoxy analogs of the core trimannoside indicate that CRL I recognizes the 3-, 4- and 6-hydroxyl groups of the α(1,6) Man residue, the 3- and 4-hydroxyl groups of the α(1,3) Man residue, and the 2- and 4-hydroxyl groups of the central Man residue of the trimannoside. CRL I possesses enhanced affinities for the Man5 oligomannose glycan and a biantennary complex glycan, as well as for glycoproteins containing high-mannose glycans. On the other hand, CRL II distinguishes the blood group H type II epitope from the Lewis(x), Lewis(y), Lewis(a) and Lewis(b) epitopes. CRL II also distinguishes between the blood group H type II and type I trisaccharides. CRL I and CRL II possess differences in fine specificities when compared with other reported mannose- and fucose-recognizing lectins. This is the first report of a mannose-specific lectin (CRL I) and a blood group H type II-specific lectin (CRL II) from seeds of a member of the Diocleinae subtribe.
High-order algorithms for compressible reacting flow with complex chemistry
NASA Astrophysics Data System (ADS)
Emmett, Matthew; Zhang, Weiqun; Bell, John B.
2014-05-01
In this paper we describe a numerical algorithm for integrating the multicomponent, reacting, compressible Navier-Stokes equations, targeted for direct numerical simulation of combustion phenomena. The algorithm addresses two shortcomings of previous methods. First, it incorporates an eighth-order narrow stencil approximation of diffusive terms that reduces the communication compared to existing methods and removes the need to use a filtering algorithm to remove Nyquist frequency oscillations that are not damped with traditional approaches. The methodology also incorporates a multirate temporal integration strategy that provides an efficient mechanism for treating chemical mechanisms that are stiff relative to fluid dynamical time-scales. The overall methodology is eighth order in space with options for fourth order to eighth order in time. The implementation uses a hybrid programming model designed for effective utilisation of many-core architectures. We present numerical results demonstrating the convergence properties of the algorithm with realistic chemical kinetics and illustrating its performance characteristics. We also present a validation example showing that the algorithm matches detailed results obtained with an established low Mach number solver.
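The narrow-stencil idea rests on standard high-order central differences; as a sketch of the spatial accuracy involved (not the authors' scheme, which is more elaborate), here is the classical eighth-order nine-point first-derivative stencil applied to a periodic test function:

```python
import numpy as np

def deriv8(f, dx):
    # classical eighth-order central difference for df/dx on a periodic grid
    coeffs = {-4: 1/280, -3: -4/105, -2: 1/5, -1: -4/5,
               1: 4/5,   2: -1/5,    3: 4/105, 4: -1/280}
    out = np.zeros_like(f)
    for k, c in coeffs.items():
        out += c * np.roll(f, -k)   # np.roll(f, -k)[i] == f[i + k]
    return out / dx

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err8 = np.max(np.abs(deriv8(np.sin(x), dx) - np.cos(x)))
# second-order central difference for comparison
d2 = (np.roll(np.sin(x), -1) - np.roll(np.sin(x), 1)) / (2.0 * dx)
err2 = np.max(np.abs(d2 - np.cos(x)))
```

On this modest 64-point grid the eighth-order stencil is already many orders of magnitude more accurate than the second-order one, which is what lets high-order codes resolve combustion dynamics with far fewer points per wavelength.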
A comparison of high precision F0 extraction algorithms for sustained vowels.
Parsa, V; Jamieson, D G
1999-02-01
Perturbation analysis of sustained vowel waveforms is used routinely in the clinical evaluation of pathological voices and in monitoring patient progress during treatment. Accurate estimation of voice fundamental frequency (F0) is essential for accurate perturbation analysis. Several algorithms have been proposed for fundamental frequency extraction. To be appropriate for clinical use, a key consideration is that an F0 extraction algorithm be robust to such extraneous factors as the presence of noise and modulations in voice frequency and amplitude that are commonly associated with the voice pathologies under study. This work examines the performance of seven F0 algorithms, based on the average magnitude difference function (AMDF), the input autocorrelation function (AC), the autocorrelation function of the center-clipped signal (ACC), the autocorrelation function of the inverse filtered signal (IFAC), the signal cepstrum (CEP), the Harmonic Product Spectrum (HPS) of the signal, and the waveform matching function (WM). These algorithms were evaluated using sustained vowel samples collected from normal and pathological subjects. The effect of background noise and of frequency and amplitude modulations on these algorithms was also investigated, using synthetic vowel waveforms.
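One family of these methods, the autocorrelation approach, can be sketched minimally as follows (an illustration on a synthetic vowel, not the paper's implementation): F0 is estimated by locating the autocorrelation peak within a physiologically plausible lag range.

```python
import numpy as np

def f0_autocorr(x, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 from the autocorrelation peak within the lag
    range corresponding to [fmin, fmax] Hz."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo = int(fs / fmax)          # shortest admissible period (samples)
    hi = int(fs / fmin)          # longest admissible period (samples)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.05 * fs)) / fs
# Synthetic "vowel": 120 Hz fundamental plus two weaker harmonics.
x = (np.sin(2*np.pi*120*t) + 0.5*np.sin(2*np.pi*240*t)
     + 0.25*np.sin(2*np.pi*360*t))
f0 = f0_autocorr(x, fs)
print(round(f0, 1))   # close to 120 Hz (quantized by the integer lag)
```

Restricting the search to the [fmin, fmax] lag window is what keeps the estimator from latching onto harmonics or subharmonics, a failure mode that matters for the pathological voices discussed above.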
Yu, Shanen; Liu, Shuai; Jiang, Peng
2016-01-01
Most existing deployment algorithms for event coverage in underwater wireless sensor networks (UWSNs) do not consider that network communication is non-uniform in three-dimensional underwater environments. Such algorithms ignore that nodes are distributed at different depths and have different probabilities of data acquisition, leading to imbalances in overall network energy consumption, decreased network performance, and poor, unreliable network operation in later stages. Therefore, in this study, we propose an uneven-cluster deployment algorithm based on network layering for event coverage. First, according to the energy consumption imposed by the communication load at different depths of the underwater network, we derive the expected number of deployment nodes and the distribution density for each network layer through theoretical analysis. The network is then divided into multiple layers based on uneven clusters, and the heterogeneous communication radii of the nodes improve the network connectivity rate. A recovery strategy is used to balance the energy consumption of nodes within a cluster and can efficiently reconstruct the network topology, ensuring that the network maintains high coverage and connectivity rates over a long period of data acquisition. Simulation results show that the proposed algorithm improves network reliability and prolongs network lifetime by significantly reducing the blind movement of network nodes while maintaining high network coverage and connectivity rates. PMID:27973448
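The load-balancing intuition can be sketched as follows (a hypothetical simplification, not the paper's derivation): if each layer must relay the traffic of all deeper layers, then per-layer relay load grows toward the surface, and deploying node counts proportional to that load equalizes per-node energy drain.

```python
# Illustrative model: with L layers numbered from the surface down,
# layer i forwards its own traffic plus that of all deeper layers,
# so its relay load is proportional to L - i + 1.
def layer_node_counts(total_nodes, num_layers):
    loads = [num_layers - i for i in range(num_layers)]  # surface carries most
    total_load = sum(loads)
    return [round(total_nodes * w / total_load) for w in loads]

print(layer_node_counts(100, 4))  # heavier deployment near the surface
```

Under this toy model, 100 nodes over 4 layers split 40/30/20/10 from surface to bottom, matching the qualitative claim that uniform deployment overloads the relaying layers.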
Peto, Myron; Kloczkowski, Andrzej; Honavar, Vasant; Jernigan, Robert L
2008-01-01
Background By using a standard Support Vector Machine (SVM) with a Sequential Minimal Optimization (SMO) method of training, Naïve Bayes and other machine learning algorithms, we are able to distinguish between two classes of protein sequences: those folding to highly-designable conformations, and those folding to poorly- or non-designable conformations. Results First, we generate all possible compact lattice conformations for the specified shape (a hexagon or a triangle) on the 2D triangular lattice. Then we generate all possible binary hydrophobic/polar (H/P) sequences and, using a specified energy function, thread them through all of these compact conformations. If for a given sequence the lowest energy is obtained for a particular lattice conformation, we assume that this sequence folds to that conformation. Highly-designable conformations have many H/P sequences folding to them, while poorly-designable conformations have few or no H/P sequences. We classify sequences as folding to either highly- or poorly-designable conformations. We have randomly selected subsets of the sequences belonging to highly-designable and poorly-designable conformations and used them to train several different standard machine learning algorithms. Conclusion By using these machine learning algorithms with ten-fold cross-validation, we are able to classify the two classes of sequences with high accuracy, in some cases exceeding 95%. PMID:19014713
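The ten-fold cross-validation protocol can be sketched on synthetic binary "H/P-like" data, with a simple nearest-centroid classifier standing in for the SVM/SMO and Naïve Bayes learners of the paper (everything below is an illustrative stand-in, not the study's data or models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for H/P sequences: class 0 is biased toward P (0),
# class 1 toward H (1).  The real features were binary sequences
# threaded through compact lattice conformations.
n, length = 200, 30
X0 = (rng.random((n, length)) < 0.35).astype(float)
X1 = (rng.random((n, length)) < 0.65).astype(float)
X = np.vstack([X0, X1])
y = np.array([0]*n + [1]*n)

def nearest_centroid_cv(X, y, k=10):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    correct = 0
    for f in folds:
        mask = np.ones(len(y), bool)
        mask[f] = False                      # hold out this fold
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = (np.linalg.norm(X[f] - c1, axis=1)
                < np.linalg.norm(X[f] - c0, axis=1)).astype(int)
        correct += (pred == y[f]).sum()
    return correct / len(y)

acc = nearest_centroid_cv(X, y)
print(acc)
```

Each fold is held out exactly once, so the reported accuracy is an out-of-sample estimate, which is the property the abstract's ">95%" figures rely on.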
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ at a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the method it employs to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
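A minimal sketch of sliced inverse regression, the SDR estimator named above, on a toy problem with a known one-dimensional subspace (this simplified version standardizes inputs marginally and so assumes roughly uncorrelated inputs; full SIR whitens by the sample covariance):

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression: estimate SDR directions from the
    covariance of slice-wise means of the standardized inputs."""
    n, p = X.shape
    mu, sd = X.mean(0), X.std(0)
    Z = (X - mu) / sd
    order = np.argsort(y)                 # slice samples by sorted response
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for s in slices:
        m = Z[s].mean(0)                  # inverse-regression curve E[Z | y]
        M += len(s) / n * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)        # eigenvalues ascending
    return vecs[:, -n_dirs:] / sd[:, None]  # back to the original scale

rng = np.random.default_rng(1)
n, p = 2000, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[0] = 1.0         # true one-dimensional SDR subspace
y = (X @ beta)**3 + 0.1 * rng.standard_normal(n)
d = sir_directions(X, y).ravel()
d /= np.linalg.norm(d)
print(abs(d[0]))                          # alignment with the true direction
```

The QoI here depends on the ten inputs only through a single linear combination, so the top eigenvector of the slice-mean covariance recovers that direction, after which a one-dimensional response surface suffices.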
An Evaluation of SEBAL Algorithm Using High Resolution Aircraft Data Acquired During BEAREX07
USDA-ARS?s Scientific Manuscript database
Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade, SEBAL has been tested over various...
GPU-based ray tracing algorithm for high-speed propagation prediction in typical indoor environments
NASA Astrophysics Data System (ADS)
Guo, Lixin; Guan, Xiaowei; Liu, Zhongyu
2015-10-01
A fast 3-D ray tracing propagation prediction model based on a virtual source tree is presented in this paper, whose theoretical foundations are geometrical optics (GO) and the uniform theory of diffraction (UTD). For a typical single-room indoor scene, taking both geometrical and electromagnetic information into account, several acceleration techniques are adopted to raise the efficiency of the ray tracing algorithm. The simulation results indicate that the runtime of the ray tracing algorithm increases sharply when the number of objects in the room grows large. Therefore, GPU acceleration technology is used to solve that problem. Because GPUs are better suited to arithmetic computation than to branching logic, tens of thousands of threads in CUDA programs can compute simultaneously, achieving massively parallel acceleration. Finally, a typical single room with several objects is simulated using the serial ray tracing algorithm and the parallel one respectively. The results show that, compared with the serial algorithm, the GPU-based one achieves far greater efficiency.
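The data-parallel character of the GPU approach can be illustrated in NumPy rather than CUDA (each array row plays the role of one thread; this is an illustrative kernel for the simplest primitive, ray-sphere intersection, and not the authors' implementation):

```python
import numpy as np

def ray_sphere_hits(origins, dirs, center, radius):
    """Boolean array: does each ray hit the sphere?  All rays are
    tested at once, mirroring one-thread-per-ray GPU execution.
    Assumes unit-length direction vectors."""
    oc = origins - center
    b = np.einsum("ij,ij->i", oc, dirs)
    c = np.einsum("ij,ij->i", oc, oc) - radius**2
    disc = b*b - c                        # quadratic discriminant per ray
    t = -b - np.sqrt(np.where(disc >= 0, disc, 0))
    return (disc >= 0) & (t > 0)          # real, forward intersection

# 4 rays from the origin, increasingly tilted away from a sphere
# at (5, 0, 0) with radius 1.
origins = np.zeros((4, 3))
dirs = np.array([[1, 0, 0], [1, 0.1, 0], [1, 0.5, 0], [1, 2, 0]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
hits = ray_sphere_hits(origins, dirs, np.array([5.0, 0, 0]), 1.0)
print(hits)
```

Because every ray runs the same arithmetic with no data-dependent branching, this maps directly onto the CUDA thread model the abstract describes.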
Novel speech signal processing algorithms for high-accuracy classification of Parkinson's disease.
Tsanas, Athanasios; Little, Max A; McSharry, Patrick E; Spielman, Jennifer; Ramig, Lorraine O
2012-05-01
There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
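The feature-selection-plus-classification pipeline can be sketched on synthetic stand-in data (a simple correlation filter and linear classifier in place of the paper's feature selection algorithms and random forest/SVM; all numbers below are fabricated for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in data: 263 samples, 132 "dysphonia measures",
# of which only a few carry class information (echoing the paper's
# finding that ~10 selected features sufficed).
n, p, informative = 263, 132, 10
y = (rng.random(n) < 0.5).astype(int)
X = rng.standard_normal((n, p))
X[:, :informative] += 1.5 * y[:, None]    # signal in the first 10 features

# Filter-style feature selection: rank by |correlation with the label|.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
top10 = np.argsort(corr)[-10:]

# Simple linear classifier on the selected subset (a stand-in for the
# random forest / SVM used in the paper).
Xs = X[:, top10]
c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
w = c1 - c0
thresh = (c0 + c1) @ w / 2
pred = (Xs @ w > thresh).astype(int)
acc = (pred == y).mean()
print(acc)
```

Note this measures in-sample accuracy only; a faithful evaluation would cross-validate the whole pipeline, including the feature-selection step, as the study does.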