Science.gov

Sample records for acuros xb algorithm

  1. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum clarifies some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above-mentioned paper by Vassiliev et al. The intention of our description was to give readers an overview of the algorithm, not to claim authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have read as follows: on page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was…
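
    For context, the linear Boltzmann transport equation (LBTE) that grid-based solvers such as Acuros discretize can be written schematically in steady-state form as shown below. This is a textbook form given purely for orientation; the exact coupled photon-electron system and its source terms are specified in Vassiliev et al (2010), not here.

    \[
    \hat{\Omega} \cdot \nabla \psi(\vec{r}, E, \hat{\Omega}) + \sigma_t(\vec{r}, E)\, \psi(\vec{r}, E, \hat{\Omega}) = \int_0^{\infty}\! dE' \int_{4\pi}\! d\hat{\Omega}'\, \sigma_s(\vec{r}, E' \to E, \hat{\Omega}' \cdot \hat{\Omega})\, \psi(\vec{r}, E', \hat{\Omega}') + q(\vec{r}, E, \hat{\Omega})
    \]

    Here \(\psi\) is the angular fluence, \(\sigma_t\) and \(\sigma_s\) are the macroscopic total and differential scattering cross sections, and \(q\) is the source term. Deterministic solvers discretize space, energy, and angle and solve the resulting linear system; Monte Carlo codes sample the same transport physics stochastically.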

  2. Effect of Acuros XB algorithm on monitor units for stereotactic body radiotherapy planning of lung cancer

    SciTech Connect

    Khan, Rao F.; Villarreal-Barajas, Eduardo; Lau, Harold; Liu, Hong-Wei

    2014-04-01

    Stereotactic body radiotherapy (SBRT) is a curative regimen that uses hypofractionated radiation-absorbed dose to achieve a high degree of local control in early-stage non-small cell lung cancer (NSCLC). In the presence of heterogeneities, the dose calculation for the lungs becomes challenging. We have evaluated the dosimetric effect of the recently introduced advanced dose-calculation algorithm, Acuros XB (AXB), for SBRT of NSCLC. A total of 97 patients with early-stage lung cancer who underwent SBRT at our cancer center during the last 4 years were included. Initial clinical plans were created in Aria Eclipse version 8.9 or prior, using 6 to 10 fields with 6-MV beams, and dose was calculated using the anisotropic analytic algorithm (AAA) as implemented in the Eclipse treatment planning system. The clinical plans were recalculated in Aria Eclipse 11.0.21 using both the AAA and AXB algorithms. Both sets of plans were normalized to the same prescription point at the center of mass of the target. A secondary monitor unit (MU) calculation was performed using the commercial program RadCalc for all of the fields. For planning target volumes ranging from 19 to 375 cm3, a comparison of MUs was performed for both sets of algorithms on a field and plan basis. In total, the variation of MUs for 677 treatment fields was investigated in terms of the equivalent depth and the equivalent square of the field. Overall, the MUs required by AXB to deliver the prescribed dose are on average 2% higher than those for AAA. Using a 2-tailed paired t-test, the MUs from the 2 algorithms were found to be significantly different (p < 0.001). The secondary independent MU calculator RadCalc underestimates the required MUs (on average by 4% to 5%) in the lung relative to either of the 2 dose algorithms.
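
    As a rough illustration of the statistics used here, the sketch below applies a two-tailed paired t-test to per-field MU pairs and computes a Sterling equivalent square for a rectangular field. All field values are placeholders, not data from the study.

    ```python
    # Sketch: paired comparison of per-field monitor units (MUs) from two dose
    # algorithms, plus the Sterling equivalent-square approximation mentioned
    # above. All numbers below are hypothetical placeholders.
    import numpy as np
    from scipy import stats

    mu_aaa = np.array([112.4, 98.7, 105.1, 120.3, 99.8])    # hypothetical MUs (AAA)
    mu_axb = np.array([114.9, 100.2, 107.5, 123.0, 101.6])  # hypothetical MUs (AXB)

    print(f"mean AXB/AAA MU ratio: {np.mean(mu_axb / mu_aaa):.3f}")

    t_stat, p_value = stats.ttest_rel(mu_axb, mu_aaa)  # two-tailed paired t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    a, b = 8.0, 12.0                                   # hypothetical field sides (cm)
    print(f"equivalent square: {2 * a * b / (a + b):.1f} cm")  # Sterling formula
    ```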

  3. Dosimetric impact of Acuros XB deterministic radiation transport algorithm for heterogeneous dose calculation in lung cancer

    SciTech Connect

    Han, Tao; Followill, David; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mikell, Justin; Mourtada, Firas

    2013-05-15

    Purpose: The novel deterministic radiation transport algorithm, Acuros XB (AXB), has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact of AXB relative to other currently used algorithms still needs to be elucidated for translation between these algorithms. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. TLD measurements were within 0.4%-4.4% of AXB doses (both Dm,m and Dw,m) and within 2.5%-6.4% of AAA doses, respectively. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB_Dm,m, and AXB_Dw,m, respectively. The differences between AXB and AAA in dose-volume histogram mean doses were within 2% in the planning target volume, lung, and heart, and within 5% in the spinal cord.

  4. Dosimetric impact of Acuros XB deterministic radiation transport algorithm for heterogeneous dose calculation in lung cancer

    PubMed Central

    Han, Tao; Followill, David; Mikell, Justin; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mourtada, Firas

    2013-01-01

    Purpose: The novel deterministic radiation transport algorithm, Acuros XB (AXB), has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact of AXB relative to other currently used algorithms still needs to be elucidated for translation between these algorithms. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. TLD measurements were within 0.4%–4.4% of AXB doses (both Dm,m and Dw,m) and within 2.5%–6.4% of AAA doses, respectively. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB_Dm,m, and AXB_Dw,m, respectively. The differences between AXB and AAA in dose–volume histogram mean doses were within 2% in the planning target volume, lung, and heart, and within 5% in the spinal cord. However, differences up to 8…
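
    The two AXB reporting modes compared above (Dm,m and Dw,m) are related, to a good approximation, by the water-to-medium mass collision stopping-power ratio averaged over the local electron fluence spectrum. The Bragg-Gray-style relation below is a standard approximation from the literature (e.g., Siebers et al 2000), not a formula stated in this abstract:

    \[
    D_{w,m} \approx D_{m,m} \left( \frac{\bar{S}}{\rho} \right)^{w}_{m}, \qquad \left( \frac{\bar{S}}{\rho} \right)^{w}_{m} = \frac{\int \Phi_E \, (S/\rho)_{w} \, dE}{\int \Phi_E \, (S/\rho)_{m} \, dE}
    \]

    The ratio is close to unity in soft tissue but deviates by several percent in bone, which is consistent with the larger Dm,m versus Dw,m differences these studies report in bony regions.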

  5. The accuracy of Acuros XB algorithm for radiation beams traversing a metallic hip implant - comparison with measurements and Monte Carlo calculations.

    PubMed

    Ojala, Jarkko; Kapanen, Mika; Sipilä, Petri; Hyödynmaa, Simo; Pitkänen, Maunu

    2014-01-01

    In this study, the clinical benefit of the improved accuracy of the Acuros XB (AXB) algorithm, implemented in a commercial radiotherapy treatment planning system (TPS), Varian Eclipse, was demonstrated with beams traversing a high-Z material. This is also the first study to assess the accuracy of the AXB algorithm for the volumetric modulated arc therapy (VMAT) technique against full Monte Carlo (MC) simulations. In the first phase, the AXB algorithm was benchmarked against point dosimetry, film dosimetry, and full MC calculation in a water-filled anthropometric phantom with a unilateral hip implant. The validity of the full MC calculation used as the reference method was also demonstrated. The dose calculations were performed both in the original computed tomography (CT) dataset, which included artifacts, and in a corrected CT dataset, in which a constant Hounsfield unit (HU) value was assigned to each material. In the second phase, a clinical treatment plan was prepared for a prostate cancer patient with a unilateral hip implant. The plan applied a hybrid VMAT technique that included partial arcs avoiding the implant and static beams traversing it. Ultimately, the AXB-calculated dose distribution was compared to a recalculation by the full MC simulation to assess the accuracy of the AXB algorithm in a clinical setting. A recalculation with the anisotropic analytical algorithm (AAA) was also performed to quantify the benefit of the improved dose calculation accuracy of a type 'c' algorithm (AXB) over a type 'b' algorithm (AAA). The agreement between the AXB algorithm and the full MC model was very good inside and in the vicinity of the implant, as well as elsewhere, which verifies the accuracy of the AXB algorithm for patient plans with beams traversing a high-Z material, whereas the AAA produced larger discrepancies. PMID:25207577
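
    A minimal sketch of the CT-correction step described above (constant HU assignment per material) is given below. The masks, HU values, and array shapes are all assumptions for illustration; in practice the implant and artifact regions are contoured in the TPS.

    ```python
    # Sketch: overriding CT voxels with constant HU values per material, as in
    # the corrected-CT dataset described above. Masks and HU values are
    # hypothetical; real workflows contour these regions in the TPS.
    import numpy as np

    rng = np.random.default_rng(1)
    ct = rng.integers(-1000, 1500, size=(64, 64, 64)).astype(np.int16)  # fake CT

    implant_mask = np.zeros(ct.shape, dtype=bool)
    implant_mask[20:30, 20:30, 20:30] = True       # hypothetical implant contour
    artifact_mask = np.zeros(ct.shape, dtype=bool)
    artifact_mask[15:35, 10:45, 18:32] = True      # hypothetical streak-artifact contour
    artifact_mask &= ~implant_mask

    HU_IMPLANT = 3000  # assumed constant HU for the metal (scanner/TPS dependent)
    HU_TISSUE = 0      # water-equivalent HU for artifact-corrupted soft tissue

    ct_corrected = ct.copy()
    ct_corrected[implant_mask] = HU_IMPLANT
    ct_corrected[artifact_mask] = HU_TISSUE
    ```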

  6. SU-E-T-313: The Accuracy of the Acuros XB Advanced Dose Calculation Algorithm for IMRT Dose Distributions in Head and Neck

    SciTech Connect

    Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K

    2014-06-01

    Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparison with Monte Carlo (MC) calculations. The comparisons were performed with dose distributions for a virtual inhomogeneity phantom and intensity-modulated radiotherapy (IMRT) in head and neck. Methods: Recently, AXB, based on the linear Boltzmann transport equation, has been installed in the Eclipse treatment planning system (Varian Medical Systems, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction for the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for head and neck were compared among the AXB and AAA algorithms and MC by means of dose volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: For dose distributions with the virtual inhomogeneity phantom, AXB was in good agreement with MC, except for the dose in the air region. The dose in the air region decreased in the order AXB10 > AXB11 > MC: 1.011 MeV for AXB10, 0.711 MeV for AXB11, and 0.700 MeV for MC. Since the AAA algorithm is based on dose kernels in water, the doses in the air, bone, and aluminum regions became considerably higher than those of AXB and MC. The pass rates of the gamma analysis for IMRT dose distributions in head and neck were similar to those of MC in the order AXB11…

  7. SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm

    SciTech Connect

    Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M

    2014-06-01

    Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB dose calculation algorithm and subsequently to evaluate its clinical impact by comparison with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central-axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes for a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time is reduced by at least 50% by Acuros XB compared with AAA on single fields and VMAT plans. When used for open 6 MV photon beams on a homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When a heterogeneous phantom was used, Acuros XB also showed improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.

  8. Dosimetric Impact of Using the Acuros XB Algorithm for Intensity Modulated Radiation Therapy and RapidArc Planning in Nasopharyngeal Carcinomas

    SciTech Connect

    Kan, Monica W.K.; Leung, Lucullus H.T.; Yu, Peter K.N.

    2013-01-01

    Purpose: To assess the dosimetric implications for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy with RapidArc (RA) of nasopharyngeal carcinomas (NPC) of using the Acuros XB (AXB) algorithm versus the anisotropic analytical algorithm (AAA). Methods and Materials: Nine-field sliding-window IMRT and triple-arc RA plans produced for 12 patients with NPC using AAA were recalculated using AXB. The dose distributions to multiple planning target volumes (PTVs) with different prescribed doses and to critical organs were compared. The PTVs were separated into components in bone, air, and tissue. The change of doses by AXB due to air and bone, and the variation of the amount of dose change with the number of fields, were also studied using simple geometric phantoms. Results: Using AXB instead of AAA, the averaged mean dose to PTV70 (70 Gy was prescribed to PTV70) was found to be 0.9% and 1.2% lower for IMRT and RA, respectively. It was approximately 1% lower in tissue, 2% lower in bone, and 1% higher in air. The averaged minimum dose to PTV70 in bone was approximately 4% lower for both IMRT and RA, whereas it was approximately 1.5% lower for PTV70 in tissue. The decrease in target doses estimated by AXB was mostly attributable to the presence of bone, less to tissue, and not at all to air. A similar trend was observed for PTV60 (60 Gy was prescribed to PTV60). The doses to most serial organs were found to be 1% to 3% lower, and to other organs 4% to 10% lower, for both techniques. Conclusions: The use of the AXB algorithm is highly recommended for IMRT and RapidArc planning for NPC cases.

  9. Experimental validation of deterministic Acuros XB algorithm for IMRT and VMAT dose calculations with the Radiological Physics Center's head and neck phantom

    SciTech Connect

    Han, Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca

    2012-04-15

    Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H&N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H&N phantom in the Eclipse treatment planning system (version 10.0) by using RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and the anisotropic analytical algorithm (AAA) 10.0.24. Two dose report modes of AXB were recorded: dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m). Each treatment plan was delivered to the RPC phantom three times for reproducibility by using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic® EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results of AXB_Dm,m (0.1% to 3.6%) were slightly better than AAA (0.2% to 4.6%) or AXB_Dw,m (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB_Dm,m met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4-6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H&N phantom. Compared with AAA…
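
    The 2D gamma analysis used in these film comparisons combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. The brute-force sketch below illustrates the standard global-normalization gamma index on a shared grid; it is for clarity only (edges wrap and the grid is not resampled), not the QA implementation used in the study.

    ```python
    # Brute-force 2D gamma-index sketch (global normalization). Illustrative
    # only: real tools resample finely and handle edges; np.roll wraps at edges.
    import numpy as np

    def gamma_pass_rate(ref, ev, spacing_mm, dd_pct=3.0, dta_mm=3.0, cutoff=0.10):
        """Percent of above-threshold reference pixels with gamma <= 1."""
        dd = dd_pct / 100.0 * ref.max()                # global dose-difference criterion
        gamma_sq = np.full(ref.shape, np.inf)
        reach = int(np.ceil(2 * dta_mm / spacing_mm))  # search radius of 2x DTA
        for dy in range(-reach, reach + 1):
            for dx in range(-reach, reach + 1):
                dist_sq = (dy * dy + dx * dx) * spacing_mm ** 2
                if dist_sq > (2 * dta_mm) ** 2:
                    continue
                shifted = np.roll(ev, (dy, dx), axis=(0, 1))
                cand = (shifted - ref) ** 2 / dd ** 2 + dist_sq / dta_mm ** 2
                gamma_sq = np.minimum(gamma_sq, cand)
        mask = ref > cutoff * ref.max()                # ignore the low-dose region
        return 100.0 * np.mean(np.sqrt(gamma_sq[mask]) <= 1.0)

    # Toy usage: a synthetic Gaussian "field" versus a 2% perturbed copy.
    y, x = np.mgrid[-50:50, -50:50].astype(float)
    ref = np.exp(-(x ** 2 + y ** 2) / (2 * 20.0 ** 2))
    print(gamma_pass_rate(ref, 1.02 * ref, spacing_mm=1.0))  # ~100% at 3%/3 mm
    ```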

  10. Dosimetric accuracy and clinical quality of Acuros XB and AAA dose calculation algorithm for stereotactic and conventional lung volumetric modulated arc therapy plans

    PubMed Central

    2013-01-01

    Introduction: The main aim of the current study was to assess the dosimetric accuracy and clinical quality of volumetric modulated arc therapy (VMAT) plans for stereotactic (stage I) and conventional (stage III) lung cancer treatments planned with the Eclipse version 10.0 Anisotropic Analytical Algorithm (AAA) and Acuros XB (AXB) algorithm. Methods: The dosimetric impact of using AAA instead of AXB, and a grid size of 2.5 mm instead of 1.0 mm, for VMAT treatment plans was evaluated. The clinical plan quality of AXB VMAT was assessed using 45 stage I and 73 stage III patients, and was compared with published results for plans using VMAT and hybrid-VMAT techniques. Results: The dosimetric impact on the near-minimum PTV dose (D98%) of using AAA instead of AXB was large (underdose up to 12.3%) for stage I and very small (underdose up to 0.8%) for stage III lung treatments. There were no significant differences in dose volume histogram (DVH) values between grid sizes. The calculation time was significantly longer for the AXB 1.0 mm grid size than for 2.5 mm (p < 0.01). The clinical quality of the VMAT plans was at least comparable with the clinical quality reported in the literature for lung treatment plans with VMAT and hybrid-VMAT techniques. The average mean lung dose (MLD), lung V20Gy, and V5Gy in this study were, respectively, 3.6 Gy, 4.1%, and 15.7% for the 45 stage I patients and 12.4 Gy, 19.3%, and 46.6% for the 73 stage III lung patients. The average contralateral lung dose V5Gy-cont was 35.6% for stage III patients. Conclusions: For stereotactic and conventional lung treatments, VMAT calculated with AXB at a 2.5 mm grid size resulted in accurate dose calculations. No hybrid technique was needed to meet the dose constraints. AXB is recommended instead of AAA to avoid serious overestimation of the minimum target doses compared to the actually delivered dose. PMID:23800024
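
    The metrics quoted here (D98%, MLD, V20Gy, V5Gy) are all simple functionals of the voxel-dose distribution within a structure. A minimal sketch follows, using a synthetic dose array rather than any data from the study.

    ```python
    # Sketch: computing DVH-style metrics (D98%, mean lung dose, V20Gy, V5Gy)
    # from a flattened array of voxel doses for one structure. The dose array
    # here is synthetic, purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    lung_dose = rng.gamma(shape=2.0, scale=2.0, size=100_000)  # hypothetical voxel doses (Gy)

    def D(pct, doses):
        """Dose received by at least pct% of the volume (e.g., D98%)."""
        return np.percentile(doses, 100.0 - pct)

    def V(threshold_gy, doses):
        """Percent of the structure volume receiving >= threshold_gy."""
        return 100.0 * np.mean(doses >= threshold_gy)

    print(f"MLD   = {lung_dose.mean():.1f} Gy")
    print(f"D98%  = {D(98, lung_dose):.1f} Gy")
    print(f"V20Gy = {V(20, lung_dose):.1f} %")
    print(f"V5Gy  = {V(5, lung_dose):.1f} %")
    ```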

  11. Experimental validation of deterministic Acuros XB algorithm for IMRT and VMAT dose calculations with the Radiological Physics Center’s head and neck phantom

    PubMed Central

    Han, Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca

    2012-01-01

    Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H&N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H&N phantom in the Eclipse treatment planning system (version 10.0) by using RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and anisotropic analytical algorithm (AAA) 10.0.24. Two dose report modes of AXB were recorded: dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m). Each treatment plan was delivered to the RPC phantom three times for reproducibility by using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic® EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results of AXB_Dm,m (0.1% to 3.6%) were slightly better than AAA (0.2% to 4.6%) or AXB_Dw,m (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB_Dm,m met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4–6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H&N phantom. Compared with AAA, AXB results were equal to or better than those…

  12. SU-E-T-481: Dosimetric Comparison of Acuros XB and Anisotropic Analytic Algorithm with Commercial Monte Carlo Based Dose Calculation Algorithm for Stereotactic Body Radiation Therapy of Lung Cancer

    SciTech Connect

    Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D

    2014-06-01

    Purpose: To evaluate the performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: the Boltzmann transport equation based algorithm Acuros XB (AXB), the convolution based Anisotropic Analytic Algorithm (AAA), and the Monte Carlo based algorithm XVMC. Methods: A total of 10 patients with early-stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC-based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) were compared between the algorithms. Results: The average PTV volume was 19.6 mL (range, 4.2–47.2 mL). The volume of PTV covered by the prescribed dose (PTV-V100) was 93.97±2.00%, 95.07±2.07%, and 95.10±2.97% for the XVMC, AXB, and AAA algorithms, respectively. There was no significant difference in the high-dose conformity index; however, XVMC predicted slightly higher values (p=0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volume of total lung receiving dose >20 Gy (lung V20Gy) was 4.03±2.26%, 3.86±2.22%, and 3.85±2.21% for the XVMC, AXB, and AAA algorithms. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage than the AXB and XVMC algorithms in two cases. The dose difference was found to be primarily located in the peripheral region of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery. XVMC…
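
    For reference, R50% as used in RTOG-style lung SBRT protocols is the intermediate-dose spillage ratio; the abstract does not spell the definition out, but the usual form is:

    \[
    R_{50\%} = \frac{V_{50\%\,\mathrm{Rx}}}{V_{\mathrm{PTV}}}
    \]

    where \(V_{50\%\,\mathrm{Rx}}\) is the volume enclosed by the 50% prescription isodose surface and \(V_{\mathrm{PTV}}\) is the planning target volume.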

  13. Difference in dose-volumetric data between the analytical anisotropic algorithm, the dose-to-medium, and the dose-to-water reporting modes of the Acuros XB for lung stereotactic body radiation therapy.

    PubMed

    Mampuya, Wambaka A; Nakamura, Mitsuhiro; Hirose, Yoshinori; Kitsuda, Kenji; Ishigaki, Takashi; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this study was to evaluate the difference in dose-volumetric data between the analytical anisotropic algorithm (AAA) and the two dose reporting modes of the Acuros XB, namely, dose to water (AXB_Dw) and dose to medium (AXB_Dm), in lung stereotactic body radiotherapy (SBRT). Thirty-eight plans were generated using AXB_Dm in the Eclipse Treatment Planning System (TPS) and then recalculated with AXB_Dw and AAA, using an identical beam setup. A dose of 50 Gy in 4 fractions was prescribed to the isocenter and to the planning target volume (PTV) D95%. The isocenter was always inside the PTV. The following dose-volumetric parameters were evaluated: D2%, D50%, D95%, and D98% for the internal target volume (ITV) and the PTV. Two-tailed paired Student's t-tests determined statistical significance. Although for most of the parameters evaluated the mean differences observed between AAA, AXB_Dm, and AXB_Dw were statistically significant (p < 0.05), the absolute differences were rather small, in general less than 5 percentage points. The maximum mean difference was observed in the ITV D50% between AXB_Dm and AAA and was 1.7 percentage points under the isocenter prescription and 3.3 percentage points under the D95% prescription. AXB_Dm produced higher values than AXB_Dw, with differences ranging from 0.4 to 1.1 percentage points under the isocenter prescription and 0.0 to 0.7 percentage points under the PTV D95% prescription. The differences between AXB_Dm and AAA, AXB_Dm and AXB_Dw, and AXB_Dw and AAA were larger under the PTV D95% prescription than under the isocenter prescription. Although statistically significant, the mean differences between the three algorithms are within 3.3 percentage points. PMID:27685138

  14. From AAA to Acuros XB - clinical implications of selecting either Acuros XB dose-to-water or dose-to-medium.

    PubMed

    Zifodya, Jackson M; Challens, Cameron H C; Hsieh, Wen-Long

    2016-06-01

    When implementing Acuros XB (AXB) as a substitute for the anisotropic analytic algorithm (AAA) in the Eclipse Treatment Planning System, one is faced with a dilemma: reporting either dose to medium, AXB-Dm, or dose to water, AXB-Dw. To assist with the decision on selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast, and lung is presented. Ten patients previously treated using AAA plans were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MUs) as well as non-fixed MUs. Dose volume histograms (DVHs) of targets and organs at risk (OARs) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated dose to the target volume and OARs by less than 2.0%. Comparisons between AXB-Dw and AXB-Dm, for all sites, also showed overall dose differences to be small (<1.5%). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6% were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135% of prescription dose) for target volumes containing high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and lung sites. MU comparison showed insignificant differences between AXB-Dw and AAA and between AXB-Dw and AXB-Dm. However, AXB-Dm MUs relative to AAA showed an average difference of about 1.3%, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, when…
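
    The homogeneity index referenced alongside ICRU-83 here is commonly defined as follows (the abstract does not state its exact formula, so this is the usual ICRU-83 form):

    \[
    \mathrm{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}
    \]

    where D2%, D98%, and D50% are the near-maximum, near-minimum, and median doses to the target volume; values closer to 0 indicate a more homogeneous target dose.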

  15. Dosimetric comparison of Acuros XB, AAA, and XVMC in stereotactic body radiotherapy for lung cancer

    SciTech Connect

    Tsuruta, Yusuke; Nakata, Manabu; Higashimura, Kyoji; Nakamura, Mitsuhiro; Matsuo, Yukinori; Monzen, Hajime; Mizowaki, Takashi; Hiraoka, Masahiro

    2014-08-15

    Purpose: To compare the dosimetric performance of Acuros XB (AXB), the anisotropic analytical algorithm (AAA), and x-ray voxel Monte Carlo (XVMC) in heterogeneous phantoms and lung stereotactic body radiotherapy (SBRT) plans. Methods: Water- and lung-equivalent phantoms were combined to evaluate the percentage depth dose and dose profiles. The Novalis radiation treatment machine (BrainLab AG, Feldkirchen, Germany) with an x-ray beam energy of 6 MV was used to calculate the doses in the composite phantom at a source-to-surface distance of 100 cm with a gantry angle of 0°. Subsequently, the clinical lung SBRT plans for 26 consecutive patients were transferred from iPlan (ver. 4.1; BrainLab AG) to the Eclipse treatment planning system (ver. 11.0.3; Varian Medical Systems, Palo Alto, CA). The doses were then recalculated with AXB and AAA while maintaining the XVMC-calculated monitor units and beam arrangement. The dose-volumetric data obtained using the three different dose calculation algorithms were then compared. Results: The results from AXB and XVMC agreed with measurements within ±3.0% for the lung-equivalent phantom with a 6 × 6 cm2 field size, whereas AAA values were higher than measurements in the heterogeneous zone and near the boundary, with the greatest difference being 4.1%. AXB and XVMC agreed well with measurements in terms of the profile shape at the boundary of the heterogeneous zone. For the lung SBRT plans, AXB yielded lower values than XVMC for the maximum doses of the ITV and PTV; however, the differences were within ±3.0%. In addition to the dose-volumetric data, the dose distribution analysis showed that AXB yielded dose distributions closer to those of XVMC than did AAA. The mean ± standard deviation of the computation time was 221.6 ± 53.1 s (range, 124–358 s), 66.1 ± 16.0 s (range, 42–94 s), and 6.7 ± 1.1 s (range, 5–9 s) for XVMC, AXB, and AAA, respectively. Conclusions: In the…

  16. SU-E-T-137: Dosimetric Validation for Pinnacle, Acuros, AAA, and Brainlab Algorithms with Induced Inhomogeneities

    SciTech Connect

    Lopez, P; Tambasco, M; LaFontaine, R; Burns, L

    2014-06-01

    Purpose: To compare the dosimetric accuracy of the Eclipse 11.0 Acuros XB and Anisotropic Analytical Algorithm (AAA), the Pinnacle3 9.2 Collapsed Cone Convolution, and the iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms, using measurement as the gold standard. Methods: Ion chamber and diode measurements were taken for 6, 10, and 18 MV beams in a phantom made up of slabs with densities corresponding to solid water, lung, and bone. The phantom was set up at a source-to-surface distance of 100 cm, and the field sizes were 3.0 × 3.0, 5.0 × 5.0, and 10.0 × 10.0 cm2. Data from the planning systems were computed along the central axis of the beam. The measurements were taken using a pinpoint chamber and an edge diode for interface regions. Results: The best agreement between data from the algorithms and our measurements occurs away from the slab interfaces. For the 6 MV beam, the iPlan 4.1 MC software performs best, with a 1.7% absolute average percent difference from measurement. For the 10 MV beam, iPlan 4.1 PB performs best, with a 2.7% absolute average percent difference from measurement. For the 18 MV beam, Acuros performs best, with a 2.0% absolute average percent difference from measurement. It is interesting to note that the steepest drop in dose occurred at the lung heterogeneity-solid water interface of the 18 MV, 3.0 × 3.0 cm2 field size setup. In this situation, Acuros and AAA performed best, with an average percent difference within -1.1% of measurement, followed by iPlan 4.1 MC, which was within 4.9%. Conclusion: This study shows that all of the algorithms perform reasonably well in computing dose in a heterogeneous slab phantom. Moreover, Acuros and AAA perform particularly well at the lung-solid water interfaces for higher-energy beams and small field sizes.

  17. Experimental verification of the Acuros XB and AAA dose calculation adjacent to heterogeneous media for IMRT and RapidArc of nasopharyngeal carcinoma

    SciTech Connect

    Kan, Monica W. K.; Leung, Lucullus H. T.; So, Ronald W. K.; Yu, Peter K. N.

    2013-03-15

    Purpose: To compare the doses calculated by the Acuros XB (AXB) algorithm and the analytical anisotropic algorithm (AAA) with experimentally measured data adjacent to and within heterogeneous media, using intensity modulated radiation therapy (IMRT) and RapidArc® (RA) volumetric arc therapy plans for nasopharyngeal carcinoma (NPC). Methods: Two-dimensional dose distributions immediately adjacent to both the air and bone inserts of a rectangular tissue-equivalent phantom irradiated using IMRT and RA plans for NPC cases were measured with GafChromic® EBT3 films. Doses near and within the nasopharyngeal (NP) region of an anthropomorphic phantom containing heterogeneous media were also measured with thermoluminescent dosimeters (TLDs) and EBT3 films. The measured data were then compared with the data calculated by AAA and AXB. For AXB, dose calculations were performed using both the dose-to-medium (AXB_Dm) and dose-to-water (AXB_Dw) options. Furthermore, target dose differences between AAA and AXB were analyzed for the corresponding real patients. The comparison of real patient plans was performed by stratifying the targets into components of different densities, including tissue, bone, and air. Results: For the verification of the planar dose distribution adjacent to air and bone using the rectangular phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 98.7%, 99.5%, and 97.7% on the axial plane for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans, while they were 97.6%, 98.2%, and 97.7%, respectively, on the coronal plane. For the verification of the planar dose distribution within the NP region of the anthropomorphic phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 95.1%, 91.3%, and 99.0% for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans. Within the NP region where…

  18. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogeneous Phantom Using Acuros XB and EGSnrc

    SciTech Connect

    Soh, R; Lee, J; Harianto, F

    2014-06-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm2 small field in a lung-heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation caused by TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm3, 2.64 g/cm3) in a small-field lung medium for stereotactic body radiation therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm3, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung-substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with a field size of 2 × 2 cm2 was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSXYZnrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material compositions for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm2 small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. The correction factors obtained by EGSnrc are expected to be more accurate, as EGSnrc is able to simulate the actual phantom material compositions. AXB has a limited material library, so it only approximates the compositions of the TLD, composite cork, and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: The correction factors obtained by EGSnrc are expected to be more accurate. Studies will be done to investigate the correction factors at higher energies, where the perturbation may be more pronounced.
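
    As defined in this abstract, the detector correction factor is simply the ratio of the dose the medium would receive without the detector to the dose scored in the perturbed geometry:

    \[
    k_{\mathrm{TLD}} = \frac{D_{\mathrm{med}}^{\mathrm{unperturbed}}}{D_{\mathrm{TLD}}^{\mathrm{perturbed}}}
    \]

    A value of 0.9 therefore means the TLD over-responds by roughly 11% in this small-field, low-density geometry, and the measured reading would be multiplied by \(k_{\mathrm{TLD}}\) to recover the dose to the undisturbed medium.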

  1. Dosimetric comparison of Acuros XB deterministic radiation transport method with Monte Carlo and model-based convolution methods in heterogeneous media

    PubMed Central

    Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas

    2011-01-01

    Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. To further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in the bone and lung regions. The average differences of the depth dose profiles between MC and AXB, AAA, or CCC were within 1.1%, 4.4%, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1%, 6.4%, and 1.6%, and in the lung region up to 0.9%, 11.6%, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes at 6 MV and in the lung region for most field sizes at both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10…

  2. XB-70A_flight

    NASA Video Gallery

    During the 1960s, the XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 was…

  3. XB-70A_takeoff

    NASA Video Gallery

    During the 1960s, the XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 was…

  4. Going the distance: validation of Acuros and AAA at an extended SSD of 400 cm.

    PubMed

    Lamichhane, Narottam; Patel, Vivek N; Studenski, Matthew T

    2016-01-01

    Accurate dose calculation and treatment delivery are essential for total body irradiation (TBI). In an effort to verify the accuracy of TBI dose calculation at our institution, we evaluated both the Varian Eclipse AAA and Acuros algorithms for predicting dose distributions at an extended source-to-surface distance (SSD) of 400 cm. Measurements were compared to calculated values for a 6 MV beam in physical and virtual phantoms at 400 cm SSD using open beams for both 5 × 5 and 40 × 40 cm2 field sizes. Inline and crossline profiles were acquired at equivalent depths of 5 cm, 10 cm, and 20 cm. Depth-dose curves were acquired using EBT2 film and an ion chamber for both field sizes. Finally, a RANDO phantom was used to simulate an actual TBI treatment. At this extended SSD, care must be taken when using the planning system: there is good relative agreement between measured and calculated profiles for both algorithms, but there are deviations in terms of absolute dose. Acuros has better agreement than AAA in the penumbra region. PMID:27074473
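
    As a quick plausibility check on extended-SSD output (separate from the validation performed in the study, and purely an inverse-square estimate that ignores scatter and attenuation changes):

    ```python
    # Rule-of-thumb inverse-square scaling from calibration SSD to TBI distance.
    # Values are assumptions (6 MV, dmax ~1.5 cm); this is only a sanity check,
    # not a dose calculation.
    ref_ssd, ext_ssd, dmax = 100.0, 400.0, 1.5  # cm
    factor = ((ref_ssd + dmax) / (ext_ssd + dmax)) ** 2
    print(f"inverse-square output factor at {ext_ssd:.0f} cm SSD: {factor:.4f}")  # ~0.064
    ```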

  5. XB-70A #1 cockpit

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Photo of the XB-70 #1 cockpit, which shows the complexity of this mid-1960s research aircraft. On the left and right sides of the picture are the pilot's and co-pilot's control yokes. Forward of these, on the cockpit floor, are the rudder pedals with the NAA (North American Aviation) trademark. Between them is the center console. Visible are the six throttles for the XB-70's jet engines. Above this is the center instrument panel. The bottom panel has the wing tip fold, landing gear, and flap controls, as well as the hydraulic pressure gages. In the center are three rows of engine gages. The top row are tachometers, the second are exhaust temperature gages, and the bottom row are exhaust nozzle position indicators. Above these are the engine fire and engine brake switches. The instrument panels for the pilot (left) and co-pilot (right) differ somewhat. Both crewmen have an airspeed/Mach indicator, an altitude/vertical velocity indicator, an artificial horizon, and a heading indicator/compass directly in front of them. The pilot's flight instruments, from top to bottom, are the total heat gage and crew warning lights; stand-by flight instruments (side-slip, artificial horizon, and altitude); the engine vibration indicators; the cabin altitude, ammonia, and water quantity gages; the electronic compartment air temperature gage; and the liquid oxygen quantity gage. At the bottom are the switches for the flight displays and environmental controls. On the co-pilot's panel, the top three rows are for the engine inlet controls. Below this is the fuel tank sequence indicator, which shows the amount of fuel in each tank. The bottom row consists of the fuel pump switches, which were used to shift fuel to maintain the proper center of gravity. Just to the right are the indicators for the total fuel (top) and the individual tanks (bottom). Visible on the right edge of the photo are the refueling valves, while above these are switches for the flight data recording instruments. The XB-70…

  6. [Comparison of dose calculation algorithms in stereotactic radiation therapy in lung].

    PubMed

    Tomiyama, Yuki; Araki, Fujio; Kanetake, Nagisa; Shimohigashi, Yoshinobu; Tominaga, Hirofumi; Sakata, Jyunichi; Oono, Takeshi; Kouno, Tomohiro; Hioki, Kazunari

    2013-06-01

    Dose calculation algorithms in radiation treatment planning systems (RTPSs) play a crucial role in stereotactic body radiation therapy (SBRT) in the lung, with its heterogeneous media. This study investigated the performance and accuracy of dose calculation for three algorithms: the analytical anisotropic algorithm (AAA), pencil beam convolution (PBC), and Acuros XB (AXB) in Eclipse (Varian Medical Systems), by comparison against the voxel Monte Carlo algorithm (VMC) in iPlan (BrainLab). The dose calculations were performed for clinical lung treatments under identical planning conditions, and the dose distributions and dose volume histograms (DVHs) were compared among the algorithms. AAA underestimated the dose in the planning target volume (PTV) compared to VMC and AXB in most clinical plans. In contrast, PBC overestimated the PTV dose. AXB tended to slightly overestimate the PTV dose compared to VMC, but the discrepancy was within 3%. The discrepancy in the PTV dose between VMC and AXB appears to be due to differences in physical material assignments, material voxelization methods, and the energy cut-off for electron interactions. The dose distributions in lung treatments varied significantly according to the calculation accuracy of the algorithms. VMC and AXB are better algorithms than AAA for SBRT. PMID:23782779

  7. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), the anisotropic analytical algorithm (AAA), Monte Carlo (MC), and Acuros XB (AXB). The calculated dose was compared with the measured dose using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region. This was followed by AXB, AAA, SP, and PBC. For IMRT planning, the MC algorithm is recommended for treatment planning in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  8. XB130: A novel adaptor protein in cancer signal transduction

    PubMed Central

    ZHANG, RUIYAO; ZHANG, JINGYAO; WU, QIFEI; MENG, FANDI; LIU, CHANG

    2016-01-01

    Adaptor proteins are functional proteins that contain two or more protein-binding modules that link signaling proteins together; they affect cell growth and shape and have no enzymatic activity. The actin filament-associated protein (AFAP) family is an important group of adaptor proteins, comprising AFAP1, AFAP1L1, and AFAP1L2/XB130. AFAP1 and AFAP1L1 share certain common characteristics and function as actin-binding and cSrc-activating proteins. XB130 exhibits certain unique features in structure and function. The mRNA of XB130 is expressed in human spleen, thyroid, kidney, brain, lung, pancreas, liver, colon, and stomach, and the most prominent disease associated with XB130 is cancer. XB130 has a controversial effect on cancer: studies have shown that XB130 can promote cancer progression, and downregulation of XB130 reduced the growth of tumors derived from certain cell lines. A higher mRNA level of XB130 was shown to be associated with better survival in non-small cell lung cancer. Previous studies have shown that XB130 can regulate cell growth, migration, and invasion, possibly through the cAMP-cSrc-phosphoinositide 3-kinase/Akt pathway. Beyond cancer, XB130 is also associated with other pathological or physiological processes, such as airway repair and regeneration. PMID:26998266

  9. XB-70A landing with drag chutes deployed

    NASA Technical Reports Server (NTRS)

    1960-01-01

    This photo shows the XB-70A #1 rolling out after landing, employing drag chutes to slow down. In the photo, the outer wing panels are slightly raised. When the XB-70 was flying at high speed, the panels were lowered to improve stability. The XB-70 was the world's largest experimental aircraft. It was capable of flight at speeds of three times the speed of sound (roughly 2,000 miles per hour) at altitudes of 70,000 feet. It was used to collect in-flight information for use in the design of future supersonic aircraft, military and civilian. The major objectives of the XB-70 flight research program were to study the airplane's stability and handling characteristics, to evaluate its response to atmospheric turbulence, and to determine the aerodynamic and propulsion performance. In addition, there were secondary objectives to measure the noise and friction associated with airflow over the airplane and to determine the levels and extent of the engine noise during takeoff, landing, and ground operations. The XB-70 was about 186 feet long, 33 feet high, with a wingspan of 105 feet. Originally conceived as an advanced bomber for the United States Air Force, the XB-70 was limited to production of two aircraft when it was decided to limit the aircraft's mission to flight research. The first flight of the XB-70 was made on Sept. 21, 1964. The number two XB-70 was destroyed in a mid-air collision on June 8, 1966. Program management of the NASA-USAF research effort was assigned to NASA in March 1967. The final flight was flown on Feb. 4, 1969. Designed by North American Aviation (later North American Rockwell and still later, a division of Boeing), the XB-70 had a long fuselage with a canard or horizontal stabilizer mounted just behind the crew compartment. It had a sharply swept 65.6-degree delta wing. The outer portion of the wing could be folded down in flight to provide greater lateral-directional stability. The airplane had two windshields. A moveable outer windshield was…

  10. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    SciTech Connect

    Lloyd, S. A. M.; Ansbacher, W.

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented in the Eclipse treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo were used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0-8.0 g/cm3) volume at its center. The algorithms were also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results were compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment the Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2%/1 mm gamma analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing the Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo, while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements…

  11. XB130 expression in human osteosarcoma: a clinical and experimental study.

    PubMed

    Wang, Xiaohui; Wang, Ruiguo; Liu, Zhaolong; Hao, Fengyun; Huang, Hai; Guo, Wenchen

    2015-01-01

    Identifying prognostic factors for osteosarcoma (OS) aids in the selection of patients who require more aggressive management. XB130 is a newly characterized adaptor protein that has been reported to be a prognostic factor in certain tumor types. However, the association between XB130 expression and the prognosis of OS remains unknown. In the present study, we investigated the association between XB130 expression and clinicopathologic features and prognosis in patients suffering from OS, and further investigated its potential role on OS cells in vitro and in vivo. A retrospective immunohistochemical study of XB130 was performed on archival formalin-fixed paraffin-embedded specimens from 60 pairs of osteosarcoma and noncancerous bone tissues, and the expression of XB130 was compared with clinicopathological parameters. We then investigated the effect of XB130 silencing on invasion in vitro and lung metastasis in vivo of a human OS cell line. Immunohistochemical assays revealed that XB130 expression in OS tissues was significantly higher than that in corresponding noncancerous bone tissues (P=0.001). In addition, high XB130 expression more frequently occurred in OS tissues with advanced clinical stage (P=0.002) and positive distant metastasis (P=0.001). Moreover, OS patients with high XB130 expression had significantly shorter overall survival and disease-free survival (both P<0.001) when compared with patients with low expression of XB130. Univariate and multivariate analyses showed that high XB130 expression and distant metastasis were independent poor prognostic factors. We showed that XB130 depletion by RNA interference inhibited invasion of XB130-rich U2OS cells in vitro and lung metastasis in vivo. This is the first study to reveal that XB130 overexpression may be related to the prediction of metastasis potency and poor prognosis for OS patients, suggesting that XB130 may serve as a prognostic marker for the optimization of clinical treatments. Furthermore

  12. Performance of dose calculation algorithms from three generations in lung SBRT: comparison with full Monte Carlo-based dose distributions.

    PubMed

    Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A

    2014-01-01

    The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms--pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB)--implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as a reference. In the first stage, for four patients with central lung tumors, treatment plans using the 3D conformal radiotherapy (CRT) technique with 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with the same number of monitor units (MUs) and identical field settings) using the BEAMnrc and DOSXYZnrc MC codes. The MC-calculated dose distributions were compared to the corresponding AXB-calculated dose distributions to assess the accuracy of the AXB algorithm, against which the other TPS algorithms were then compared. In the second stage, treatment plans were made for ten patients with the 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with the same number of MUs and identical field settings) with the AXB algorithm, then compared to the original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose-volume histogram (DVH) and other parameters to quantitatively assess plan quality. In the first stage, 3D gamma analyses with threshold criteria of 3%/3 mm and 2%/2 mm were also applied. The AXB-calculated dose distributions showed a relatively high level of agreement with the full MC simulation in light of the 3D gamma analysis and DVH comparison, especially with large PTVs, but, with smaller PTVs, larger discrepancies were found. Gamma agreement index (GAI) values between 95.5% and 99.6% were achieved for all the plans with the threshold criteria 3%/3 mm, but 2%/2 mm
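
    Because the comparison is driven by DVH parameters, a cumulative DVH and quantities such as V20 are worth making concrete. A minimal sketch with invented voxel doses (the structure, dose values, and function names are illustrative only):

        import numpy as np

        def cumulative_dvh(dose, levels):
            """Fraction of structure volume receiving at least each dose level."""
            return np.array([(dose >= d).mean() for d in levels])

        rng = np.random.default_rng(0)
        lung_dose = rng.gamma(shape=2.0, scale=4.0, size=10000)  # toy voxel doses, Gy
        levels = np.linspace(0.0, 60.0, 121)                     # 0.5 Gy bins
        dvh = cumulative_dvh(lung_dose, levels)
        v20 = 100.0 * dvh[np.searchsorted(levels, 20.0)]         # V20 in percent
        print(f"V20 = {v20:.1f}% of the volume receives >= 20 Gy")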

  13. SU-E-T-131: Dosimetric Impact and Evaluation of Different Heterogeneity Algorithms in Volumetric Modulated Arc Therapy Plans for Stereotactic Ablative Radiotherapy Lung Treatment with the Flattening Filter Free Beam

    SciTech Connect

    Chung, J; Kim, J; Lee, J; Kim, Y

    2014-06-01

    Purpose: The present study aimed to investigate the dosimetric impact of the anisotropic analytic algorithm (AAA) and the Acuros XB (AXB) algorithm in lung stereotactic ablative radiotherapy planning using flattening filter-free (FFF) beams. Methods: We retrospectively analyzed 10 patients. The dosimetric parameters for the target and organs at risk (OARs) from the treatment plans calculated with these dose calculation algorithms were compared. Technical parameters, such as the computation times and the total monitor units (MUs), were also evaluated. Results: A comparison of DVHs from AXB and AAA showed that the AXB plans produced a higher maximum PTV dose, by an average of 4.40% with statistical significance, but a slightly lower mean PTV dose, by an average of 5.20%, compared to the AAA plans. The maximum dose to the lung was slightly higher in the AXB plans than in the AAA plans. For both algorithms, the values of V5, V10, and V20 for the ipsilateral lung were higher in the AXB plans than in the AAA plans. However, these parameters for the contralateral lung were comparable. The differences in maximum dose for the spinal cord and heart were also small. The computation time of AXB was found to be shorter than that of AAA, with a relative difference of 13.7%. The average number of monitor units (MUs) for all patients was higher in the AXB plans than in the AAA plans. These results indicate that the differences between AXB and AAA are large in heterogeneous regions with low density. Conclusion: AXB provided advantages such as calculation accuracy and reduced computation time in lung stereotactic ablative radiotherapy (SABR) using FFF beams, especially for VMAT planning. Therefore, in dose calculations involving media of different density, careful attention should be paid to the impact of different heterogeneity correction algorithms. The authors report no conflicts of interest.

  14. A summary of XB-70 sonic boom signature data

    NASA Technical Reports Server (NTRS)

    Maglieri, Domenic J.; Sothcott, Victor E.; Keefer, Thomas N., Jr.

    1992-01-01

    A compilation is provided of measured sonic boom signature data derived from 39 supersonic flights (43 passes) of the XB-70 airplane over the Mach number range of 1.11 to 2.92 and an altitude range of 30500 to 70300 ft. These tables represent a convenient hard copy version of available electronic files which include over 300 digitized sonic boom signatures with their corresponding spectra. Also included in the electronic files is information regarding ground track position, aircraft operating conditions, and surface and upper air weather observations for each of the 43 supersonic passes. In addition to the sonic boom signature data, a description is also provided of the XB-70 data base that was placed on electronic files along with a description of the method used to scan and digitize the analog/oscillograph sonic boom signature time histories. Such information is intended to enhance the value and utilization of the electronic files.

  15. Application of the QCD light cone sum rule to tetraquarks: The strong vertices XbXbρ and XcXcρ

    NASA Astrophysics Data System (ADS)

    Agaev, S. S.; Azizi, K.; Sundu, H.

    2016-06-01

    The full version of the QCD light-cone sum rule method is applied to tetraquarks containing a single heavy b or c quark. To this end, investigations of the strong vertices XbXbρ and XcXcρ are performed, where Xb = [su][b̄d̄] and Xc = [su][c̄d̄] are the exotic states built of four quarks of different flavors. The strong coupling constants GXbXbρ and GXcXcρ corresponding to these vertices are found using the ρ-meson leading- and higher-twist distribution amplitudes. In the calculations, Xb and Xc are treated as scalar bound states of a diquark and antidiquark.

  17. Delta-f particle-in-cell simulation of X-B mode conversion

    NASA Astrophysics Data System (ADS)

    Xiang, N.; Cary, J. R.; Barnes, D. C.; Carlsson, J.

    2006-04-01

    A low-noise delta-f particle-in-cell algorithm has been implemented in VORPAL, a massively parallel, hybrid plasma modeling code (C. Nieter and J. R. Cary, J. Comput. Phys. 196, 448 (2004)). This computational method allows us to simulate the mode conversion between the extraordinary (X) wave and the electron Bernstein wave (EBW) in both linear and nonlinear regimes. In the linear regime, it is found that full X-B mode conversion can be obtained for optimized parameters when ω/ωce < 2 (ω is the driving frequency and ωce is the electron cyclotron frequency). No 100% conversion is found for ω/ωce moderately larger than 2. The simulation results agree with the predictions of Ram's theory (Ram and Schultz, Phys. Plasmas 7, 4084 (2000)). The agreement indicates that X-B mode conversion can be well described by the quadratic wave equation based on the cold plasma approximation, and this is consistent with the phase-space picture of mode conversion. It is also shown that the conversion efficiency is significantly affected by the gradient of the magnetic field. When the amplitude of the incident X wave increases, it is shown that the nonlinear self-interaction of the converted EBW gives rise to second harmonic generation at a pump power as low as three orders of magnitude smaller than the electron thermal energy. If the fundamental EBW is sufficiently large, the non-propagating third and fourth harmonic modes are also generated. *The work was supported by DOE Contract No. DE-FG02-04ER54735.
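
    The conversion criterion above involves only the ratio of the driving frequency to the electron cyclotron frequency ωce = eB/m. A minimal sketch of evaluating that ratio (the field and frequency values below are illustrative, not taken from the paper):

        import math

        E_CHARGE = 1.602176634e-19       # elementary charge, C
        M_ELECTRON = 9.1093837015e-31    # electron mass, kg

        def omega_ce(b_field_tesla):
            """Electron cyclotron (angular) frequency eB/m, in rad/s."""
            return E_CHARGE * b_field_tesla / M_ELECTRON

        b = 0.3                          # T, assumed local magnetic field
        f_drive = 15e9                   # Hz, assumed driving frequency
        ratio = 2.0 * math.pi * f_drive / omega_ce(b)
        print(f"omega/omega_ce = {ratio:.2f} -> full conversion window: {ratio < 2}")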

  18. Photon beam dosimetry with EBT3 film in heterogeneous regions: Application to the evaluation of dose-calculation algorithms

    NASA Astrophysics Data System (ADS)

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Byungdo; Cheong, Kwang-Ho

    2014-12-01

    For a better understanding of the accuracy of state-of-the-art radiation therapies, 2-dimensional dosimetry in a patient-like environment is helpful. Therefore, the dosimetry of EBT3 films in non-water-equivalent tissues was investigated, and the accuracy of commercially-used dose-calculation algorithms was evaluated against EBT3 measurements. Dose distributions were measured with EBT3 films for an in-house-designed phantom that contained a lung or a bone substitute, i.e., an air cavity (3 × 3 × 3 cm3) or teflon (2 × 2 × 2 cm3 or 3 × 3 × 3 cm3), respectively. The phantom was irradiated with 6-MV X-rays with field sizes of 2 × 2, 3 × 3, and 5 × 5 cm2. The accuracy of EBT3 dosimetry was evaluated by comparing the measured dose with the dose obtained from Monte Carlo (MC) simulations. A dose to the bone-equivalent material was obtained by multiplying the EBT3 measurements by the stopping power ratio (SPR). The EBT3 measurements were then compared with the predictions from four algorithms: Monte Carlo (MC) in iPlan, Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) in Eclipse, and superposition-convolution (SC) in Pinnacle. For the air cavity, the EBT3 measurements agreed with the MC calculation to within 2% on average. For teflon, the EBT3 measurements differed by 9.297% (±0.9229%) on average from the Monte Carlo calculation before dose conversion, and by 0.717% (±0.6546%) after applying the SPR. The doses calculated by using the MC, AXB, AAA, and SC algorithms for the air cavity differed from the EBT3 measurements on average by 2.174, 2.863, 18.01, and 8.391%, respectively; for teflon, the average differences were 3.447, 4.113, 7.589, and 5.102%. The EBT3 measurements corrected with the SPR agreed with the MC results to within 2% on average both within and beyond the heterogeneities, thereby indicating that EBT3 dosimetry can be used in heterogeneous media. The MC and the AXB dose calculation algorithms exhibited clinically-acceptable accuracy (<5%) in
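
    The SPR correction described above is a single multiplication: the film reports dose-to-water, and the stopping power ratio rescales it to dose in the surrounding medium. A minimal sketch with invented numbers (the SPR value and dose shown are illustrative, not those used in the study):

        def dose_to_medium(dose_to_water, spr_medium_to_water):
            """Rescale a film reading (dose-to-water) to dose in the medium."""
            return dose_to_water * spr_medium_to_water

        film_reading = 2.00    # Gy, hypothetical EBT3 dose-to-water in the insert
        spr = 0.92             # assumed medium/water stopping power ratio
        print(f"dose to medium: {dose_to_medium(film_reading, spr):.2f} Gy")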

  19. Kinetic simulations of X-B and O-X-B mode conversion

    SciTech Connect

    Arefiev, A. V.; Du Toit, E. J.; Vann, R. G. L.; Köhn, A.; Holzhauer, E.; Shevchenko, V. F.

    2015-12-10

    We have performed fully kinetic simulations of X-B and O-X-B mode conversion in one- and two-dimensional setups using the PIC code EPOCH. We have recovered the linear dispersion relation for electron Bernstein waves by employing relatively low-amplitude incoming waves. The setups presented here can be used to study nonlinear regimes of X-B and O-X-B mode conversion.

  20. XB130 deficiency enhances lipopolysaccharide-induced septic response and acute lung injury

    PubMed Central

    Toba, Hiroaki; Tomankova, Tereza; Wang, Yingchun; Bai, Xiaohui; Cho, Hae-Ra; Guan, Zhehong; Adeyi, Oyedele A.; Tian, Feng; Keshavjee, Shaf; Liu, Mingyao

    2016-01-01

    XB130 is a novel oncoprotein that promotes cancer cell survival, proliferation and migration. Its physiological function in vivo is largely unknown. The objective of this study was to determine the role of XB130 in lipopolysaccharide (LPS)-induced septic responses and acute lung injury. LPS was intraperitoneally administered to Xb130 knockout (KO) and wild type (WT) mice. There was a significant weight loss in KO mice at Day 2 and significantly higher disease scores during the 7 days of observation. The levels of tumor necrosis factor-alpha, monocyte chemoattractant protein-1, interleukin-6 and interleukin-10 in the serum were significantly higher in KO mice at Day 2. KO mice showed a significantly higher lung injury score, a higher wet/dry lung weight ratio, more apoptotic cells and fewer proliferative cells in the lung. Macrophage infiltration was significantly elevated in the lung of KO mice. There was a significantly increased number of p-GSK-3β-positive cells in KO mice, which were mainly neutrophils and macrophages. XB130 is expressed in alveolar type I and type II cells in the lung. The expression in these cells was significantly reduced after LPS challenge. XB130 deficiency delayed the recovery from systemic septic responses, and the presence of XB130 in the alveolar epithelial cells may provide protective mechanisms by reducing cell death, promoting cell proliferation, and reducing pulmonary permeability. PMID:27029000

  1. X-15 and XB-70 parked on NASA ramp

    NASA Technical Reports Server (NTRS)

    1967-01-01

    The X-15A-2 with drop tanks and ablative coating is shown parked on the NASA ramp in front of the XB-70. These aircraft represent two different approaches to flight research. The X-15 was a research airplane in the purest sense, whereas the XB-70 was an experimental bomber intended for production but diverted to research when production was cancelled by changes in the Department of Defense's offensive doctrine. The X-15A-2 had been modified from its original configuration with a longer fuselage and drop tanks. To protect it against aerodynamic heating, researchers had coated it with an ablative coating covered by a layer of white paint. These changes allowed the X-15A-2 to reach a maximum speed of Mach 6.7, although it could be sustained for only a brief period. The XB-70, by contrast, was designed for prolonged high-altitude cruise flight at Mach 3. The aircraft's striking shape--with a long forward fuselage, canards, a large delta wing, twin fins, and a box-like engine bay--allowed it to ride its own Mach 3 shockwave, so to speak. A joint NASA-Air Force program used the aircraft to collect data in support of the U.S. supersonic transport (SST) program, which never came to fruition because of environmental concerns. X-15: The X-15 was a rocket-powered aircraft. The original three aircraft were about 50 ft long with a wingspan of 22 ft. The modified #2 aircraft (the X-15A-2) was longer. They were missile-shaped vehicles with unusual wedge-shaped vertical tails, thin stubby wings, and unique side fairings that extended along the side of the fuselage. The X-15 weighed about 14,000 lb empty and approximately 34,000 lb at launch. The XLR-99 rocket engine, manufactured by Thiokol Chemical Corp., was pilot controlled and was rated at 57,000 lb of thrust, although there are indications that it actually achieved up to 60,000 lb. North American Aviation built three X-15 aircraft for the program. The X-15 research aircraft was developed to provide in-flight information and data

  2. XB130 promotes bronchioalveolar stem cell and Club cell proliferation in airway epithelial repair and regeneration

    PubMed Central

    Toba, Hiroaki; Wang, Yingchun; Bai, Xiaohui; Zamel, Ricardo; Cho, Hae-Ra; Liu, Hongmei; Lira, Alonso; Keshavjee, Shaf; Liu, Mingyao

    2015-01-01

    Proliferation of bronchioalveolar stem cells (BASCs) is essential for epithelial repair. XB130 is a novel adaptor protein involved in the regulation of epithelial cell survival, proliferation and migration through the PI3K/Akt pathway. To determine the role of XB130 in airway epithelial injury repair and regeneration, a naphthalene-induced airway epithelial injury model was used with XB130 knockout (KO) mice and their wild type (WT) littermates. In XB130 KO mice, at days 7 and 14, small airway epithelium repair was significantly delayed, with fewer Club cells (previously called Clara cells). CCSP (Club cell secreted protein) mRNA expression was also significantly lower in KO mice at day 7. At day 5, there were significantly fewer proliferative epithelial cells in the KO group, and the number of BASCs significantly increased in WT mice but not in KO mice. At day 7, phosphorylation of Akt, GSK-3β, and the p85α subunit of PI3K was observed in airway epithelial cells in WT mice, but to a much lesser extent in KO mice. Microarray data also suggest that PI3K/Akt-related signals were regulated differently in KO and WT mice. An inhibitory mechanism for cell proliferation and cell cycle progression was suggested in KO mice. XB130 is involved in bronchioalveolar stem cell and Club cell proliferation, likely through the PI3K/Akt/GSK-3β pathway. PMID:26360608

  3. X(3872), Xb, and the χb1(3P) state

    NASA Astrophysics Data System (ADS)

    Karliner, Marek; Rosner, Jonathan L.

    2015-01-01

    We discuss the possible production and discovery channels in e+e- and pp machines of the Xb, the bottomonium counterpart of the X(3872) and the putative isoscalar analogue of the charged bottomoniumlike states Zb discovered by Belle. We suggest that the Xb may be close in mass to the bottomonium state χb1(3P), mixing with it and sharing its decay channels, just as the X(3872) is likely a mixture of a D̄D* molecule and χc1(2P). Consequently, the experiments which reported observing χb1(3P) might have actually discovered the Xb, or a mixture of the two states.

  4. XB-70A #1 liftoff with TB-58A chase aircraft

    NASA Technical Reports Server (NTRS)

    1960-01-01

    This photo shows XB-70A #1 taking off on a research flight, escorted by a TB-58 chase plane. The TB-58 (a prototype B-58 modified as a trainer) had a dash speed of Mach 2. This allowed it to stay close to the XB-70 as it conducted its research maneuvers. When the XB-70 was flying at or near Mach 3, the slower TB-58 could often keep up with it by flying lower and cutting inside the turns in the XB-70's flight path when these occurred. The XB-70 was the world's largest experimental aircraft. It was capable of flight at speeds of three times the speed of sound (roughly 2,000 miles per hour) at altitudes of 70,000 feet. It was used to collect in-flight information for use in the design of future supersonic aircraft, military and civilian. The major objectives of the XB-70 flight research program were to study the airplane's stability and handling characteristics, to evaluate its response to atmospheric turbulence, and to determine the aerodynamic and propulsion performance. In addition there were secondary objectives to measure the noise and friction associated with airflow over the airplane and to determine the levels and extent of the engine noise during takeoff, landing, and ground operations. The XB-70 was about 186 feet long, 33 feet high, with a wingspan of 105 feet. Originally conceived as an advanced bomber for the United States Air Force, the XB-70 was limited to production of two aircraft when it was decided to limit the aircraft's mission to flight research. The first flight of the XB-70 was made on Sept. 21, 1964. The number two XB-70 was destroyed in a mid-air collision on June 8, 1966. Program management of the NASA-USAF research effort was assigned to NASA in March 1967. The final flight was flown on Feb. 4, 1969. Designed by North American Aviation (later North American Rockwell and still later, a division of Boeing) the XB-70 had a long fuselage with a canard or horizontal stabilizer mounted just behind the crew compartment. It had a sharply swept 65

  5. The high coercivity mechanism for Nd16Fe77-xAlxB7 magnets

    NASA Astrophysics Data System (ADS)

    Hu, Jifan; Wang, Yizhong; Feng, Minying; Dai, Daoyang; Wang, Zhenxi; Cao, Yongjing

    1989-10-01

    Nd16Fe77-xAlxB7 (x = 0-7) sintered magnets with a maximum coercivity iHc = 20.5 kOe at x = 5 were obtained. The magnetic anisotropy fields of these magnets decrease with the addition of Al. The initial magnetizing field dependence of the coercivity for Nd16Fe77-xAlxB7 sintered magnets in the thermally demagnetized state was determined. The result clearly indicates that the Nd16Fe77-xAlxB7 sintered magnets are nucleation-hardened. SEM results show that the grain boundary of the main phase in the Nd16Fe77B7 magnet is clear, but not in the Nd16Fe77-xAlxB7 (x = 5) magnet with a high coercivity of 20.5 kOe. The SEM results for the Nd16Fe77-xAlxB7 (x = 5) magnet also show that a new floss-shaped phase is precipitated within the Nd-rich phase. With energy-dispersive X-ray spectra, we determined the composition of this precipitate: Nd:Fe:Al = 76:3.4:20.6. The increase of coercivity iHc with Al can be attributed to better magnetic decoupling of the grains.

  6. Mediators-assisted reductive biotransformation of tetrabromobisphenol-A by Shewanella sp. XB.

    PubMed

    Wang, Jing; Fu, Zhenzhen; Liu, Guangfei; Guo, Ning; Lu, Hong; Zhan, Yaoyao

    2013-08-01

    To date, anaerobic biotransformation of tetrabromobisphenol A (TBBPA) has mainly been observed in consortia. The role of redox mediators in anaerobic TBBPA biotransformation by Shewanella sp., which is widely distributed in the environment, was investigated for the first time. The results showed that flavin secretion by Shewanella sp. XB was highly dependent on the initial TBBPA concentration. The corresponding first-order rate constant (k) of TBBPA transformation decreased to 0.007 d(-1) when the TBBPA concentration increased to 80 mg/L. Moreover, the removal rate of TBBPA (80 mg/L) was significantly enhanced in treatments amended with cyanocobalamin, riboflavin, 2-hydroxy-1,4-naphthoquinone, and Aldrich humic acid, with k values of 0.42, 0.19, 0.16, and 0.07 d(-1), respectively. In addition, some redox proteins were secreted and played a role in the flavin-mediated extracellular biotransformation of TBBPA by Shewanella sp. XB. These findings are beneficial for better understanding the fate of TBBPA in natural environments and for developing efficient biotreatment strategies for TBBPA pollution.
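
    The rate constants quoted above are first-order, so the remaining fraction follows C(t)/C0 = exp(-kt) and the half-life is ln 2 / k. A quick check with the reported k values:

        import math

        for mediator, k in [("cyanocobalamin", 0.42), ("riboflavin", 0.19),
                            ("2-hydroxy-1,4-naphthoquinone", 0.16),
                            ("Aldrich humic acid", 0.07)]:
            t_half = math.log(2) / k        # half-life in days
            left_7d = math.exp(-k * 7.0)    # fraction of TBBPA left after 7 days
            print(f"{mediator}: t1/2 = {t_half:.1f} d, "
                  f"{100 * left_7d:.0f}% remaining after 7 d")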

  7. Measured Sonic Boom Signatures Above and Below the XB-70 Airplane Flying at Mach 1.5 and 37,000 Feet

    NASA Technical Reports Server (NTRS)

    Maglieri, Domenic J.; Henderson, Herbert R.; Tinetti, Ana F.

    2011-01-01

    During the 1966-67 Edwards Air Force Base (EAFB) National Sonic Boom Evaluation Program, a series of in-flight flow-field measurements were made above and below the USAF XB-70 using an instrumented NASA F-104 aircraft with a specially designed nose probe. These were accomplished in the three XB-70 flights at about Mach 1.5 at about 37,000 ft. and gross weights of about 350,000 lbs. Six supersonic passes with the F-104 probe aircraft were made through the XB-70 shock flow-field; one above and five below the XB-70. Separation distances ranged from about 3000 ft. above and 7000 ft. to the side of the XB-70 and about 2000 ft. and 5000 ft. below the XB-70. Complex near-field "sawtooth-type" signatures were observed in all cases. At ground level, the XB-70 shock waves had not coalesced into the two-shock classical sonic boom N-wave signature, but contained three shocks. Included in this report is a description of the generating and probe airplanes, the in-flight and ground pressure measuring instrumentation, the flight test procedure and aircraft positioning, surface and upper air weather observations, and the six in-flight pressure signatures from the three flights.

  8. Valence fluctuations of europium in the boride Eu4Pd(29+x)B8.

    PubMed

    Gumeniuk, Roman; Schnelle, Walter; Ahmida, Mahmoud A; Abd-Elmeguid, Mohsen M; Kvashnina, Kristina O; Tsirlin, Alexander A; Leithe-Jasper, Andreas; Geibel, Christoph

    2016-03-23

    We synthesized a high-quality sample of the boride Eu4Pd(29+x)B8 (x  =  0.76) and studied its structural and physical properties. Its tetragonal structure was solved by direct methods and confirmed to belong to the Eu4Pd29B8 type. All studied physical properties indicate a valence fluctuating Eu state, with a valence decreasing continuously from about 2.9 at 5 K to 2.7 at 300 K. Maxima in the T dependence of the susceptibility and thermopower at around 135 K and 120 K, respectively, indicate a valence fluctuation energy scale on the order of 300 K. Analysis of the magnetic susceptibility evidences some inconsistencies when using the ionic interconfigurational fluctuation (ICF) model, thus suggesting a stronger relevance of hybridization between 4f and valence electrons compared to standard valence-fluctuating Eu systems.

  9. Vibration Survey of Blades in 19XB Axial-Flow Compressor. 2; Dynamic Investigation

    NASA Technical Reports Server (NTRS)

    Meyer, Andre J., Jr.; Calvert, Howard F.

    1947-01-01

    Strain-gage measurements were taken under operating conditions from blades of various stages of the 19XB axial-flow compressor in an effort to determine the reason for failures in the seventh and tenth stages. First bending-mode vibrations were detected in the first five stages of the compressor, excited by each integral multiple of rotor speed from three through ten. Lead-wire failures in the last five stages resulted in incomplete data. The dynamic-vibration frequencies at various rotor speeds were compared with statically measured frequencies analytically corrected for the influence of centrifugal force. Large increases in vibration amplitude with increased pressure ratio were observed. During surging operation, blade vibrations were not present. The effects of pressure ratio and surge indicate the existence of aerodynamic excitation as the cause of the blade vibrations.
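
    An integral multiple n of rotor speed excites the blades at n times the rotational frequency, so the reported orders three through ten map directly to excitation frequencies once a rotor speed is chosen. A minimal sketch (the 12,000 rpm figure is invented for illustration, not taken from the report):

        def engine_order_frequencies(rpm, orders=range(3, 11)):
            """Excitation frequency in Hz for each integral multiple of rotor speed."""
            return {n: n * rpm / 60.0 for n in orders}

        for n, f in engine_order_frequencies(12000).items():  # hypothetical rotor speed
            print(f"order {n}: {f:.0f} Hz")

    Resonance is expected where one of these lines crosses a blade natural frequency, which is why the report compares them against the centrifugally corrected static frequencies.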

  11. In vivo verification of radiation dose delivered to healthy tissue during radiotherapy for breast cancer

    NASA Astrophysics Data System (ADS)

    Lonski, P.; Taylor, M. L.; Hackworth, W.; Phipps, A.; Franich, R. D.; Kron, T.

    2014-03-01

    Different treatment planning system (TPS) algorithms calculate radiation dose in different ways. This work compares measurements made in vivo to the dose calculated at out-of-field locations using three different commercially available algorithms in the Eclipse treatment planning system. LiF:Mg,Cu,P thermoluminescent dosimeter (TLD) chips were placed with 1 cm build-up at six locations on the contralateral side of 5 patients undergoing radiotherapy for breast cancer. TLD readings were compared to calculations of Pencil Beam Convolution (PBC), the Anisotropic Analytical Algorithm (AAA), and Acuros XB (XB). AAA predicted zero dose at points beyond 16 cm from the field edge. In the same region, PBC returned an unrealistically constant result independent of distance, while XB showed good agreement with measured data, although it consistently underestimated dose by ~0.1% of the prescription dose. At points closer to the field edge, XB was the superior algorithm, exhibiting agreement with TLD results to within 15% of the measured dose. Both AAA and PBC showed mixed agreement, with overall discrepancies considerably greater than those of XB. While XB is certainly the preferable algorithm, it should be noted that TPS algorithms in general are not designed to calculate dose at peripheral locations, and calculation results in such regions should be treated with caution.

  12. 2,3,7,8-tetrachlorodibenzo-p-dioxin: examination of biochemical effects involved in the proliferation and differentiation of XB cells

    SciTech Connect

    Knutson, J.C.; Poland, A.

    1984-10-01

    XB, a cell line derived from a mouse teratoma, differentiates into stratified squamous epithelium when incubated with 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). To examine the mediators of this response, the effects produced by TCDD were compared with those elicited by other compounds which stimulate epidermal proliferation and/or differentiation in mice. XB/3T3 cultures keratinize when incubated with cholera toxin, epidermal growth factor (EGF), or TCDD, but not 12-O-tetradecanoylphorbol-13-acetate (TPA). Incubation of XB cells with TCDD for 48 hours produces an increase in thymidine incorporation, a response which is neither as large nor as rapid as that produced by cholera toxin, TPA, or EGF. Although both cholera toxin and TCDD stimulate differentiation and thymidine incorporation in XB/3T3 cultures, cholera toxin increases cAMP 30-fold in these cells, while TCDD does not affect cAMP accumulation. Inhibitors of arachidonic acid metabolism, which block epidermal proliferative responses to TPA in vivo, do not prevent the differentiation of XB cells in response to TCDD. In XB/3T3 cultures, TPA stimulates arachidonic acid release at all times tested (1, 6, and 24 hours) and increases the incorporation of 32Pi into total phospholipids and phosphatidylcholine after 3 hours. In contrast, TCDD affects neither arachidonic acid release nor the turnover of phosphatidylinositol or phosphatidylcholine at any of the times tested. Although biochemical effects which have been suggested as part of the mechanism of TCDD, and which are produced by other epidermal proliferative compounds, were examined in XB cells, no mediator of the TCDD-produced differentiation of XB/3T3 cultures was identified.

  13. The origin of the n-type behavior in rare earth borocarbide Y1-xB28.5C4.

    PubMed

    Mori, Takao; Nishimura, Toshiyuki; Schnelle, Walter; Burkhardt, Ulrich; Grin, Yuri

    2014-10-28

    Synthesis conditions, morphology, and thermoelectric properties of Y1-xB28.5C4 were investigated. Y1-xB28.5C4 is the compound with the lowest metal content in a series of homologous rare earth borocarbonitrides, which have been attracting interest as high temperature thermoelectric materials because they can embody the long-awaited counterpart to boron carbide, one of the few thermoelectric materials with a history of commercialization. It was revealed that the presence of boron carbide inclusions was the origin of the p-type behavior previously observed for Y1-xB28.5C4 in contrast to Y1-xB15.5CN and Y1-xB22C2N. In comparison with that of previous small flux-grown single crystals, a metal-poor composition of YB40C6 (Y0.71B28.5C4) in the synthesis successfully yielded sintered bulk Y1-xB28.5C4 samples apparently free of boron carbide inclusions. "Pure" Y1-xB28.5C4 was found to exhibit the same attractive n-type behavior as the other rare earth borocarbonitrides even though it is the most metal-poor compound among the series. Calculations of the electronic structure were carried out for Y1-xB28.5C4 as a representative of the series of homologous compounds and reveal a pseudo gap-like electronic density of states near the Fermi level mainly originating from the covalent borocarbonitride network.

  14. Induction of truncated form of tenascin-X (XB-S) through dissociation of HDAC1 from SP-1/HDAC1 complex in response to hypoxic conditions

    SciTech Connect

    Kato, Akari; Endo, Toshiya; Abiko, Shun; Ariga, Hiroyoshi; Matsumoto, Ken-ichi

    2008-08-15

    XB-S is an amino-terminally truncated protein of tenascin-X (TNX) in humans. The levels of the XB-S transcript, but not those of TNX transcripts, were increased upon hypoxia. We identified a critical hypoxia-responsive element (HRE) localized to a GT-rich element positioned from -1410 to -1368 in the XB-S promoter. Using an electrophoretic mobility shift assay (EMSA), we found that the HRE forms a DNA-protein complex with Sp1 and that the GG positioned at -1379 and -1378 is essential for the binding of the nuclear complex. Transfection experiments in SL2 cells, an Sp1-deficient model system, with an Sp1 expression vector demonstrated that the region from -1380 to -1371, an HRE, is sufficient for efficient activation of the XB-S promoter upon hypoxia. The EMSA and a chromatin immunoprecipitation (ChIP) assay showed that Sp1 together with the transcriptional repressor histone deacetylase 1 (HDAC1) binds to the HRE of the XB-S promoter under normoxia and that hypoxia causes dissociation of HDAC1 from the Sp1/HDAC1 complex. The HRE promoter activity was induced in the presence of a histone deacetylase inhibitor, trichostatin A, even under normoxia. Our results indicate that the hypoxia-induced activation of the XB-S promoter is regulated through dissociation of HDAC1 from an Sp1-binding HRE site.

  15. Comparative evaluation of modern dosimetry techniques near low- and high-density heterogeneities.

    PubMed

    Alhakeem, Eyad A; AlShaikh, Sami; Rosenfeld, Anatoly B; Zavgorodni, Sergei F

    2015-01-01

    The purpose of this study is to compare the performance of several dosimetric methods in heterogeneous phantoms irradiated by 6 and 18 MV beams. Monte Carlo (MC) calculations were used, along with two versions of Acuros XB, the anisotropic analytical algorithm (AAA), EBT2 film, and MOSkin dosimeters. Percent depth doses (PDD) were calculated and measured in three heterogeneous phantoms. The first two phantoms were a 30 × 30 × 30 cm3 solid-water slab that had an air gap of 20 × 2.5 × 2.35 cm3. The third phantom consisted of 30 × 30 × 5 cm3 solid water slabs, two 30 × 30 × 5 cm3 slabs of lung, and one 30 × 30 × 1 cm3 solid water slab. Acuros XB, AAA, and MC calculations were within 1% in the regions with particle equilibrium. At media interfaces and buildup regions, differences between Acuros XB and MC were in the range of +4.4% to -12.8%. MOSkin and EBT2 measurements agreed with MC calculations to within ~2.5%, except for the first centimeter of buildup, where differences of 4.5% were observed. AAA did not predict the backscatter dose from the high-density heterogeneity. For the third, multilayer lung phantom, 6 MV beam PDDs calculated by all TPS algorithms were within 2% of MC. 18 MV PDDs calculated by the two versions of Acuros XB and AAA differed from MC by up to 2.8%, 3.2%, and 6.8%, respectively. MOSkin and EBT2 each differed from MC by up to 2.9% and 2.5% for the 6 MV beam, and by -3.1% and ~2% for the 18 MV beam. All dosimetric techniques, except AAA, agreed within 3% in the regions with particle equilibrium. Differences between the dosimetric techniques were larger for the 18 MV than for the 6 MV beam. MOSkin and EBT2 measurements were in better agreement with MC than the Acuros XB calculations at the interfaces, and they were in better agreement with each other than with MC. The latter is due to their thinner detection layers compared to the MC voxel sizes. PMID:26699322
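
    Percent depth dose, the quantity compared throughout this study, is just the beam-axis dose normalized to its maximum. A minimal sketch of forming and differencing PDDs, with invented depth-dose curves and a hypothetical 3% local deviation standing in for an interface effect:

        import numpy as np

        def pdd(depth_dose):
            """Percent depth dose: axis dose normalized to its maximum, times 100."""
            return 100.0 * depth_dose / depth_dose.max()

        depths = np.arange(0.0, 20.0, 0.5)                        # cm
        mc = np.exp(-0.05 * depths) * (1 - np.exp(-2 * depths))   # toy MC axis dose
        tps = mc * (1.0 + 0.03 * np.exp(-((depths - 5.0) / 1.0) ** 2))  # toy TPS dose
        diff = pdd(tps) - pdd(mc)                                 # percentage points
        print(f"max |PDD difference|: {np.abs(diff).max():.2f} % points")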

  16. A Theoretical Investigation of the Dynamic Lateral Stability Characteristics of the MX-838 (XB-51) Airplane

    NASA Technical Reports Server (NTRS)

    Paulson, Jon W.

    1948-01-01

    At the request of the Air Materiel Command, U.S. Air Force, a theoretical study has been made of the dynamic lateral stability characteristics of the MX-838 (XB-51) airplane. The calculations included the determination of the neutral-oscillatory-stability boundary (R = 0), the period and time to damp to one-half amplitude of the lateral oscillation, and the time to damp to one-half amplitude for the spiral mode. Factors varied in the investigation were lift coefficient, wing incidence, wing loading, and altitude. The results of the investigation showed that the lateral oscillation of the airplane is unstable below a lift coefficient of 1.2 with flaps deflected 40 deg but is stable over the entire speed range with flaps deflected 20 deg or 0 deg. The results showed that satisfactory oscillatory stability can probably be obtained for all lift coefficients with the proper variation of flap deflection and wing incidence with airspeed. Reducing the positive wing incidence improved the oscillatory stability characteristics. The airplane is spirally unstable for most conditions, but the instability is mild and the Air Force requirements are easily met.

  17. Dip Spectroscopy of the Low Mass X-Ray Binary XB 1254-690

    NASA Technical Reports Server (NTRS)

    Smale, Alan P.; Church, M. J.; BalucinskaChurch, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We observed the low mass X-ray binary XB 1254-690 with the Rossi X-ray Timing Explorer in 2001 May and December. During the first observation, strong dipping on the 3.9-hr orbital period and a high degree of variability were observed, along with "shoulders" approx. 15% deep during extended intervals on each side of the main dips. The first observation also included pronounced flaring activity. The non-dip spectrum obtained using the PCA instrument was well described by a two-component model consisting of a blackbody with kT = 1.30 +/- 0.10 keV plus a cut-off power law representation of Comptonized emission with power law photon index 1.10 +/- 0.46 and a cut-off energy of 5.9 (+3.0/-1.4) keV. The intensity decrease in the shoulders of dipping is energy-independent, consistent with electron scattering in the outer ionized regions of the absorber. In deep dipping, the depth of dipping reached 100% in the energy band below 5 keV, indicating that all emitting regions were covered by absorber. Intensity-selected dip spectra were well fit by a model in which the point-like blackbody is rapidly covered, while the extended Comptonized emission is progressively overlapped by the absorber, with the covering fraction rising to 95% in the deepest portion of the dip. The intensity of this component in the dip spectra could be modeled by a combination of electron scattering and photoelectric absorption. Dipping did not occur during the 2001 December observation, but remarkably, both bursting and flaring were observed contemporaneously.
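
    The non-dip continuum quoted above is the sum of a blackbody and a cut-off power law. A minimal sketch of evaluating the two photon-spectrum shapes; only kT = 1.30 keV, the photon index 1.10, and the 5.9 keV cut-off come from the abstract, while the normalizations, relative weight, and energy grid are arbitrary:

        import numpy as np

        def cutoff_powerlaw(e_kev, gamma=1.10, e_cut=5.9):
            """Cut-off power law photon spectrum, arbitrary normalization."""
            return e_kev ** (-gamma) * np.exp(-e_kev / e_cut)

        def blackbody(e_kev, kt=1.30):
            """Blackbody photon spectrum shape, arbitrary normalization."""
            return e_kev ** 2 / np.expm1(e_kev / kt)

        e = np.geomspace(2.0, 20.0, 10)                  # keV, PCA-like band
        model = blackbody(e) + 0.5 * cutoff_powerlaw(e)  # arbitrary relative weight
        print(model / model.max())                       # relative spectral shape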

  18. Altitude-Wind-Tunnel investigation of Westinghouse 19B-2, 19B-8, and 19XB-1 Jet-Propulsion Engines IV : analysis of compressor performance

    NASA Technical Reports Server (NTRS)

    Dietz, Robert O; Kuenzig, John K

    1948-01-01

    Investigations were conducted in the NACA Cleveland altitude wind tunnel to determine the performance and operational characteristics of the 19B-2, 19B-8, and 19XB-1 turbojet engines. One objective of the investigations was to determine the effect of altitude, flight Mach number, and tail-pipe-nozzle area on the performance characteristics of the six-stage and ten-stage axial-flow compressors of the 19B-8 and 19XB-1 engines, respectively. The data were obtained over a range of simulated altitudes and flight Mach numbers. At each simulated flight condition the engine was run over its full operable range of speeds. Performance characteristics of the 19B-8 and 19XB-1 compressors for the range of operation obtainable in the turbojet-engine installation are presented. Compressor characteristics are presented as functions of air flow corrected to sea-level conditions, compressor Mach number, and compressor load coefficient.

  19. Direct X-B mode conversion for high-β national spherical torus experiment in nonlinear regime

    SciTech Connect

    Ali Asgarian, M.; Parvazian, A.; Abbasi, M.; Verboncoeur, J. P.

    2014-09-15

    Electron Bernstein waves (EBW) can be effective for heating and driving currents in spherical tokamak plasmas. Power can be coupled to EBW via mode conversion of the extraordinary (X) mode wave. The most common and successful approach to study the conditions for optimized mode conversion to EBW was evaluated analytically and numerically using a cold plasma model and an approximate kinetic model. The major drawback in using radio frequency waves was the lack of continuous wave sources at very high frequencies (above the electron plasma frequency), which has been addressed. A future milestone is to approach the high power regime, where the nonlinear effects become significant, exceeding the limits of validity for present linear theory. Therefore, one appropriate tool would be particle-in-cell (PIC) simulation. The PIC method retains most of the nonlinear physics without approximations. In this work, we study the stages of the direct X-B mode conversion process using the PIC method for incident wave frequency f0 = 15 GHz and maximum amplitude E0 = 10^5 V/m in the National Spherical Torus Experiment (NSTX). The modelling shows a considerable reduction in X-B mode conversion efficiency, Cmodelling = 0.43, due to the presence of nonlinearities. Comparison of system properties to the linear state reveals predominant nonlinear effects; the EBW wavelength and group velocity exhibit increments of around ~36% and 17%, respectively, in comparison with the linear regime.

  20. Hyperactivation of the Human Plasma Membrane Ca2+ Pump PMCA h4xb by Mutation of Glu99 to Lys*

    PubMed Central

    Mazzitelli, Luciana R.; Adamo, Hugo P.

    2014-01-01

    The transport of calcium to the extracellular space carried out by plasma membrane Ca2+ pumps (PMCAs) is essential for maintaining low Ca2+ concentrations in the cytosol of eukaryotic cells. The activity of PMCAs is controlled by autoinhibition. Autoinhibition is relieved by the binding of Ca2+-calmodulin to the calmodulin-binding autoinhibitory sequence, which in the human PMCA is located in the C-terminal segment and results in a PMCA of high maximal velocity of transport and high affinity for Ca2+. Autoinhibition involves the intramolecular interaction between the autoinhibitory domain and a not well defined region of the molecule near the catalytic site. Here we show that the fusion of GFP to the C terminus of the h4xb PMCA causes partial loss of autoinhibition by specifically increasing the Vmax. Mutation of residue Glu99 to Lys in the cytosolic portion of the M1 transmembrane helix at the other end of the molecule brought the Vmax of the h4xb PMCA to near that of the calmodulin-activated enzyme without increasing the apparent affinity for Ca2+. Altogether, the results suggest that the autoinhibitory interaction of the extreme C-terminal segment of the h4 PMCA is disturbed by changes of negatively charged residues of the N-terminal region. This would be consistent with a recently proposed model of an autoinhibited form of the plant ACA8 pump, although some differences are noted. PMID:24584935

  2. Small field segments surrounded by large areas only shielded by a multileaf collimator: Comparison of experiments and dose calculation

    SciTech Connect

    Kron, T.; Clivio, A.; Vanetti, E.; Nicolini, G.; Cramb, J.; Lonski, P.; Cozzi, L.; Fogliata, A.

    2012-12-15

    Purpose: Complex radiotherapy fields delivered using a tertiary multileaf collimator (MLC) often feature small open segments surrounded by large areas of the beam only shielded by the MLC. The aim of this study was to test the ability of two modern dose calculation algorithms to accurately calculate the dose in these fields, which would be common, for example, in volumetric modulated arc therapy (VMAT), and to study the impact of variations in dosimetric leaf gap (DLG), focal spot size, and MLC transmission in the beam models. Methods: Nine test fields with small fields (0.6-3 cm side length) surrounded by large MLC-shielded areas (secondary collimator 12 × 12 cm2) were created using a 6 MV beam from a Varian Clinac iX linear accelerator with a 120-leaf MLC. Measurements of output factors and profiles were performed using a diamond detector (PTW) and compared to two dose calculation algorithms [the anisotropic analytical algorithm (AAA) and Acuros XB] implemented in a commercial radiotherapy treatment planning system (Varian Eclipse 10). Results: Both calculation algorithms predicted output factors within 1% for field sizes larger than 1 × 1 cm2. For smaller fields, AAA tended to underestimate the dose. Profiles were predicted well for all fields except for problems of Acuros XB in modeling the secondary penumbra between MLC-shielded fields and the secondary collimator. A focal spot size of 1 mm or less, a DLG of 1.4 mm, and an MLC transmission of 1.4% provided a generally good model for our experimental setup. Conclusions: AAA and Acuros XB were found to predict the dose under small MLC-defined field segments well. While DLG and focal spot mostly affect the penumbra, the choice of correct MLC transmission will be essential to model treatments such as VMAT accurately.
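
    A crude way to see why the MLC transmission setting matters for such fields: dose under a large MLC-blocked area scales directly with the transmission fraction, while the DLG widens the effective opening of a small segment. A minimal sketch using the 1.4 mm DLG and 1.4% transmission quoted above (the function names and the open-field dose are invented for illustration):

        def shielded_dose(open_dose, mlc_transmission=0.014):
            """Dose under an MLC-blocked area, modeled as pure leaf transmission."""
            return open_dose * mlc_transmission

        def effective_gap(nominal_gap_mm, dlg_mm=1.4):
            """Effective leaf gap once the dosimetric leaf gap is added."""
            return nominal_gap_mm + dlg_mm

        print(f"{shielded_dose(2.0):.3f} Gy under the leaves for a 2 Gy open dose")
        print(f"{effective_gap(6.0):.1f} mm effective gap for a 6 mm nominal segment")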

  3. On unusual temperature dependence of the upper critical field in YNi2-xFexB2C

    NASA Astrophysics Data System (ADS)

    Kumary, T. Geetha; Kalavathi, S.; Valsakumar, M. C.; Hariharan, Y.; Radhakrishnan, T. S.

    1997-02-01

    Measurement of the upper critical field in YNi2-xFexB2C is reported for x = 0, 0.05, 0.10, and 0.15. An anomalous positive curvature is observed for a range of temperatures close to Tc, for all x. As x is increased, the temperature interval over which the curvature in Hc2(T) is positive is reduced, and the system shows a tendency to go over to the usual behaviour exhibited by conventional low temperature superconductors. Most of the theories based on a Fermi liquid normal state seem to be inadequate to understand this anomalous behaviour. It is speculated that this anomalous behaviour of Hc2(T) signifies the presence of strong correlations in pristine YNi2B2C and that strong correlation effects become less and less important upon substitution of Ni with Fe.

  4. Phosphatidylinositol 3-Kinase-Associated Protein (PI3KAP)/XB130 Crosslinks Actin Filaments through Its Actin Binding and Multimerization Properties In Vitro and Enhances Endocytosis in HEK293 Cells.

    PubMed

    Yamanaka, Daisuke; Akama, Takeshi; Chida, Kazuhiro; Minami, Shiro; Ito, Koichi; Hakuno, Fumihiko; Takahashi, Shin-Ichiro

    2016-01-01

    Actin-crosslinking proteins control actin filament networks and bundles and contribute to various cellular functions including regulation of cell migration, cell morphology, and endocytosis. Phosphatidylinositol 3-kinase-associated protein (PI3KAP)/XB130 has been reported to be localized to actin filaments (F-actin) and required for cell migration in thyroid carcinoma cells. Here, we show a role for PI3KAP/XB130 as an actin-crosslinking protein. First, we found that the carboxyl terminal region of PI3KAP/XB130 containing amino acid residues 830-840 was required and sufficient for localization to F-actin in NIH3T3 cells, and this region is directly bound to F-actin in vitro. Moreover, actin-crosslinking assay revealed that recombinant PI3KAP/XB130 crosslinked F-actin. In general, actin-crosslinking proteins often multimerize to assemble multiple actin-binding sites. We then investigated whether PI3KAP/XB130 could form a multimer. Blue native-PAGE analysis showed that recombinant PI3KAP/XB130 was detected at 250-1200 kDa although the molecular mass was approximately 125 kDa, suggesting that PI3KAP/XB130 formed multimers. Furthermore, we found that the amino terminal 40 amino acids were required for this multimerization by co-immunoprecipitation assay in HEK293T cells. Deletion mutants of PI3KAP/XB130 lacking the actin-binding region or the multimerizing region did not crosslink actin filaments, indicating that actin binding and multimerization of PI3KAP/XB130 were necessary to crosslink F-actin. Finally, we examined roles of PI3KAP/XB130 on endocytosis, an actin-related biological process. Overexpression of PI3KAP/XB130 enhanced dextran uptake in HEK 293 cells. However, most of the cells transfected with the deletion mutant lacking the actin-binding region incorporated dextran to a similar extent as control cells. Taken together, these results demonstrate that PI3KAP/XB130 crosslinks F-actin through both its actin-binding region and multimerizing region and plays

  5. Genetic diversity of VAR2CSA ID1-DBL2Xb in worldwide Plasmodium falciparum populations: impact on vaccine design for placental malaria.

    PubMed

    Bordbar, Bita; Tuikue Ndam, Nicaise; Renard, Emmanuelle; Jafari-Guemouri, Sayeh; Tavul, Livingstone; Jennison, Charlie; Gnidehou, Sédami; Tahar, Rachida; Gamboa, Dionicia; Bendezu, Jorge; Menard, Didier; Barry, Alyssa E; Deloron, Philippe; Sabbagh, Audrey

    2014-07-01

    In placental malaria (PM), sequestration of infected erythrocytes in the placenta is mediated by an interaction between VAR2CSA, a Plasmodium falciparum protein expressed on erythrocytes, and chondroitin sulfate A (CSA) on syncytiotrophoblasts. Recent works have identified ID1-DBL2Xb as the minimal CSA-binding region within VAR2CSA able to induce strong protective immunity, making it the leading candidate for the development of a vaccine against PM. Assessing the existence of population differences in the distribution of ID1-DBL2Xb polymorphisms is of paramount importance to determine whether geographic diversity must be considered when designing a candidate vaccine based on this fragment. In this study, we examined patterns of sequence variation of ID1-DBL2Xb in a large collection of P. falciparum field isolates (n=247) from different malaria-endemic areas, including Africa (Benin, Senegal, Cameroon and Madagascar), Asia (Cambodia), Oceania (Papua New Guinea), and Latin America (Peru). Detection of variants and estimation of their allele frequencies were performed using next-generation sequencing of DNA pools. A considerable amount of variation was detected along the whole gene segment, suggesting that several allelic variants may need to be included in a candidate vaccine to achieve broad population coverage. However, most sequence variants were common and extensively shared among worldwide parasite populations, demonstrating long term persistence of those polymorphisms, probably maintained through balancing selection. Therefore, a vaccine mixture including such stable antigen variants will be putatively applicable and efficacious in all world regions where malaria occurs. Despite similarity in ID1-DBL2Xb allele repertoire across geographic areas, several peaks of strong population differentiation were observed at specific polymorphic loci, pointing out putative targets of humoral immunity subject to positive immune selection.

  6. Altitude-Wind-Tunnel Investigation of the 19B-2, 19B-8 and 19XB-1 Jet- Propulsion Engines. 4; Analysis of Compressor Performance

    NASA Technical Reports Server (NTRS)

    Dietz, Robert O.; Kuenzig, John K.

    1947-01-01

    Investigations were conducted in the Cleveland altitude wind tunnel to determine the performance and operational characteristics of the 19B-2, 19B-8, and 19XB-1 turbojet engines. One objective was to determine the effect of altitude, flight Mach number, and tail-pipe-nozzle area on the performance characteristics of the six-stage and ten-stage axial-flow compressors of the 19B-8 and 19XB-1 engines, respectively. The data were obtained over a range of simulated altitudes and flight Mach numbers. At each simulated flight condition the engine was run over its full operable range of speeds. Performance characteristics of the 19B-8 and 19XB-1 compressors for the range of operation obtainable in the turbojet-engine installation are presented. Compressor characteristics are presented as functions of air flow corrected to sea-level conditions, compressor Mach number, and compressor load coefficient. For the range of compressor operation investigated, changes in Reynolds number had no measurable effect on the relations among compressor Mach number, corrected air flow, compressor load coefficient, compressor pressure ratio, and compressor efficiency. The operating lines for the 19B-8 compressor lay on the low-air-flow side of the region of maximum compressor efficiency; the 19B-8 compressor operated at higher average pressure coefficients per stage and produced a lower over-all pressure ratio than did the 19XB-1 compressor.
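
    The "corrected" quantities used throughout such compressor reports are the standard turbomachinery similarity parameters. A minimal sketch of how they are formed is given below, assuming sea-level standard reference conditions; the report's exact reference values are not quoted here.

        # Standard corrected-flow and corrected-speed parameters (sketch)
        T_REF = 518.7    # assumed sea-level standard temperature, degrees Rankine
        P_REF = 29.92    # assumed sea-level standard pressure, inHg absolute

        def corrected_air_flow(m_dot, t_inlet, p_inlet):
            """Air flow corrected to sea-level conditions: m*sqrt(theta)/delta."""
            theta = t_inlet / T_REF
            delta = p_inlet / P_REF
            return m_dot * theta**0.5 / delta

        def corrected_speed(rpm, t_inlet):
            """Wheel speed corrected for inlet temperature: N/sqrt(theta)."""
            return rpm / (t_inlet / T_REF) ** 0.5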

  7. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  8. Search for the Xb and other hidden-beauty states in the π+π- ϒ(1S) channel at ATLAS

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Almond, J.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baas, A. E.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Backus Mayes, J.; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Bansal, V.; Bansil, H. S.; Barak, L.; Baranov, S. P.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batley, J. R.; Battaglia, M.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernat, P.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, C.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Black, C. W.; Black, J. 
E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boddy, C. R.; Boehler, M.; Boek, T. T.; Bogaerts, J. A.; Bogdanchikov, A. G.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozic, I.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brennan, A. J.; Brenner, R.; Bressler, S.; Bristow, K.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, J.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.; Bucci, F.; Buchholz, P.; Buckingham, R. M.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bundock, A. C.; Burckhart, H.; Burdin, S.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Buszello, C. P.; Butler, B.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Byszewski, M.; Cabrera Urbán, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camarda, S.; Cameron, D.; Caminada, L. M.; Caminal Armadans, R.; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Cano Bret, M.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Cattani, G.; Caudron, J.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerio, B. C.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chang, P.; Chapleau, B.; Chapman, J. D.; Charfeddine, D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, X.; Chen, Y.; Chen, Y.; Cheng, H. C.; Cheng, Y.; Cheplakov, A.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiefari, G.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Chouridou, S.; Chow, B. K. B.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciocio, A.; Cirkovic, P.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, P. J.; Clarke, R. N.; Cleland, W.; Clemens, J. 
C.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Cogan, J. G.; Coggeshall, J.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Colon, G.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Conidi, M. C.; Connell, S. H.; Connelly, I. A.; Consonni, S. M.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Crispin Ortuzar, M.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cuciuc, C.-M.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Daniells, A. C.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davignon, O.; Davison, A. R.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Nooij, L.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dechenaux, B.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Do Valle Wemans, A.; Dobos, D.; Doglioni, C.; Doherty, T.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Dris, M.; Dubbert, J.; Dube, S.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudziak, F.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Dwuznik, M.; Dyndal, M.; Ebke, J.; Edson, W.; Edwards, N. C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Engelmann, R.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernis, G.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. 
M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Favareto, A.; Fayard, L.; Federic, P.; Fedin, O. L.; Fedorko, W.; Fehling-Kaschek, M.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Fernandez Perez, S.; Ferrag, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, J.; Fisher, W. C.; Fitzgerald, E. A.; Flechl, M.; Fleck, I.; Fleischmann, P.; Fleischmann, S.; Fletcher, G. T.; Fletcher, G.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Florez Bustos, A. C.; Flowerdew, M. J.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franconi, L.; Franklin, M.; Franz, S.; Fraternali, M.; French, S. T.; Friedrich, C.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fulsom, B. G.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallo, V.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y. S.; Garay Walls, F. M.; Garberson, F.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geerts, D. A. A.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Gemmell, A.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gianotti, F.; Gibbard, B.; Gibson, S. M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giugni, D.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Glonti, G. L.; Goblirsch-Kolb, M.; Goddard, J. R.; Godlewski, J.; Goeringer, C.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, L.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Gozpinar, S.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Gray, H. M.; Graziani, E.; Grebenyuk, O. G.; Greenwood, Z. D.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grishkevich, Y. V.; Grivaz, J.-F.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Groth-Jensen, J.; Grout, Z. J.; Guan, L.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guicheney, C.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Gupta, S.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guttman, N.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. 
K.; Haddad, N.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Hall, D.; Halladjian, G.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamer, M.; Hamilton, A.; Hamilton, S.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harper, D.; Harrington, R. D.; Harris, O. M.; Harrison, P. F.; Hartjes, F.; Hasegawa, M.; Hasegawa, S.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, L.; Hejbal, J.; Helary, L.; Heller, C.; Heller, M.; Hellman, S.; Hellmich, D.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Hengler, C.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Hernández Jiménez, Y.; Herrberg-Schubert, R.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillert, S.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holmes, T. R.; Hong, T. M.; Hooft van Huysduynen, L.; Hopkins, W. H.; Horii, Y.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Hurwitz, M.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Inamaru, Y.; Ince, T.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Irles Quiles, A.; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Iturbe Ponce, J. M.; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansen, H.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Johansson, K. E.; Johansson, P.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Jung, C. A.; Jungst, R. M.; Jussel, P.; Juste Rozas, A.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kajomovitz, E.; Kalderon, C. W.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneda, M.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kar, D.; Karakostas, K.; Karastathis, N.; Kareem, M. J.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasieczka, G.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Katre, A.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Kazarinov, M. 
Y.; Keeler, R.; Kehoe, R.; Keil, M.; Keller, J. S.; Kempster, J. J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Keung, J.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Khodinov, A.; Khomich, A.; Khoo, T. J.; Khoriauli, G.; Khoroshilov, A.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H. Y.; Kim, H.; Kim, S. H.; Kimura, N.; Kind, O.; King, B. T.; King, M.; King, R. S. B.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kittelmann, T.; Kiuchi, K.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Klok, P. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Koletsou, I.; Koll, J.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; König, S.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kortner, S.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kreiss, S.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Kruker, T.; Krumnack, N.; Krumshteyn, Z. V.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kurumida, R.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; La Rosa, A.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laier, H.; Lambourne, L.; Lammers, S.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmacher, M.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Leroy, C.; Lester, C. G.; Lester, C. M.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, A.; Lewis, G. H.; Leyko, A. M.; Leyton, M.; Li, B.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, S.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Livermore, S. S. 
A.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Lombardo, V. P.; Long, B. A.; Long, J. D.; Long, R. E.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lopez Paz, I.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Loscutoff, P.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lungwitz, M.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Machado Miguens, J.; Macina, D.; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeno, M.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Mahmoud, S.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mandelli, B.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manfredini, A.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J. A.; Mann, A.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mapelli, L.; March, L.; Marchand, J. F.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marjanovic, M.; Marques, C. N.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, B.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, H.; Martinez, M.; Martin-Haugh, S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Massol, N.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazzaferro, L.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Mechnich, J.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B. R.; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Meric, N.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Merritt, H.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Middleton, R. P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Milstein, D.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Mitsui, S.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, M.; Morii, M.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moser, H. G.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, K.; Mueller, T.; Mueller, T.; Muenstermann, D.; Munwes, Y.; Murillo Quijada, J. A.; Murray, W. 
J.; Musheghyan, H.; Musto, E.; Myagkov, A. G.; Myska, M.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagel, M.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Nanava, G.; Narayan, R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negri, G.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolics, K.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Norberg, S.; Nordberg, M.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; Nuti, F.; O'Brien, B. J.; O'grady, F.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, M. I.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olchevski, A. G.; Olivares Pino, S. A.; Oliveira Damazio, D.; Oliver Garcia, E.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Oropeza Barrera, C.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ouellette, E. A.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagáčová, M.; Pagan Griso, S.; Paganis, E.; Pahl, C.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Panduro Vazquez, J. G.; Pani, P.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, M. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passaggio, S.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pearce, J.; Pedersen, L. E.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrella, S.; Perrino, R.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Pettersson, N. E.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinder, A.; Pinfold, J. L.; Pingel, A.; Pinto, B.; Pires, S.; Pitt, M.; Pizio, C.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Poddar, S.; Podlyski, F.; Poettgen, R.; Poggioli, L.; Pohl, D.; Pohl, M.; Polesello, G.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Portell Bueso, X.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. 
T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pralavorio, P.; Pranko, A.; Prasad, S.; Pravahan, R.; Prell, S.; Price, D.; Price, J.; Price, L. E.; Prieur, D.; Primavera, M.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Przysiezniak, H.; Ptacek, E.; Puddu, D.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Qureshi, A.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Randle-Conde, A. S.; Rangel-Smith, C.; Rao, K.; Rauscher, F.; Rave, T. C.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reisin, H.; Relich, M.; Rembser, C.; Ren, H.; Ren, Z. L.; Renaud, A.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Ridel, M.; Rieck, P.; Rieger, J.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodrigues, L.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, M.; Rose, P.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sacerdoti, S.; Saddique, A.; Sadeh, I.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Saleem, M.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, T.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sartisohn, G.; Sasaki, O.; Sasaki, Y.; Sauvage, G.; Sauvan, E.; Savard, P.; Savu, D. O.; Sawyer, C.; Sawyer, L.; Saxon, D. H.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. I.; Schiavi, C.; Schieck, J.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schroeder, C.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. A.; Schwegler, Ph.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Schwoerer, M.; Sciacca, F. G.; Scifo, E.; Sciolla, G.; Scott, W. 
G.; Scuri, F.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekula, S. J.; Selbach, K. E.; Seliverstov, D. M.; Sellers, G.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shochet, M. J.; Short, D.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simoniello, R.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sircar, A.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skottowe, H. P.; Skovpen, K. Yu.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Solans, C. A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Song, H. Y.; Soni, N.; Sood, A.; Sopczak, A.; Sopko, B.; Sopko, V.; Sorin, V.; Sosebee, M.; Soualah, R.; Soueid, P.; Soukharev, A. M.; South, D.; Spagnolo, S.; Spanò, F.; Spearman, W. R.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R. D.; Staerz, S.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Stavina, P.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stern, S.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, Y.; Svatos, M.; Swedish, S.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanasijczuk, A. J.; Tannenwald, B. B.; Tannoury, N.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teischinger, F. A.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, R. J.; Thompson, A. 
S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thong, W. M.; Thun, R. P.; Tian, F.; Tibbetts, M. J.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Topilin, N. D.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Tran, H. L.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; True, P.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Uhlenbrock, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urbaniec, D.; Urquijo, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; Van Der Leeuw, R.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloso, F.; Velz, T.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Virzi, J.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, A.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Walsh, B.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Warsinsky, M.; Washbrook, A.; Wasicki, C.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weigell, P.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wendland, D.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wijeratne, P. 
A.; Wildauer, A.; Wildt, M. A.; Wilkens, H. G.; Will, J. Z.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, A.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winter, B. T.; Wittgen, M.; Wittig, T.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wright, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wulf, E.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xiao, M.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamada, M.; Yamaguchi, H.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, U. K.; Yang, Y.; Yanush, S.; Yao, L.; Yao, W.-M.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yurkewicz, A.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zevi della Porta, G.; Zhang, D.; Zhang, F.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, X.; Zhang, Z.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, L.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Zinonos, Z.; Ziolkowski, M.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zutshi, V.; Zwalinski, L.

    2015-01-01

    This Letter presents a search for a hidden-beauty counterpart of the X(3872) in the mass ranges of 10.05-10.31 GeV and 10.40-11.00 GeV, in the channel Xb → π+π- ϒ(1S) (→ μ+μ-), using 16.2 fb^-1 of √s = 8 TeV pp collision data collected by the ATLAS detector at the LHC. No evidence for new narrow states is found, and upper limits are set on the product of the Xb cross section and branching fraction, relative to those of the ϒ(2S), at the 95% confidence level using the CLs approach. These limits range from 0.8% to 4.0%, depending on mass. For masses above 10.1 GeV, the expected upper limits from this analysis are the most restrictive to date. Searches for production of the ϒ(1^3D_J), ϒ(10860), and ϒ(11020) states also reveal no significant signals.

  9. Excitation of ion Bernstein waves as the dominant parametric decay channel in direct X-B mode conversion for typical spherical torus

    NASA Astrophysics Data System (ADS)

    Abbasi, Mustafa; Sadeghi, Yahya; Sobhanian, Samad; Asgarian, Mohammad Ali

    2016-03-01

    The electron Bernstein wave (EBW) is typically the only wave in the electron cyclotron (EC) range that can be applied in spherical tokamaks for heating and current drive (H&CD). Spherical tokamaks (STs) generally operate in high-β regimes, in which the usual EC ordinary (O) and extraordinary (X) modes are cut off. Since the existence of EBWs in the nonlinear regime was recently established, the next step is to study the nonlinear phenomena predicted to occur at high levels of injected power. In this regard, parametric instabilities are considered the major loss channels in direct X-B mode conversion, so their effects in the upper hybrid resonance (UHR) region, where they can reduce the X-B conversion efficiency, must be taken into account. In EBW heating (EBH) at high power density, nonlinear effects can arise: particularly at the UHR position, the group velocity is strongly reduced, which creates a high energy density and consequently a high-amplitude electric field. A part of the input wave can therefore decay into daughter waves via parametric instability (PI). In the present work, the excitation of ion Bernstein waves as the dominant decay channel is investigated, and an estimate of the threshold power in terms of experimental parameters for the fundamental mode of the instability is proposed.

  10. Characterization of a Severe Parenchymal Phenotype of Experimental Autoimmune Encephalomyelitis in (C57BL6xB10.PL)F1 Mice

    PubMed Central

    Carrithers, Michael D.; Carrithers, Lisette M.; Czyzyk, Jan; Henegariu, Octavian

    2009-01-01

    We here describe a novel CD4 T cell adoptive transfer model of severe experimental autoimmune encephalomyelitis in (C57BL6xB10.PL)F1 mice. This F1 cross developed severe disease characterized by extensive parenchymal spinal cord and brain periventricular white matter infiltrates. In contrast, B10.PL mice developed mild disease characterized by meningeal-predominant infiltrates. As determined by cDNA microarray and quantitative real-time PCR expression analysis, histologic and flow cytometry analysis of inflammatory infiltrates, and attenuation of disease in class I-deficient and CD8-depleted F1 mice, this severe disease phenotype appears to be regulated by CNS infiltration of CD8 T lymphocytes early in the disease course. PMID:17512611

  11. Tunable deep ultraviolet single-longitudinal-mode laser generated with Ba(1-x)B(2-y-z)O4Si(x)Al(y)Ga(z) crystal.

    PubMed

    Wang, Rui; Teng, Hao; Wang, Nan; Han, Hainian; Wang, Zhaohua; Wei, Zhiyi; Hong, Maochun; Lin, Wenxiong

    2014-04-01

    We report a new nonlinear crystal, Ba(1-x)B(2-y-z)O4Si(x)Al(y)Ga(z), and employ it in a compact 1 kHz single-longitudinal-mode Ti:Sapphire master oscillator power amplifier system for fourth-harmonic generation. A maximum output power of 130 mW is obtained over the tunable range of 195-205 nm with a linewidth of less than 0.1 pm.

  12. Preparation and properties of a new ternary phase Mg3+xNi7-xB2 (0.17≤x≤0.66) and its Cu-doping effect

    NASA Astrophysics Data System (ADS)

    Liao, Chang-Zhong; Dong, Cheng; Shih, Kaimin; Zeng, Lingmin; He, Bing; Cao, Wenhuan; Yang, Lihong

    2015-03-01

    In recent years, the materials in the B-Mg-Ni system have been intensively studied due to their excellent properties of hydrogen storage and superconductivity. Solving the crystal structure of phases in this system will facilitate an understanding of the mechanism of their physical properties. In this study, we report the preparation, crystal structure and physical properties of a new ternary phase Mg3+xNi7-xB2 in the B-Mg-Ni system. The Mg3+xNi7-xB2 phase was prepared by solid-state reactions at 1073 K and its crystal structure was determined and refined using X-ray powder diffraction data. The Mg3+xNi7-xB2 phase crystallizes in the Ca3Ni7B2 structure type (space group R-3m, no. 166) with a=4.9496(3)-5.0105(6) Å, c=20.480(1)-20.581(1) Å depending on the x value, where x varies from 0.17 to 0.66. Two samples with nominal compositions Mg10Ni20B6 and Mg12Ni18B6 were characterized by magnetization and electric resistivity measurements in the temperature range from 5 K to room temperature. Both samples exhibited metallic behavior and showed spin-glass-like behavior with a spin freezing temperature (Tf) around 33 K. A study of the Cu-doping effect showed that limited Cu content can be doped into the Mg3+xNi7-xB2 compound and Tf decreases as the Cu content increases.

  13. Competing anisotropies on 3d sub-lattice of YNi{sub 4–x}Co{sub x}B compounds

    SciTech Connect

    Caraballo Vivas, R. J.; Rocco, D. L.; Reis, M. S.; Caldeira, L.; Coelho, A. A.

    2014-08-14

    The magnetic anisotropy of 3d sub-lattices has an important role in the overall magnetic properties of hard magnets. Intermetallic alloys with boron (R-Co/Ni-B, for instance) belong to this family of hard magnets and are useful for understanding the magnetic behavior of the 3d sub-lattice, especially when the rare earth ions R are nonmagnetic, as in the ferromagnetic material YCo{sub 4}B. Interestingly, YNi{sub 4}B is a paramagnetic material, and Ni ions do not contribute to the magnetic anisotropy. We therefore focused our attention on the YNi{sub 4–x}Co{sub x}B series, with x = 0, 1, 2, 3, and 4. The magnetic anisotropy of these compounds is described in detail using statistical and preferential models of Co occupation among the possible Wyckoff positions in the CeCo{sub 4}B-type hexagonal structure. We found that the preferential model is the most suitable to explain the experimental magnetization data.

  14. Equivalent Longitudinal Area Distributions of the B-58 and XB-70-1 Airplanes for Use in Wave Drag and Sonic Boom Calculations

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Maglieri, Domenic J.; Driver, Cornelius; Bobbitt, Percy J.

    2011-01-01

    A detailed geometric description, in wave drag format, has been developed for the Convair B-58 and North American XB-70-1 delta wing airplanes. These descriptions have been placed on electronic files, the contents of which are described in this paper. They are intended for use in wave drag and sonic boom calculations. Included in the electronic file and in the present paper are photographs and 3-view drawings of the two airplanes, tabulated geometric descriptions of each vehicle and its components, and comparisons of the electronic file outputs with existing data. The comparisons include a pictorial of the two airplanes based on the present geometric descriptions, and cross-sectional area distributions for both the normal Mach cuts and oblique Mach cuts above and below the vehicles. Good correlation exists between the area distributions generated in the late 1950s and 1960s and the present files. The availability of these electronic files facilitates further validation of sonic boom prediction codes through the use of two existing data bases on these airplanes, which were acquired in the 1960s and have not been fully exploited.

  15. Quantum mechanically guided design of Co43Fe20Ta(5.5)X(31.5) (X=B, Si, P, S) metallic glasses.

    PubMed

    Hostert, C; Music, D; Bednarcik, J; Keckes, J; Schneider, J M

    2012-05-01

    A systematic ab initio molecular dynamics study was carried out to identify valence-electron-concentration- and size-induced changes in the structure and the elastic and magnetic properties of Co(43)Fe(20)Ta(5.5)X(31.5) (X=B, Si, P, S). Short-range order, charge transfer, and the bonding nature are analyzed by means of density of states, Bader decomposition, and pair distribution function analysis. A clear trend of decreasing density and bulk modulus, as well as weaker cohesion, was observed as the valence electron concentration is increased by replacing B with Si and further with P and S. These changes may be understood in terms of increased interatomic distances, variations in coordination numbers, and changes in the electronic structure: as the valence electron concentration of X is increased, the X bonding becomes more ionic, which disrupts the overall metallic interactions, leading to lower cohesion and stiffness. The highest magnetic moments for the transition metals are identified for X=S, despite the fact that the presence of X generally reduces the magnetic moment of Co. Furthermore, this study reveals an extended diagonal relationship between B and P within these amorphous alloys. Based on quantum mechanical data, we identify composition-induced changes in short-range order, charge transfer, and bonding nature and link them to density, elasticity, and magnetism. The interplay between transition metal d-band filling and s-d hybridization was identified as a key materials design criterion.

  16. A 2.15 hr ORBITAL PERIOD FOR THE LOW-MASS X-RAY BINARY XB 1832-330 IN THE GLOBULAR CLUSTER NGC 6652

    SciTech Connect

    Engel, M. C.; Heinke, C. O.; Sivakoff, G. R.; Elshamouty, K. G.; Edmonds, P. D. E-mail: heinke@ualberta.ca

    2012-03-10

    We present a candidate orbital period for the low-mass X-ray binary (LMXB) XB 1832-330 in the globular cluster NGC 6652 using a 6.5 hr Gemini South observation of the optical counterpart of the system. Light curves in g' and r' for two LMXBs in the cluster, sources A and B in previous literature, were extracted and analyzed for periodicity using the ISIS image subtraction package. A clear sinusoidal modulation is evident in both of A's curves, of amplitude {approx}0.11 mag in g' and {approx}0.065 mag in r', while B's curves exhibit rapid flickering, of amplitude {approx}1 mag in g' and {approx}0.5 mag in r'. A Lomb-Scargle test revealed a 2.15 hr periodic variation in the magnitude of A with a false alarm probability less than 10{sup -11}, and no significant periodicity in the light curve for B. Though it is possible that saturated stars in the vicinity of our sources partially contaminated our signal, the identification of A's binary period is nonetheless robust.
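
    As a concrete illustration of the period search described above, the sketch below recovers a 2.15 hr modulation from a synthetic, unevenly sampled light curve using scipy.signal.lombscargle. All numbers (sampling window, amplitude, noise level) are hypothetical stand-ins, not the study's data.

        import numpy as np
        from scipy.signal import lombscargle

        # Synthetic unevenly sampled light curve: a 2.15 hr sinusoid plus noise
        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0.0, 6.5, 200))   # observation times (hr)
        mag = 0.055 * np.sin(2 * np.pi * t / 2.15) + rng.normal(0, 0.02, t.size)

        periods = np.linspace(0.5, 6.0, 4000)     # trial periods (hr)
        ang_freqs = 2 * np.pi / periods           # lombscargle expects rad per hr
        power = lombscargle(t, mag - mag.mean(), ang_freqs)

        print(f"best period: {periods[power.argmax()]:.2f} hr")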

  17. Performance of 19XB-2A Gas Turbine. 1: Effect of Pressure Ratio and Inlet Pressure on Turbine Performance for an Inlet Temperature of 800 degree R

    NASA Technical Reports Server (NTRS)

    Kohl, Robert C.; Larkin, Robert G.

    1946-01-01

    An investigation of the 19XB-2A gas turbine is being conducted at the Cleveland laboratory to determine the effect on turbine performance of various inlet pressures, inlet temperatures, pressure ratios, and wheel speeds. The engine of which this turbine is a component is designed to operate at an air flow of 30 pounds per second at a compressor rotor speed of 17,000 rpm at sea-level conditions. At these conditions the total-pressure ratio is 2.08 across the turbine and the turbine inlet total temperature is 2000 degrees R. Runs have been made with turbine inlet total pressures of 20, 30, 40, and 45 inches of mercury absolute for a constant total pressure ratio across the turbine of 2.40, the maximum value that could be obtained. Additional runs have been made with total pressure ratios of 1.50 and 2.00 at an inlet total pressure of 45 inches of mercury absolute. All runs were made with an inlet total temperature of 800 degrees R over a range of corrected turbine wheel speeds from 40 to 150 percent of the corrected speed at the design point. The turbine efficiencies at these conditions are presented.

  18. SWIFT REVEALS A ∼5.7 DAY SUPER-ORBITAL PERIOD IN THE M31 GLOBULAR CLUSTER X-RAY BINARY XB158

    SciTech Connect

    Barnard, R.; Garcia, M. R.; Murray, S. S.

    2015-03-01

    The M31 globular cluster X-ray binary XB158 (a.k.a. Bo 158) exhibits intensity dips on a 2.78 hr period in some observations, but not others. The short period suggests a low mass ratio, and an asymmetric, precessing disk due to additional tidal torques from the donor star since the disk crosses the 3:1 resonance. Previous theoretical three-dimensional smoothed particle hydrodynamical modeling suggested a super-orbital disk precession period 29 ± 1 times the orbital period, i.e., ∼81 ± 3 hr. We conducted a Swift monitoring campaign of 30 observations over ∼1 month in order to search for evidence of such a super-orbital period. Fitting the 0.3-10 keV Swift X-Ray Telescope luminosity light curve with a sinusoid yielded a period of 5.65 ± 0.05 days, and a >5σ improvement in χ{sup 2} over the best fit constant intensity model. A Lomb-Scargle periodogram revealed that periods of 5.4-5.8 days were detected at a >3σ level, with a peak at 5.6 days. We consider this strong evidence for a 5.65 day super-orbital period, ∼70% longer than the predicted period. The 0.3-10 keV luminosity varied by a factor of ∼5, consistent with variations seen in long-term monitoring from Chandra. We conclude that other X-ray binaries exhibiting similar long-term behavior are likely to also be X-ray binaries with low mass ratios and super-orbital periods.
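
    The model comparison reported here (a sinusoid versus a constant-intensity model, judged by the χ² improvement) can be sketched in a few lines. The example below fits a sinusoid to a synthetic 30-point light curve and reports the fitted period and Δχ²; the data values and errors are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic 30-point monitoring campaign with a 5.65 day modulation
        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 30.0, 30))   # observation times (days)
        err = np.full_like(t, 0.4)                # per-point luminosity errors
        lum = 2.0 + 1.0 * np.sin(2 * np.pi * t / 5.65) + rng.normal(0, 0.4, t.size)

        def sinusoid(t, mean, amp, period, phase):
            return mean + amp * np.sin(2 * np.pi * t / period + phase)

        popt, _ = curve_fit(sinusoid, t, lum, sigma=err, p0=[2.0, 1.0, 5.5, 0.0])
        chi2_sin = np.sum(((lum - sinusoid(t, *popt)) / err) ** 2)
        mean_w = np.average(lum, weights=1.0 / err**2)   # best constant model
        chi2_const = np.sum(((lum - mean_w) / err) ** 2)
        print(f"period {popt[2]:.2f} d, delta chi2 = {chi2_const - chi2_sin:.1f}")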

  19. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool intended to enable the widespread use of genetic algorithm technology.
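
    To make the concepts above concrete, here is a minimal generational genetic algorithm (tournament selection, one-point crossover, bit-flip mutation) applied to a toy "count the ones" objective. This is a generic sketch, not the software tool described in the record.

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                              crossover_rate=0.9, mutation_rate=0.01):
            # Random initial population of bit strings
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            best = max(pop, key=fitness)
            for _ in range(generations):
                def tournament():
                    # Survival of the fittest among 3 random individuals
                    return max(random.sample(pop, 3), key=fitness)
                children = []
                while len(children) < pop_size:
                    a, b = tournament(), tournament()
                    if random.random() < crossover_rate:
                        cut = random.randrange(1, n_bits)   # one-point crossover
                        a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
                    # Bit-flip mutation applied independently to each child
                    children += [[bit ^ (random.random() < mutation_rate)
                                  for bit in child] for child in (a, b)]
                pop = children[:pop_size]
                best = max(pop + [best], key=fitness)   # elitist bookkeeping
            return best

        best = genetic_algorithm(fitness=sum)   # maximize the number of ones
        print(sum(best), best)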

  20. The electronic structure, mechanical and thermodynamic properties of Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides

    SciTech Connect

    He, TianWei; Jiang, YeHua E-mail: jfeng@seas.harvard.edu; Zhou, Rong; Feng, Jing E-mail: jfeng@seas.harvard.edu

    2015-08-21

    The mechanical properties, electronic structure and thermodynamic properties of the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides were calculated by first-principles methods. The elastic constants show that these ternary borides are mechanically stable. The formation enthalpies of the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides are in the range of −118.09 kJ/mol to −40.14 kJ/mol. The electronic structures and chemical bonding characteristics are analyzed by the density of states. Mo{sub 2}FeB{sub 2} has the largest shear and Young's moduli because of its strong chemical bonding; the values are 204.3 GPa and 500.3 GPa, respectively. MoCo{sub 2}B{sub 4} shows the lowest degree of anisotropy due to the lack of strong directionality in the bonding. The Debye temperature of MoFe{sub 2}B{sub 4} is the largest among the six phases, which suggests that MoFe{sub 2}B{sub 4} possesses the best thermal conductivity. Enthalpy is an approximately linear function of temperature above 300 K. The entropy of these compounds increases rapidly when the temperature is below 450 K. The Gibbs free energy decreases with increasing temperature. MoCo{sub 2}B{sub 4} has the lowest Gibbs free energy, which indicates the strongest formation ability among the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides.

  1. Sci—Thur AM: YIS - 05: 10X-FFF VMAT for Lung SABR: an Investigation of Peripheral Dose

    SciTech Connect

    Mader, J; Mestrovic, A

    2014-08-15

    Flattening Filter Free (FFF) beams exhibit high dose rates and reduced head scatter, leaf transmission, and leakage radiation. For VMAT lung SABR, treatment time can be significantly reduced using high dose rate FFF beams while maintaining plan quality and accuracy. Another possible advantage offered by FFF beams for VMAT lung SABR is a reduction in peripheral dose. The focus of this study was to investigate and quantify the reduction in peripheral dose offered by FFF beams for VMAT lung SABR. The peripheral doses delivered by VMAT lung SABR treatments using FFF and flattened beams were investigated for the Varian Truebeam linac. This study was conducted in three stages: (1) ion chamber measurement of peripheral dose for various plans; (2) validation of AAA, Acuros XB and Monte Carlo for peripheral dose using measured data; and (3) use of the validated Monte Carlo model to evaluate peripheral doses for 6 VMAT lung SABR treatments. Three energies, 6X, 10X, and 10X-FFF, were used for all stages. Measured data indicate that 10X-FFF delivers the lowest peripheral dose of the three energies studied. The AAA and Acuros XB dose calculation algorithms were identified as inadequate, and Monte Carlo was validated for accurate peripheral dose prediction. The Monte Carlo-calculated VMAT lung SABR plans show a significant reduction in peripheral dose for 10X-FFF plans compared with the standard 6X plans, while no significant reduction was seen compared with 10X. This reduction, combined with shorter treatment time, makes 10X-FFF beams the optimal choice for superior VMAT lung SABR treatments.

  2. Wind-tunnel/flight correlation study of aerodynamic characteristics of a large flexible supersonic cruise airplane (XB-70-1). 3: A comparison between characteristics predicted from wind-tunnel measurements and those measured in flight

    NASA Technical Reports Server (NTRS)

    Arnaiz, H. H.; Peterson, J. B., Jr.; Daugherty, J. C.

    1980-01-01

    A program was undertaken by NASA to evaluate the accuracy of a method for predicting the aerodynamic characteristics of large supersonic cruise airplanes. This program compared predicted and flight-measured lift, drag, angle of attack, and control surface deflection for the XB-70-1 airplane for 14 flight conditions with a Mach number range from 0.76 to 2.56. The predictions were derived from the wind-tunnel test data of a 0.03-scale model of the XB-70-1 airplane fabricated to represent the aeroelastically deformed shape at a 2.5 Mach number cruise condition. Corrections for shape variations at the other Mach numbers were included in the prediction. For most cases, differences between predicted and measured values were within the accuracy of the comparison. However, there were significant differences at transonic Mach numbers. At a Mach number of 1.06 differences were as large as 27 percent in the drag coefficients and 20 deg in the elevator deflections. A brief analysis indicated that a significant part of the difference between drag coefficients was due to the incorrect prediction of the control surface deflection required to trim the airplane.

  3. Synthesis and characterizations of water-based ferrofluids of substituted ferrites [Fe1-xBxFe2O4, B = Mn, Co (x = 0-1)] for biomedical applications

    NASA Astrophysics Data System (ADS)

    Giri, Jyotsnendu; Pradhan, Pallab; Somani, Vaibhav; Chelawat, Hitesh; Chhatre, Shreerang; Banerjee, Rinti; Bahadur, Dhirendra

    Nanomagnetic particles have great potential in biomedical applications like MRI contrast enhancement, magnetic separation, targeted delivery and hyperthermia. In this paper, we have explored the possibility of biomedical applications of [Fe1-xBxFe2O4, B = Mn, Co] ferrites. Superparamagnetic particles of substituted ferrites [Fe1-xBxFe2O4, B = Mn, Co (x = 0-1)] and their fatty acid coated water-based ferrofluids have been successfully prepared by a co-precipitation technique using NH4OH/TMAH (tetramethylammonium hydroxide) as base. An in vitro cytocompatibility study of the different magnetic fluids was done using HeLa (human cervical carcinoma) cell lines. Co2+-substituted ferrite systems (e.g. CoFe2O4) are more toxic than Mn2+-substituted ferrite systems (e.g. MnFe2O4, Fe0.6Mn0.4Fe2O4). The latter are as cytocompatible as Fe3O4. Thus, Fe1-xMnxFe2O4 could be useful in biomedical applications like MRI contrast agents and hyperthermia treatment of cancer.

  4. Sol-gel preparation of boron-containing cordierite Mg{sub 2}(Al{sub 4-x}B{sub x})Si{sub 5}O{sub 18} and its crystallization

    SciTech Connect

    Hamzawy, Esmat M.A. E-mail: ehamzawy@lycos.com; Ali, Ashraf F.

    2006-12-15

    Five cordierite-based powders were investigated regarding their thermal and crystallization behaviors. The powders were obtained from amorphous gels having nominal compositions of 2Mg : xAl : (4 - x)B : 5Si, where x = 4 down to 0. Thermal gravimetric analysis of the dry gels showed loss of absorbed water and decomposition of organic ligands, in addition to network condensation. Gradual substitution of B for Al in the dried gel powders produced a new band in their infrared spectra corresponding to triangular BO{sub 3}, whereas the bands corresponding to Al vanished. The substitution also had a noticeable effect on the crystallization trends and on the type and stability of cordierite. Cordierite crystallized in samples with B/Al ratios up to 1, while protoenstatite predominated in samples with higher B/Al ratios. In addition, some silica minerals, with a little amorphous phase, were formed. Incorporation of boron and increase in temperature enhanced the transformation of {gamma} cordierite to its {alpha} form.

  5. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  6. National dosimetric audit network finds discrepancies in AAA lung inhomogeneity corrections.

    PubMed

    Dunn, Leon; Lehmann, Joerg; Lye, Jessica; Kenny, John; Kron, Tomas; Alves, Andrew; Cole, Andrew; Zifodya, Jackson; Williams, Ivan

    2015-07-01

    This work presents the Australian Clinical Dosimetry Service's (ACDS) findings from an investigation of systematic discrepancies between treatment planning system (TPS) calculated and measured audit doses. Specifically, a comparison between the Anisotropic Analytic Algorithm (AAA) and other common dose-calculation algorithms in regions downstream (≥2 cm) from low-density material in anthropomorphic and slab phantom geometries is presented. Two measurement setups involving rectilinear slab phantoms (ACDS Level II audit) and anthropomorphic geometries (ACDS Level III audit) were used in conjunction with ion chamber (planar 2D array and Farmer-type) measurements. Measured doses were compared to calculated doses for a variety of cases, with and without the presence of inhomogeneities and beam modifiers, in 71 audits. Results demonstrate a systematic AAA underdose, with an average discrepancy of 2.9 ± 1.2%, in regions distal to lung-tissue interfaces when lateral beams are used with anthropomorphic phantoms. This systematic discrepancy was found in all Level III audits of facilities using the AAA algorithm. The discrepancy is not seen when identical measurements are compared for other common dose-calculation algorithms (average discrepancy -0.4 ± 1.7%), including the Acuros XB algorithm also available with the Eclipse TPS. For slab phantom geometries (Level II audits) with similar measurement points downstream from inhomogeneities, the discrepancy is also not seen. PMID:25921329

  7. A theoretical investigation of mixing thermodynamics, age-hardening potential, and electronic structure of ternary M(1)1-xM(2)xB2 alloys with AlB2 type structure

    NASA Astrophysics Data System (ADS)

    Alling, B.; Högberg, H.; Armiento, R.; Rosen, J.; Hultman, L.

    2015-05-01

    Transition metal diborides are ceramic materials with potential applications as hard protective thin films and electrical contact materials. We investigate the possibility to obtain age hardening through isostructural clustering, including spinodal decomposition, or ordering-induced precipitation in ternary diboride alloys. By means of first-principles mixing thermodynamics calculations, 45 ternary M(1)1-xM(2)xB2 alloys comprising M(i)B2 (M(i) = Mg, Al, Sc, Y, Ti, Zr, Hf, V, Nb, Ta) with AlB2 type structure are studied. In particular, Al1-xTixB2 is found to be of interest for coherent isostructural decomposition, with a strong driving force for phase separation while having almost concentration-independent a and c lattice parameters. The results are explained by revealing the nature of the electronic structure in these alloys, and in particular, the origin of the pseudogap at EF in TiB2, ZrB2, and HfB2.

  8. Effects of the substitution of P2O5 by B2O3 on the structure and dielectric properties in (90-x) P2O5-xB2O3-10Fe2O3 glasses.

    PubMed

    Sdiri, N; Elhouichet, H; Dhaou, H; Mokhtar, F

    2014-01-01

    Glass systems 90%[xB2O3 (1-x)P2O5] 10%Fe2O3, where x = 0 mol%, 5 mol%, 10 mol%, 15 mol% and 20 mol%, were prepared via a melt quenching technique. The structure of the glasses was investigated at room temperature by Raman and EPR spectroscopy. Raman studies were performed on these glasses to examine the distribution of the different borate and phosphate structural groups. We noted an increase in the coordination number of the boron atoms from 3 to 4, i.e., the conversion of BO3 triangular structural units into BO4 tetrahedra. The samples were also investigated by means of electron paramagnetic resonance (EPR). The results obtained from the geff = 4.28 EPR line are typical of iron (III) occupying substitutional sites. Moreover, the dielectric quantities, such as ε'(ω), ε″(ω), the imaginary part of the electrical modulus M*(ω) and the loss tan δ, show in their variation with frequency at room temperature a decrease in relaxation intensity with an increase in the concentration of B2O3. In the present work, we have found a weak extinction index for our new glass.

  9. A theoretical investigation of mixing thermodynamics, age-hardening potential, and electronic structure of ternary M(1)1-xM(2)xB2 alloys with AlB2 type structure

    PubMed Central

    Alling, B.; Högberg, H.; Armiento, R.; Rosen, J.; Hultman, L.

    2015-01-01

    Transition metal diborides are ceramic materials with potential applications as hard protective thin films and electrical contact materials. We investigate the possibility to obtain age hardening through isostructural clustering, including spinodal decomposition, or ordering-induced precipitation in ternary diboride alloys. By means of first-principles mixing thermodynamics calculations, 45 ternary M(1)1-xM(2)xB2 alloys comprising M(i)B2 (M(i) = Mg, Al, Sc, Y, Ti, Zr, Hf, V, Nb, Ta) with AlB2 type structure are studied. In particular, Al1-xTixB2 is found to be of interest for coherent isostructural decomposition, with a strong driving force for phase separation while having almost concentration-independent a and c lattice parameters. The results are explained by revealing the nature of the electronic structure in these alloys, and in particular, the origin of the pseudogap at EF in TiB2, ZrB2, and HfB2. PMID:25970763

  10. Observation of e+e- → π+π-π(0)(χbJ) and Search for X(b) → ωϒ(1S) at sqrt[s] = 10.867 GeV.

    PubMed

    He, X H; Shen, C P; Yuan, C Z; Ban, Y; Abdesselam, A; Adachi, I; Aihara, H; Asner, D M; Aulchenko, V; Aushev, T; Ayad, R; Bahinipati, S; Bakich, A M; Bansal, V; Bhuyan, B; Bondar, A; Bonvicini, G; Bozek, A; Bračko, M; Browder, T E; Cervenkov, D; Chang, P; Chekelian, V; Chen, A; Cheon, B G; Chilikin, K; Chistov, R; Cho, K; Chobanova, V; Choi, S-K; Choi, Y; Cinabro, D; Dalseno, J; Danilov, M; Doležal, Z; Drásal, Z; Drutskoy, A; Eidelman, S; Farhat, H; Fast, J E; Ferber, T; Gaur, V; Gabyshev, N; Ganguly, S; Garmash, A; Gillard, R; Glattauer, R; Goh, Y M; Grzymkowska, O; Haba, J; Hayasaka, K; Hayashii, H; Hou, W-S; Iijima, T; Ishikawa, A; Itoh, R; Iwasaki, Y; Jaegle, I; Joo, K K; Julius, T; Kato, E; Kawasaki, T; Kim, D Y; Kim, M J; Kim, Y J; Kinoshita, K; Ko, B R; Kodyš, P; Korpar, S; Križan, P; Krokovny, P; Kumita, T; Kuzmin, A; Kwon, Y-J; Lange, J S; Li, Y; Libby, J; Liventsev, D; Matvienko, D; Miyabayashi, K; Miyata, H; Mizuk, R; Mohanty, G B; Moll, A; Mussa, R; Nakano, E; Nakao, M; Nakazawa, H; Nanut, T; Natkaniec, Z; Nedelkovska, E; Nisar, N K; Nishida, S; Ogawa, S; Okuno, S; Pakhlov, P; Pakhlova, G; Park, H; Pedlar, T K; Pestotnik, R; Petrič, M; Piilonen, L E; Ritter, M; Rostomyan, A; Sakai, Y; Sandilya, S; Santelj, L; Sanuki, T; Sato, Y; Savinov, V; Schneider, O; Schnell, G; Schwanda, C; Semmler, D; Senyo, K; Sevior, M E; Shebalin, V; Shibata, T-A; Shiu, J-G; Shwartz, B; Sibidanov, A; Simon, F; Sohn, Y-S; Sokolov, A; Solovieva, E; Starič, M; Steder, M; Sumisawa, K; Sumiyoshi, T; Tamponi, U; Tanida, K; Tatishvili, G; Teramoto, Y; Thorne, F; Trabelsi, K; Uchida, M; Uehara, S; Uglov, T; Unno, Y; Uno, S; Urquijo, P; Vahsen, S E; Van Hulse, C; Vanhoefer, P; Varner, G; Vinokurova, A; Vorobyev, V; Wagner, M N; Wang, C H; Wang, M-Z; Wang, P; Wang, X L; Watanabe, M; Watanabe, Y; Wehle, S; Williams, K M; Won, E; Yamaoka, J; Yashchenko, S; Yook, Y; Yusa, Y; Zhang, Z P; Zhilich, V; Zhulanov, V; Zupanc, A

    2014-10-01

    The e(+)e(-) → π(+)π(-)π(0)χ(bJ) (J = 0,1,2) processes are studied using a 118 fb(-1) data sample acquired with the Belle detector at a center-of-mass energy of 10.867 GeV. Unambiguous π(+)π(-)π(0)χ(bJ) (J = 1,2), ωχ(b1) signals are observed, and indication for ωχ(b2) is seen, both for the first time, and the corresponding cross section measurements are presented. No significant π(+)π(-)π(0)χ(b0) or ωχ(b0) signals are observed, and 90% confidence level upper limits on the cross sections for these two processes are obtained. In the π(+)π(-)π(0) invariant mass spectrum, significant non-ω signals are also observed. We search for the X(3872)-like state (named X(b)) decaying into ωϒ(1S); no significant signal is observed with a mass between 10.55 and 10.65 GeV/c(2). PMID:25325633

  11. A theoretical investigation of mixing thermodynamics, age-hardening potential, and electronic structure of ternary M(1)1-xM(2)xB2 alloys with AlB2 type structure.

    PubMed

    Alling, B; Högberg, H; Armiento, R; Rosen, J; Hultman, L

    2015-05-13

    Transition metal diborides are ceramic materials with potential applications as hard protective thin films and electrical contact materials. We investigate the possibility to obtain age hardening through isostructural clustering, including spinodal decomposition, or ordering-induced precipitation in ternary diboride alloys. By means of first-principles mixing thermodynamics calculations, 45 ternary M(1)1-xM(2)xB2 alloys comprising M(i)B2 (M(i) = Mg, Al, Sc, Y, Ti, Zr, Hf, V, Nb, Ta) with AlB2 type structure are studied. In particular, Al1-xTixB2 is found to be of interest for coherent isostructural decomposition with a strong driving force for phase separation, while having almost concentration independent a and c lattice parameters. The results are explained by revealing the nature of the electronic structure in these alloys, and in particular, the origin of the pseudogap at EF in TiB2, ZrB2, and HfB2.

  12. Dosimetric Verification around High-density Materials for External Beam Radiotherapy.

    PubMed

    Sasaki, Makoto; Nakata, Manabu; Nakamura, Mitsuhiro; Ishihara, Yoshitomo; Fujimoto, Takahiro; Tsuruta, Yusuke; Yano, Shinsuke; Higashimura, Kyouji

    2016-09-01

    It is generally known that the dose distribution around high-density materials is not calculated accurately by commercially available radiation treatment planning systems (RTPS). Recently, Acuros XB (AXB) has become clinically available as a dose calculation algorithm. AXB is based on the linear Boltzmann transport equation - the governing equation that describes the distribution of radiation particles resulting from their interactions with matter. The purpose of this study was to evaluate the dose calculation accuracy of AXB around high-density materials for three X-ray energies against values measured with EBT3 film, and to compare AXB with other dose calculation algorithms in the RTPS (AAA, XVMC) and with Monte Carlo (MC). First, two different metals, titanium and stainless steel, were inserted at the center of a water-equivalent phantom, and the depth dose was measured with EBT3. Next, after a phantom reproducing the measurement geometry was virtually created in the RTPS, dose distributions were calculated with three commercially available algorithms (AXB, AAA, and XVMC) and MC. The calculated doses were then compared with the measured ones. Compared with the other algorithms, the dose calculation accuracy of AXB at the exit side of high-density materials was found to be comparable to that of MC and to the values measured with EBT3. However, note that AXB underestimated the dose by up to approximately 30% at the plane of incidence because it cannot exactly estimate the impact of backscatter. PMID:27647596

  13. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
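
    The following toy sketch conveys the flavor of such a "function gas": a fixed-size ensemble in which randomly chosen pairs interact by composition and the product replaces a random member. It is a loose illustration in Python, not Fontana's lambda-calculus implementation; the primitive functions and the cap on chain length are arbitrary choices.

        # Toy "function gas": functions are chains of primitives; an interaction
        # composes two chains and the product replaces a random ensemble member.
        import random

        PRIMS = [lambda x: x + 1, lambda x: 2 * x, lambda x: x % 7, lambda x: x * x]

        def apply_chain(chain, x):
            for i in chain:
                x = PRIMS[i](x) % 10_000          # keep values bounded
            return x

        def compose(f, g, max_len=32):            # f after g, truncated
            return (g + f)[:max_len]

        gas = [(random.randrange(len(PRIMS)),) for _ in range(50)]
        for _ in range(10_000):
            f, g = random.choice(gas), random.choice(gas)
            gas[random.randrange(len(gas))] = compose(f, g)

        # crude fingerprint: number of distinct behaviors on small inputs
        print(len({tuple(apply_chain(h, x) for x in range(4)) for h in gas}))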

  14. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.
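
    The core idea of parameter continuation can be sketched briefly: step a parameter through a range and warm-start Newton's method at each step from the previous solution, tracing out a solution branch. The scalar example below is illustrative only and does not use the LOCA API.

        # Natural-parameter continuation for f(x, lam) = x^3 + x - lam = 0.
        import numpy as np

        def f(x, lam):
            return x**3 + x - lam

        def df_dx(x, lam):
            return 3 * x**2 + 1

        def newton(x, lam, tol=1e-12, max_iter=50):
            for _ in range(max_iter):
                step = f(x, lam) / df_dx(x, lam)
                x -= step
                if abs(step) < tol:
                    return x
            raise RuntimeError("Newton failed to converge")

        branch, x = [], 0.0                    # known solution at lam = 0
        for lam in np.linspace(0.0, 5.0, 26):
            x = newton(x, lam)                 # previous point seeds the next solve
            branch.append((lam, x))
        print(branch[-1])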

  15. A Review on the Use of Grid-Based Boltzmann Equation Solvers for Dose Calculation in External Photon Beam Treatment Planning

    PubMed Central

    Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.

    2013-01-01

    Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is their limited ability to account for the effect of electron transport. This induces errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential for producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have shown that D-LBTE solvers are able to produce dose calculation accuracy comparable to that of Monte Carlo methods at speeds reasonable for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. It summarizes and discusses dosimetric validations of D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances, as well as the clinical impact on various diseases of converting dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294

  16. Dosimetric comparison of a 6-MV flattening-filter and a flattening-filter-free beam for lung stereotactic ablative radiotherapy treatment

    NASA Astrophysics Data System (ADS)

    Kim, Yon-Lae; Chung, Jin-Beom; Kim, Jae-Sung; Lee, Jeong-Woo; Kim, Jin-Young; Kang, Sang-Won; Suh, Tae-Suk

    2015-11-01

    The purpose of this study was to test the feasibility of clinical usage of a flattening-filter-free (FFF) beam for treatment with lung stereotactic ablative radiotherapy (SABR). Ten patients were treated with SABR using a 6-MV FFF beam for this study. All plans using volumetric modulated arc therapy (VMAT) were optimized in the Eclipse treatment planning system (TPS) using the Acuros XB (AXB) dose calculation algorithm and were delivered using a Varian TrueBeam ™ linear accelerator equipped with a high-definition (HD) multi-leaf collimator. The prescription dose was 48 Gy in 4 fractions. For comparison against a plan using a conventional 6-MV flattening-filter (FF) beam, each SABR plan was recalculated with the same beam settings as the 6-MV FFF plan. All dose distributions were calculated using AXB (version 11) and a 2.5-mm isotropic dose grid. The cumulative dose-volume histograms (DVHs) for the planning target volume (PTV) and all organs at risk (OARs) were analyzed. Technical parameters, such as total monitor units (MUs) and the delivery time, were also recorded and assessed. All plans for target volumes met the planning objectives for the PTV (i.e., V95% > 95%) and the maximum dose (i.e., Dmax < 110%), revealing adequate target coverage for both the 6-MV FF and FFF beams. Differences in DVHs for target volumes (PTV and clinical target volume (CTV)) and OARs between the lung SABR plans with the two beams were small, but the FFF plans showed a marked reduction (52.97%) in treatment delivery time. The SABR plan with an FFF beam required a larger number of MUs than the plan with the FF beam, with a mean difference of 4.65%. This study demonstrated that the use of the FFF beam for lung SABR plans provides better treatment efficiency relative to the 6-MV FF beam. This strategy should be particularly beneficial for high dose conformity to the lung and decreased intra-fraction movement because of the shorter treatment delivery time.

  17. SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE

    SciTech Connect

    Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y

    2015-06-15

    Purpose: Stereotactic body radiotherapy (SBRT) combining transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, anisotropic analytical algorithm (AAA), and Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using the AAA and AXB algorithms, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc-MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom that included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. Furthermore, compared with the MC calculations, the AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithms underestimated the maximum doses in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can yield an approximately 10% increase in tumor dose without an increase in dose to normal tissue. Conclusion: The MC method demonstrated a larger increase in the dose in the Lipiodol region than the AAA and AXB algorithms. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.

  18. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  19. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  20. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  1. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
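
    For orientation, a standard fourth-order central-difference first-derivative stencil (a textbook example of the kind of high-order approximation involved, not a scheme taken from the paper) reads, in LaTeX form:

        f'(x) \approx \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} + O(h^4)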

  2. Comparison of Dosimetric Performance among Commercial Quality Assurance Systems for Verifying Pretreatment Plans of Stereotactic Body Radiotherapy Using Flattening-Filter-Free Beams

    PubMed Central

    2016-01-01

    The purpose of this study was to compare the performance of different commercial quality assurance (QA) systems for pretreatment verification plans of stereotactic body radiotherapy (SBRT) with the volumetric arc therapy (VMAT) technique using a flattening-filter-free beam. Verification plans for 20 pretreatment cancer patients (seven lung, six spine, and seven prostate cancers) were tested using three QA systems (EBT3 film, I'mRT MatriXX array, and MapCHECK). All the SBRT-VMAT plans were optimized in the Eclipse (version 11.0.34) treatment planning system (TPS) using the Acuros XB dose calculation algorithm and were delivered on the Varian TrueBeam® accelerator equipped with a high-definition multileaf collimator. Gamma agreement was evaluated with criteria of 2% dose difference and 2 mm distance to agreement (2%/2 mm) or 3%/3 mm. The highest passing rate (99.1% for 3%/3 mm) was observed with the MapCHECK system, while the lowest passing rate was obtained with the film. The pretreatment verification results depend on the QA system, treatment site, and delivery beam energy. However, the delivery QA results for all QA systems based on the TPS calculation showed a good agreement of more than 90% for both criteria. It is concluded that the three 2D QA systems have sufficient potential for pretreatment verification of SBRT-VMAT plans. PMID:27709851

  3. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  4. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  5. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
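
    The basic veto step the paper analyzes can be sketched compactly: trial scales are generated from an overestimate g(t) ≥ f(t) with an invertible primitive G, and each trial is accepted with probability f/g. The toy rates below are placeholders chosen so that G is invertible in closed form.

        # Basic Sudakov veto algorithm with overestimate g >= f.
        import math, random

        def next_scale(t_start, t_cut, f, g, G, G_inv):
            """Sample the next emission scale below t_start, or None below t_cut."""
            t = t_start
            while True:
                # solve exp(-(G(t) - G(t'))) = r for the trial scale t'
                t = G_inv(G(t) + math.log(random.random()))
                if t < t_cut:
                    return None                      # no emission above the cutoff
                if random.random() < f(t) / g(t):    # veto with probability 1 - f/g
                    return t

        f = lambda t: 1.0 / t                        # true emission rate (toy)
        g = lambda t: 2.0 / t                        # overestimate of f
        G = lambda t: 2.0 * math.log(t)              # primitive of g
        G_inv = lambda y: math.exp(y / 2.0)

        print(next_scale(100.0, 1.0, f, g, G, G_inv))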

  6. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
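
    A stripped-down sketch of the shift-and-mask search described above: scan over shifts and mask widths until the static key set maps injectively, which then supports a constant-time, search-free membership test. The key values and search bounds are hypothetical.

        # Search for a (shift, mask) pair under which all keys map uniquely.
        def find_shift_mask(keys, max_shift=32, max_bits=16):
            n = len(keys)
            for bits in range(n.bit_length(), max_bits + 1):   # smallest table first
                mask = (1 << bits) - 1
                for shift in range(max_shift + 1):
                    if len({(k >> shift) & mask for k in keys}) == n:
                        return shift, mask                     # injective mapping
            return None

        keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3]
        shift, mask = find_shift_mask(keys)            # assumes a solution exists
        table = {(k >> shift) & mask: k for k in keys}  # direct lookup, no probing
        print(all(table[(k >> shift) & mask] == k for k in keys))   # True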

  7. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  8. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. In this way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
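
    For context, the elementary three-qubit compression step underlying the Boykin et al. scheme boosts the bias (polarization) of the target qubit according to the standard result, in LaTeX form,

        \varepsilon' = \frac{3\varepsilon - \varepsilon^{3}}{2} \approx \tfrac{3}{2}\,\varepsilon \qquad (\varepsilon \ll 1),

    and it is this step that repeated application, interleaved with bath contacts, amplifies.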

  9. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, the firefly algorithm (FA), mimics the social behavior of fireflies based on their flashing and attraction characteristics. In the present study, we introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
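
    A minimal sketch of the chaos-tuned variant: the standard firefly update with the attractiveness coefficient driven by a logistic map. The objective, population size and coefficients are arbitrary placeholders, not the benchmark settings of the paper.

        # Firefly algorithm with a logistic chaotic map tuning attractiveness.
        import numpy as np

        def sphere(x):                              # toy objective to minimize
            return float(np.sum(x**2))

        rng = np.random.default_rng(2)
        n, dim, iters, alpha, gamma = 25, 5, 200, 0.2, 1.0
        X = rng.uniform(-5, 5, (n, dim))
        beta = 0.7                                   # chaotic state in (0, 1)

        for _ in range(iters):
            beta = 4.0 * beta * (1.0 - beta)         # logistic map drives beta0
            F = np.array([sphere(x) for x in X])     # brightness = inverse cost
            for i in range(n):
                for j in range(n):
                    if F[j] < F[i]:                  # j is brighter (lower cost)
                        r2 = float(np.sum((X[i] - X[j]) ** 2))
                        X[i] += (beta * np.exp(-gamma * r2) * (X[j] - X[i])
                                 + alpha * (rng.random(dim) - 0.5))

        print(min(sphere(x) for x in X))             # approaches 0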

  10. Network-Control Algorithm

    NASA Technical Reports Server (NTRS)

    Chan, Hak-Wai; Yan, Tsun-Yee

    1989-01-01

    Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.

  11. New stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo

    1999-05-01

    This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.

  12. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  13. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  14. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  15. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.

  16. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but with brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
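
    One plausible reading of the mixing formulation described above, with C the ice concentration and ε the emissivities (a sketch of the logic in LaTeX form, not the exact operational equations), is

        \epsilon_{\mathrm{eff}} = C\,\epsilon_{\mathrm{ice}} + (1 - C)\,\epsilon_{\mathrm{water}}, \qquad
        T_s = \frac{T_B(6\,\mathrm{GHz})}{\epsilon_{\mathrm{eff}}}, \qquad
        \epsilon(\nu) = \frac{T_B(\nu)}{T_s}, \quad \nu = 18, 37\ \mathrm{GHz},

    after which the Bootstrap clustering is applied to the emissivities ε(ν) rather than to the brightness temperatures T_B(ν).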

  17. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the estimated spectrum is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
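
    Maximum entropy spectral estimation for a Gaussian process coincides with fitting an autoregressive (AR) model; the sketch below solves the Yule-Walker equations with scipy and evaluates the resulting all-pole spectrum. This is a modern illustrative stand-in, not the original FORTRAN 77 code.

        # Maximum-entropy (AR/Yule-Walker) power spectrum estimate.
        import numpy as np
        from scipy.linalg import solve_toeplitz

        def mem_spectrum(x, order, n_freq=512):
            x = x - x.mean()
            r = np.correlate(x, x, "full")[x.size - 1:] / x.size   # autocorrelation
            a = solve_toeplitz(r[:order], r[1:order + 1])          # AR coefficients
            sigma2 = r[0] - a @ r[1:order + 1]                     # prediction error power
            w = np.linspace(0, np.pi, n_freq)                      # rad/sample
            k = np.arange(1, order + 1)
            denom = np.abs(1 - np.exp(-1j * np.outer(w, k)) @ a) ** 2
            return w, sigma2 / denom

        t = np.arange(1024)
        x = (np.sin(0.60 * t) + 0.8 * np.sin(0.62 * t)
             + 0.1 * np.random.default_rng(3).standard_normal(t.size))
        w, P = mem_spectrum(x, order=32)
        print(w[np.argmax(P)])       # near 0.6 rad/sample; the close pair is resolved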

  18. Optical rate sensor algorithms

    NASA Astrophysics Data System (ADS)

    Uhde-Lacovara, Jo A.

    1989-12-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  19. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  20. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
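
    For reference, the classic sequential 1/2-approximation baseline is the greedy algorithm: scan edges in nonincreasing weight order and keep an edge whenever both endpoints are still free. The sketch below shows that baseline, not the paper's new algorithm.

        # Greedy 1/2-approximation for maximum weight matching.
        def greedy_matching(edges):
            """edges: iterable of (weight, u, v); returns the matched edges."""
            matched, used = [], set()
            for w, u, v in sorted(edges, reverse=True):
                if u not in used and v not in used:
                    matched.append((w, u, v))
                    used.update((u, v))
            return matched

        edges = [(5, "a", "b"), (4, "b", "c"), (3, "c", "d"), (6, "a", "d")]
        print(greedy_matching(edges))     # [(6, 'a', 'd'), (4, 'b', 'c')]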

  1. Synthesis, crystal structure investigation and magnetism of the complex metal-rich boride series Crx(Rh1-yRuy)7-xB3 (x=0.88-1; y=0-1) with Th7Fe3-type structure

    NASA Astrophysics Data System (ADS)

    Misse, Patrick R. N.; Mbarki, Mohammed; Fokwa, Boniface P. T.

    2012-08-01

    Powder samples and single crystals of the new complex boride series Crx(Rh1-yRuy)7-xB3 (x=0.88-1; y=0-1) have been synthesized by arc-melting the elements under purified argon atmosphere on a water-cooled copper crucible. The products, which have metallic luster, were structurally characterized by single-crystal and powder X-ray diffraction as well as EDX measurements. Within the whole solid solution range the hexagonal Th7Fe3 structure type (space group P63mc, no. 186, Z=2) was identified. Single-crystal structure refinement results indicate the presence of chromium at two sites (6c and 2b) of the available three metal Wyckoff sites, with a pronounced preference for the 6c site. An unexpected Rh/Ru site preference was found in the Ru-rich region only, leading to two different magnetic behaviors in the solid solution: The Rh-rich region shows a temperature-independent (Pauli) paramagnetism whereas an additional temperature-dependent paramagnetic component is found in the Ru-rich region.

  2. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  3. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
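
    Schematically, the solid-body formulation augments the potential energy with the contact constraint through a Lagrange multiplier (a sketch of the stated approach in LaTeX form, with g(u) = 0 the contact condition):

        \Pi(u, \lambda) = U(u) + \lambda\, g(u), \qquad
        \frac{\partial \Pi}{\partial u} = 0, \qquad
        \frac{\partial \Pi}{\partial \lambda} = g(u) = 0,

    and at the stationary point the multiplier λ is identified with the contact load resultant for the prescribed deflection.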

  4. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  5. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and the engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
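
    The linearize-optimize-trim loop described above is, in modern terms, a sequential linear programming scheme. The sketch below illustrates that loop with scipy; the toy surrogate model, trust-region size and tolerances are hypothetical, not the PSC propulsion models.

        # Sequential linear programming: linearize, solve an LP over small
        # trims about the operating point, apply the trims, repeat.
        import numpy as np
        from scipy.optimize import linprog

        def performance(u):                        # toy surrogate to minimize
            return (u[0] - 1.2) ** 2 + (u[1] + 0.4) ** 2

        def sensitivities(u, h=1e-6):              # finite-difference gradient
            g = np.zeros_like(u)
            for i in range(u.size):
                d = np.zeros_like(u); d[i] = h
                g[i] = (performance(u + d) - performance(u - d)) / (2 * h)
            return g

        u = np.zeros(2)                            # current operating point
        for _ in range(50):
            g = sensitivities(u)
            res = linprog(c=g, bounds=[(-0.1, 0.1)] * u.size, method="highs")
            trim = res.x                           # optimal trims in the trust region
            if abs(g @ trim) < 1e-10:              # no further predicted improvement
                break
            u = u + trim                           # apply trims and re-linearize
        print(u)                                   # approaches (1.2, -0.4)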

  6. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so-called Minerva Action (Xmath) and the Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms for a wide range of undergraduate mathematical topics embedded…

  7. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequences of coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, composed of a sequence of generic elementary "gates".

  8. Robotic Follow Algorithm

    SciTech Connect

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as with other tracking methods such as radio frequency tags.

  9. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  10. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  11. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
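
    A minimal sketch of how such benchmark series can be constructed (the rates and break magnitudes below are arbitrary choices, not the HOME benchmark values): a homogeneous monthly series is generated, then break-type inhomogeneities are inserted at Poisson-distributed times with normally distributed sizes.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50 * 12                                # 50 years of monthly values

        # Homogeneous "truth": seasonal cycle plus weather noise
        t = np.arange(n)
        truth = 10 + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n)

        # Insert additive breaks (temperature-like); a precipitation benchmark
        # would use multiplicative factors instead.
        n_breaks = rng.poisson(3)                  # arbitrary mean break count
        series = truth.copy()
        for bt in sorted(rng.integers(1, n, n_breaks)):
            series[bt:] += rng.normal(0, 0.8)      # normally distributed step size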

  12. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
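
    A toy sketch of the hybrid scheme: a plain GA whose offspring are each refined by a short hill-climbing local search before entering the population. Everything here (operators, rates, the quadratic test function) is illustrative, not the geometric model matching application from the presentation.

        import random

        def hill_climb(x, fitness, step=0.05, tries=10):
            """Cheap local search: keep random perturbations that improve fitness."""
            for _ in range(tries):
                y = [v + random.gauss(0, step) for v in x]
                if fitness(y) < fitness(x):
                    x = y
            return x

        def hybrid_ga(fitness, dim, pop_size=30, gens=100):
            pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness)
                parents = pop[:pop_size // 2]                 # truncation selection
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = random.sample(parents, 2)
                    child = [u if random.random() < 0.5 else v for u, v in zip(a, b)]
                    child = [u + random.gauss(0, 0.1) for u in child]   # mutation
                    children.append(hill_climb(child, fitness))          # hybrid step
                pop = parents + children
            return min(pop, key=fitness)

        best = hybrid_ga(lambda v: sum(u * u for u in v), dim=5)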

  13. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
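
    The bang-off-bang parameterization is cheap to search, which is what makes the look-up-table approach feasible. The sketch below is a 1-D stand-in with invented numbers (acceleration limit, time of closest approach, keep-out radius), not the flight ETP: it grid-searches burn and coast durations for the cheapest maneuver achieving a safe miss distance.

        import numpy as np

        def lateral_offset(t, t_burn, t_coast, a=0.1):
            """Offset under bang-off-bang: +a for t_burn, coast, then -a for t_burn."""
            t1, t2, t3 = t_burn, t_burn + t_coast, 2 * t_burn + t_coast
            if t <= t1:
                return 0.5 * a * t**2
            x1, v1 = 0.5 * a * t1**2, a * t1
            if t <= t2:
                return x1 + v1 * (t - t1)
            dt = min(t, t3) - t2
            return x1 + v1 * t_coast + v1 * dt - 0.5 * a * dt**2

        t_ca, r_safe = 60.0, 1.5                   # hypothetical scenario numbers
        feasible = [(tb, tc)
                    for tb in np.arange(0.5, 10.0, 0.5)
                    for tc in np.arange(0.0, 40.0, 1.0)
                    if 2 * tb + tc <= t_ca
                    and lateral_offset(t_ca, tb, tc) >= r_safe]
        best = min(feasible, key=lambda p: p[0])   # fuel grows with total burn time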

  14. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems close to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  15. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  16. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
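
    For reference, here is a minimal linear (counter-based) barrier of the kind compared in the paper, written in the sense-reversing style so it can be reused across repeated phases; logarithmic tree barriers replace the single shared counter with a tree of small counters. This is a generic textbook construction, not the Flex/32 code.

        import threading

        class SenseReversingBarrier:
            """Linear barrier: one shared counter; the last arriver flips the sense."""
            def __init__(self, n):
                self.n, self.count, self.sense = n, n, False
                self.cond = threading.Condition()

            def wait(self):
                with self.cond:
                    my_sense = not self.sense
                    self.count -= 1
                    if self.count == 0:            # last thread releases the rest
                        self.count, self.sense = self.n, my_sense
                        self.cond.notify_all()
                    else:
                        while self.sense != my_sense:
                            self.cond.wait()

        barrier = SenseReversingBarrier(4)
        def worker():
            for _ in range(3):                     # three synchronized phases
                barrier.wait()
        threads = [threading.Thread(target=worker) for _ in range(4)]
        for th in threads: th.start()
        for th in threads: th.join()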

  17. Algorithms, games, and evolution.

    PubMed

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-07-22

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: "What algorithm could possibly achieve all this in a mere three and a half billion years?" In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution.
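
    A minimal sketch of the multiplicative weights update the authors connect to selection (the payoff matrix and learning rate are illustrative): each action's weight is scaled by (1 + eta) raised to its expected payoff against the current mixed strategy, much as allele frequencies are reweighted by fitness.

        import numpy as np

        def mwua(payoff, n_rounds=1000, eta=0.1):
            """Multiplicative weights over the rows of a payoff matrix in [0, 1]."""
            w = np.ones(payoff.shape[0])
            for _ in range(n_rounds):
                p = w / w.sum()                  # current mixed strategy
                gains = payoff @ p               # expected payoff of each action
                w = w * (1 + eta) ** gains       # multiplicative weight update
            return w / w.sum()

        # Toy symmetric coordination game (illustrative numbers only)
        A = np.array([[1.0, 0.2],
                      [0.2, 0.8]])
        print(mwua(A))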

  18. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  19. Adaptive continuous twisting algorithm

    NASA Astrophysics Data System (ADS)

    Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid

    2016-09-01

    In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also retains its convergence properties even in the case where the upper bound of the derivative of the perturbation exists but is unknown.

  20. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  1. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  2. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  3. The Loop Algorithm

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd

    1998-03-01

    Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993); for a recent review see H.G. Evertz, cond-mat/9707221) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.

  4. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor, both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA without using knowledge of the character of the system, we do a consciously much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  5. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  7. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology based upon human visual characteristics for appreciating the image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in the ideal situation, while restoration-based methods were more accurate to the HR image in the real-world case, where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of these methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.
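
    As a pointer to how the CIEDE2000 comparison can be reproduced, the sketch below scores a restored image against its reference by the mean CIEDE2000 difference in CIELAB space, using scikit-image; the synthetic images stand in for real SR output.

        import numpy as np
        from skimage import color

        def mean_ciede2000(reference_rgb, restored_rgb):
            """Mean CIEDE2000 difference; lower means a more accurate restoration."""
            lab_ref = color.rgb2lab(reference_rgb)
            lab_res = color.rgb2lab(restored_rgb)
            return color.deltaE_ciede2000(lab_ref, lab_res).mean()

        rng = np.random.default_rng(1)
        ref = rng.random((64, 64, 3))              # stand-in reference image
        poor = np.clip(0.5 * ref + 0.25, 0, 1)     # stand-in for a poor SR result
        print(mean_ciede2000(ref, poor))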

  8. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    SciTech Connect

    Park, J; Park, H; Lee, J; Kang, S; Lee, M; Suh, T; Lee, B

    2014-06-01

    Purpose: The dosimetric effects of, and discrepancies between, rectum definition methods, together with the dose perturbation caused by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: With an inflated ERB of average diameter 4.5 cm and air volume 100 cc in place, Rwall doses were predicted by pencil-beam convolution (PBC), the anisotropic analytic algorithm (AAA), and Acuros XB (AXB) with its material assignment function. The errors in dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a featured rectal phantom allowing insertion of rolled-up gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified against reconstructed predicted rectal-wall dose maps. Dose errors and their extent at different dose levels were evaluated together with estimated rectal toxicity. Results: While AXB showed no significant difference in target dose coverage, Rwall doses were underestimated by up to 20% when optimizing on the Rwhole rather than the Rwall, over the whole dose range except the maximum dose. When dose optimization was performed for the Rwall, the Rwall doses showed errors of less than 3% between the dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% in PBC. Dose optimization for the Rwhole caused dose differences in the Rwall, especially at intermediate doses. Conclusion: Dose optimization for the Rwall can be suggested for more accurate prediction of rectal-wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  9. SU-E-J-58: Dosimetric Verification of Metal Artifact Effects: Comparison of Dose Distributions Affected by Patient Teeth and Implants

    SciTech Connect

    Lee, M; Kang, S; Lee, S; Suh, T; Lee, J; Park, J; Park, H; Lee, B

    2014-06-01

    Purpose: Implant-supported dentures seem particularly appropriate for the predicament of becoming edentulous, and cancer patients are no exception. As the number of people with dental implants increases across age groups, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of the metal (streak and dark) artifacts, and to evaluate the dosimetric effects caused by dental implants, using CT images of a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. By incorporating various clinical cases, the phantom was made to closely resemble a human, and it can be configured in two states: (i) closed mouth and (ii) opened mouth. RapidArc plans of 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions was prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA), and the progressive resolution optimizer were used in dose optimization and calculation. Results: In both the closed- and opened-mouth phantoms, because dark artifacts formed extensively around the metal implants, dose variation was relatively higher than that from streak artifacts. When the PTV was delineated on the dark regions or large streak-artifact regions, a maximum dose error of 7.8% and an average difference of 3.2% were observed. The averaged minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study showed that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and that AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity portion when compared against measurements.

  10. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  11. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  12. Two Meanings of Algorithmic Mathematics.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1984-01-01

    Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
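
    Horner's method itself makes a compact example of the "performing" versus "creating" distinction: the traditional skill is executing the recurrence by hand, while the contemporary one is recognizing and expressing it as an algorithm. A minimal version:

        def horner(coeffs, x):
            """Evaluate a polynomial; coeffs run from highest to lowest degree."""
            result = 0
            for c in coeffs:
                result = result * x + c
            return result

        # 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
        assert horner([2, -6, 2, -1], 3) == 5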

  13. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
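
    The same triangulate-then-interpolate idea is available off the shelf today; a short sketch with synthetic scattered data, where matplotlib's Triangulation plays the role of the FORTRAN program's triangle construction, with linear interpolation inside each triangle:

        import numpy as np
        import matplotlib.pyplot as plt
        import matplotlib.tri as mtri

        rng = np.random.default_rng(2)
        x, y = rng.uniform(0, 4, 200), rng.uniform(0, 4, 200)   # irregular points
        z = np.sin(x) * np.cos(y)                                # sampled values

        tri = mtri.Triangulation(x, y)     # connect points into triangles
        plt.tricontour(tri, z, levels=10)  # piecewise-linear contour lines
        plt.savefig("contours.png")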

  14. The clinical algorithm nosology: a method for comparing algorithmic guidelines.

    PubMed

    Pearson, S D; Margolis, C Z; Davis, S; Schreier, L K; Gottlieb, L K

    1992-01-01

    Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline: the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (80%), weighted kappa statistic, k = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs, and to support providers and purchasers in choosing among alternative clinical guidelines.

  15. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  16. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t subject to F_i(x) - t <= 0 for all i is examined. An active set strategy is designed that classifies the functions as active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.

  17. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  18. MLP iterative construction algorithm

    NASA Astrophysics Data System (ADS)

    Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.

    1997-04-01

    The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds Hidden Layer Nodes one at a time, separating classes on a pair-wise basis, until the data is projected into a linearly separable space by class. Then MICA trains the Output Layer Nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden-node pruning technique.

  19. Varian HDR surface applicators - commissioning and clinical implementation.

    PubMed

    Iftimia, Ileana; McKee, Andrea B; Halvorsen, Per H

    2016-01-01

    The purpose of this study was to validate the dosimetric performance of Varian surface applicators with the source vertically positioned and develop procedures for clinical implementation. The Varian surface applicators with the source vertically positioned provide a wide range of apertures making them clinically advantageous, though the steep dose gradient in the region of 3-4 mm prescription depth presents multiple challenges. The following commissioning tests were performed: 1) verification of functional integrity and physical dimensions; and 2) dosimetric measurements to validate data provided by Varian as well as data obtained using the Acuros algorithm for heterogeneity corrected dose calculation. A solid water (SW) phantom was scanned and the Acuros algorithm was used to compute the dose at 5 mm depth and at surface for all applicators. Two sets of reference dose measurements were performed, with the source positioned at (i) -10 mm and (ii) -15 mm from the center of the first nominal dwell position. Measurements were taken at 5 mm depth in a SW phantom and in air at the applicator surface. The results were then compared to the vendor's data and to the Acuros calculated dose. Relative dose measurements using Gafchromic films were taken at a depth of 4 mm in SW. Percent depth ionization (PDI) measurements using ion chamber were performed in SW. The profiles generated from film measurements and the PDI plots were compared with those computed using the Acuros algorithm and vendor's data, when available. Preliminary leakage tests were performed using optically stimulated luminescence dosimeters (OSLDs) and the results were compared with Acuros predictions. All applicators were found to be functional with physical dimensions within 1 mm of specifications. For scenario (ii) measurements taken in SW at 5 mm depth and in air at the surface of each applicator were within 10% and 4% agreement with vendor's data, respectively. Compared with Acuros predictions, these

  20. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
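
    A toy version of strict-priority selection under a single oversubscribed resource (the real goal model, with complex command sequences and multiple resources, is much richer): goals are admitted in priority order, and a goal is skipped, never pre-empting an earlier admission, when it no longer fits.

        def select_goals(goals, capacity):
            """Greedy strict-priority admission against one resource budget."""
            selected, used = [], 0.0
            for g in sorted(goals, key=lambda g: g["priority"]):   # 1 = most urgent
                if used + g["resource"] <= capacity:
                    selected.append(g["name"])
                    used += g["resource"]
            return selected

        goals = [{"name": "downlink", "priority": 1, "resource": 40.0},
                 {"name": "image-A", "priority": 2, "resource": 70.0},
                 {"name": "image-B", "priority": 3, "resource": 50.0}]
        print(select_goals(goals, capacity=100.0))   # ['downlink', 'image-B']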

  1. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
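
    A simplified stand-in for the core comparison (the real CEM also identifies and verifies sea-breeze boundaries): per-time-step disagreement between the binarized forecast and observed grids, with the optional erosion pass sketched via scipy.

        import numpy as np
        from scipy.ndimage import binary_erosion

        def contour_error(D, d, erode=False):
            """Per-time-step fraction of cells where the binary fields disagree.

            D, d : (nx, ny, nt) arrays of 0 (offshore) / 1 (onshore)
            """
            if erode:   # optional erosion, e.g. to suppress river-breeze pixels
                D = np.stack([binary_erosion(D[..., k]) for k in range(D.shape[-1])], -1)
                d = np.stack([binary_erosion(d[..., k]) for k in range(d.shape[-1])], -1)
            return (D != d).mean(axis=(0, 1))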

  2. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  3. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  4. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
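
    A single-level sketch of the wavelet fusion idea (the db2 wavelet and the merge rule are simplifications of my own; the report's algorithm operates on registered Landsat/SPOT bands): keep the low-resolution band's approximation coefficients and inject the panchromatic band's detail coefficients.

        import numpy as np
        import pywt

        def wavelet_fuse(pan, ms_band):
            """Fuse a sharp panchromatic band with one upsampled multispectral band."""
            aP, dP = pywt.dwt2(pan, "db2")       # (approx, (horiz, vert, diag))
            aM, _ = pywt.dwt2(ms_band, "db2")
            return pywt.idwt2((aM, dP), "db2")   # spectral base + spatial detail

        pan = np.random.rand(128, 128)            # stand-ins for registered imagery
        ms = np.random.rand(128, 128)
        fused = wavelet_fuse(pan, ms)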

  5. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
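
    For orientation, plain simple kriging with a dense solve looks as follows (the exponential covariance and its parameters are invented for illustration); the paper's contribution is replacing exactly this cubic-cost solve with tapered, FMM-accelerated, or iterative alternatives.

        import numpy as np

        def simple_kriging(X, y, x_new, sill=1.0, length=2.0):
            """Dense simple kriging with an exponential covariance model."""
            def cov(a, b):
                d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                return sill * np.exp(-d / length)
            K = cov(X, X) + 1e-9 * np.eye(len(X))    # small nugget for stability
            w = np.linalg.solve(K, cov(X, x_new))    # kriging weights
            return w.T @ (y - y.mean()) + y.mean()

        rng = np.random.default_rng(3)
        X = rng.random((50, 2))
        y = np.sin(3 * X[:, 0]) + X[:, 1]
        print(simple_kriging(X, y, np.array([[0.5, 0.5]])))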

  6. Audio detection algorithms

    NASA Astrophysics Data System (ADS)

    Neta, B.; Mansager, B.

    1992-08-01

    Audio information concerning targets generally includes direction, frequencies, and energy levels. One use of audio cueing is to use direction information to help determine where more sensitive visual direction and acquisition sensors should be directed. Generally, use of audio cueing will shorten times required for visual detection, although there could be circumstances where the audio information is misleading and degrades visual performance. Audio signatures can also be useful for helping classify the emanating platform, as well as to provide estimates of its velocity. The Janus combat simulation is the premier high resolution model used by the Army and other agencies to conduct research. This model has a visual detection model which essentially incorporates algorithms as described by Hartman(1985). The model in its current form does not have any sound cueing capability. This report is part of a research effort to investigate the utility of developing such a capability.

  7. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  8. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies for a set of input parameters to a one-dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point-source or distributed ejection of H2O, CLX, and NOx, besides varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  9. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA) - an O(Λ^κ (ℓ^2 + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  10. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
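
    The fixed two-bits-per-base layer underneath such schemes is easy to show; this is only the baseline packing, while DNABIT Compress's repeat-aware bit codes are what push the ratio below 2 bits/base.

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASE = {v: k for k, v in CODE.items()}

        def pack(seq):
            """Pack a DNA string into an integer, 2 bits per base."""
            bits = 0
            for b in seq:
                bits = (bits << 2) | CODE[b]
            return bits, len(seq)

        def unpack(bits, n):
            """Recover the DNA string from its packed form."""
            return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11]
                           for i in range(n))

        bits, n = pack("GATTACA")
        assert unpack(bits, n) == "GATTACA"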

  11. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  12. Algorithm Engineering - An Attempt at a Definition

    NASA Astrophysics Data System (ADS)

    Sanders, Peter

    This paper defines algorithm engineering as a general methodology for algorithmic research. The main process in this methodology is a cycle consisting of algorithm design, analysis, implementation and experimental evaluation that resembles Popper’s scientific method. Important additional issues are realistic models, algorithm libraries, benchmarks with real-world problem instances, and a strong coupling to applications. Algorithm theory with its process of subsequent modelling, design, and analysis is not a competing approach to algorithmics but an important ingredient of algorithm engineering.

  13. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
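
    The standard way to get this robustness is to work in log space (a sketch of the idea, not the CUMPOIS code itself): each Poisson term is formed as a logarithm and the sum is taken with the log-sum-exp trick, so no individual factorial or power is ever materialized.

        import math

        def poisson_cdf(k, lam):
            """P(X <= k) for X ~ Poisson(lam), computed in log space."""
            # log of term i: -lam + i*log(lam) - log(i!)
            log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                         for i in range(k + 1)]
            m = max(log_terms)                     # log-sum-exp trick
            return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))

        print(poisson_cdf(900, 1000.0))   # naive factorials would overflow here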

  14. Interpolation algorithms for machine tools

    SciTech Connect

    Burleson, R.R.

    1981-08-01

    There are three types of interpolation algorithms presently used in most numerical control systems: digital differential analyzer, pulse-rate multiplier, and binary-rate multiplier. A method for higher order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
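
    The digital differential analyzer is the simplest of the three: each clock tick adds a fixed fractional increment per axis, and the rounded accumulator value is the commanded axis position. A toy sketch for a straight cut (real controllers implement this in integer hardware with overflow-driven pulses):

        def dda_line(x_end, y_end, steps):
            """Interpolate a straight move into per-tick axis positions."""
            dx, dy = x_end / steps, y_end / steps
            x = y = 0.0
            points = []
            for _ in range(steps):
                x += dx
                y += dy
                points.append((round(x), round(y)))   # commanded position
            return points

        print(dda_line(10, 4, steps=10))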

  15. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  16. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  17. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.

  18. Panniculitides, an algorithmic approach.

    PubMed

    Zelger, B

    2013-08-01

    The issue of inflammatory diseases of subcutis and its mimicries is generally considered a difficult field of dermatopathology. Yet, in my experience, with appropriate biopsies and good clinicopathological correlation, a specific diagnosis of panniculitides can usually be made. Thereby, knowledge about some basic anatomic and pathological issues is essential. Anatomy differentiates within the panniculus between the fatty lobules separated by fibrous septa. Pathologically, inflammation of panniculus is defined and recognized by an inflammatory process which leads to tissue damage and necrosis. Several types of fat necrosis are observed: xanthomatized macrophages in lipophagic necrosis; granular fat necrosis and fat micropseudocysts in liquefactive fat necrosis; mummified adipocytes in "hyalinizing" fat necrosis with/without saponification and/or calcification; and lipomembranous membranes in membranous fat necrosis. In an algorithmic approach the recognition of an inflammatory process recognized by features as elaborated above is best followed in three steps: recognition of pattern, second of subpattern, and finally of presence and composition of inflammatory cells. Pattern differentiates a mostly septal or mostly lobular distribution at scanning magnification. In the subpattern category one looks for the presence or absence of vasculitis, and, if this is the case, the size and the nature of the involved blood vessel: arterioles and small arteries or veins; capillaries or postcapillary venules. The third step will be to identify the nature of the cells present in the inflammatory infiltrate and, finally, to look for additional histopathologic features that allow for a specific final diagnosis in the language of clinical dermatology of disease involving the subcutaneous fat.

  19. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quadmore » meshing algorithm is based upon Sandia’s patented paving algorithm and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.« less

  20. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
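
    The null hypothesis can be made concrete with a small calculation. In the sketch below, alarms are imagined to cover a fraction p of the space-time volume at random, so each of n independent events is "predicted" with probability p; the value of p is an invented illustration, not the figure used in the study:

        # Hedged sketch of the random-prediction null hypothesis: with
        # random alarms covering a fraction p of space-time, each of n
        # independent events is caught with probability p.
        from math import comb

        def tail_prob(n, k, p):
            """P(at least k of n events fall inside random alarms)."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        p = 0.45                      # hypothetical alarm coverage fraction
        print(f"P(>=8 of 10) = {tail_prob(10, 8, p):.4f}")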

  1. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  2. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GAs) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.
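
    A minimal sketch of the scheduling-with-precedence idea follows (jobs, durations, precedences, and GA settings are all invented, and a single serial resource is assumed): each chromosome assigns a priority to every job, and a schedule is decoded by always starting the highest-priority job whose predecessors are finished:

        # Toy GA for precedence-constrained sequencing on one resource.
        import random

        jobs = {"A": 3, "B": 2, "C": 4, "D": 1}          # job -> duration
        preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

        def decode(prio):
            done, order, t, flow = set(), [], 0, 0
            while len(done) < len(jobs):
                ready = [j for j in jobs if j not in done
                         and all(p in done for p in preds[j])]
                j = max(ready, key=prio.get)             # decode by priority
                t += jobs[j]; flow += t                  # serial execution
                done.add(j); order.append(j)
            return order, flow                           # minimize total flowtime

        def ga(generations=50, size=20):
            pop = [{j: random.random() for j in jobs} for _ in range(size)]
            for _ in range(generations):
                pop.sort(key=lambda c: decode(c)[1])
                parents = pop[:size // 2]
                children = []
                for _ in range(size - len(parents)):
                    a, b = random.sample(parents, 2)
                    child = {j: random.choice((a[j], b[j])) for j in jobs}  # crossover
                    k = random.choice(list(jobs)); child[k] = random.random()  # mutation
                    children.append(child)
                pop = parents + children
            return decode(pop[0])

        print(ga())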

  3. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
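
    The first of the three equations, the normalized-geometric-mean (NWGM) approximation of the expectation of a logistic unit under dropout, can be checked numerically. The weights, inputs, and keep probability below are arbitrary choices:

        # Numeric check of the NWGM approximation: the expectation of a
        # logistic sigmoid over all dropout masks is approximated by the
        # normalized geometric mean G / (G + G').
        from itertools import product
        from math import exp, prod

        def sigmoid(z): return 1.0 / (1.0 + exp(-z))

        w, x, p = [1.0, -2.0, 0.5], [0.8, 0.3, 1.5], 0.5   # weights, inputs, keep prob.
        masks = list(product([0, 1], repeat=3))
        probs = [p**sum(m) * (1 - p)**(3 - sum(m)) for m in masks]
        outs  = [sigmoid(sum(wi*xi*mi for wi, xi, mi in zip(w, x, m))) for m in masks]

        E  = sum(q * o for q, o in zip(probs, outs))       # true expectation
        G  = prod(o**q for q, o in zip(probs, outs))       # weighted geometric mean
        Gp = prod((1 - o)**q for q, o in zip(probs, outs)) # ... of the complements
        print(f"E[sigmoid] = {E:.4f}, NWGM = {G / (G + Gp):.4f}")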

  4. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  5. Linearization algorithms for line transfer

    SciTech Connect

    Scott, H.A.

    1990-11-06

    Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.

  6. Review of jet reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Atkin, Ryan

    2015-10-01

    Accurate jet reconstruction is necessary for understanding the link between the unobserved partons and the jets of observed collimated colourless particles the partons hadronise into. Understanding this link sheds light on the properties of these partons. A review of various common jet algorithms is presented, namely the Kt, Anti-Kt, Cambridge/Aachen, iterative cone, and SIScone algorithms, highlighting their strengths and weaknesses. If one is interested in studying jets, the Anti-Kt algorithm is the best choice; if one's interest is in jet substructure, however, the Cambridge/Aachen algorithm is the better option.
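
    The three sequential-recombination algorithms named above differ only in the exponent p of the standard generalized-kt distance: p = 1 gives Kt, p = 0 Cambridge/Aachen, and p = -1 Anti-Kt. A minimal sketch (toy kinematics; a jet radius of R = 0.4 is assumed):

        # Generalized-kt distances used by sequential jet clustering.
        from math import pi

        def dij(kt_i, y_i, phi_i, kt_j, y_j, phi_j, p, R=0.4):
            dphi = min(abs(phi_i - phi_j), 2 * pi - abs(phi_i - phi_j))
            dR2 = (y_i - y_j)**2 + dphi**2
            return min(kt_i**(2 * p), kt_j**(2 * p)) * dR2 / R**2

        def diB(kt_i, p):
            return kt_i**(2 * p)     # particle-beam distance

        # soft particle next to a hard one: anti-kt (p = -1) pairs them early
        print(dij(100.0, 0.0, 0.0, 1.0, 0.1, 0.1, p=-1))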

  7. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
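
    The relative-neighborhood graph itself is easy to state: nodes u and v are linked unless some third node w is closer to both of them than they are to each other. A brute-force sketch over invented coordinates (not the paper's construction):

        # O(n^3) construction of the relative-neighborhood graph (RNG).
        from math import dist
        from itertools import combinations

        nodes = [(0, 0), (2, 0), (1, 2), (4, 1), (3, 3)]   # toy node positions

        def rng_edges(pts):
            edges = []
            for u, v in combinations(range(len(pts)), 2):
                duv = dist(pts[u], pts[v])
                if not any(max(dist(pts[u], pts[w]), dist(pts[v], pts[w])) < duv
                           for w in range(len(pts)) if w not in (u, v)):
                    edges.append((u, v))
            return edges

        print(rng_edges(nodes))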

  8. A universal symmetry detection algorithm.

    PubMed

    Maurer, Peter M

    2015-01-01

    Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques, and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.

  9. Multiprojection algorithms with generalized projections

    SciTech Connect

    Censor, J.; Elfving, T.

    1994-12-31

    Generalized distances give rise to generalized projections onto convex sets. An important question is whether or not one can use, within the same projection algorithm, different types of such generalized projections. This question has practical consequences in the areas of signal detection and image recovery, in situations that can be formulated mathematically as convex feasibility problems. We show here that a simultaneous multiprojection algorithmic scheme converges. Different specific multiprojection algorithms can be derived from our scheme by a judicious choice of the Bregman functions which govern the process. As a by-product of the investigation we also obtain block-iterative schemes for certain kinds of linearly constrained optimization problems.

  10. Dynamic Programming Algorithm vs. Genetic Algorithm: Which is Faster?

    NASA Astrophysics Data System (ADS)

    Petković, Dušan

    The article compares two different approaches to the optimization problem of large join queries (LJQs). Almost all commercial database systems use a form of the dynamic programming algorithm to solve the ordering of join operations for large join queries, i.e., joins with more than a dozen join operations. A property of the dynamic programming algorithm is that its execution time increases significantly when the number of join operations in a query is large. Genetic algorithms (GAs), a data mining technique, have been shown to be promising for solving the ordering of join operations in LJQs. Using an existing implementation of a GA, we compare the dynamic programming algorithm implemented in commercial database systems with the corresponding GA module. Our results show that the use of a genetic algorithm is a better solution for the optimization of large join queries, i.e., such a technique outperforms the implementations of the dynamic programming algorithm in conventional query optimization components for very large join queries.

  11. 77 FR 70147 - Fish and Wildlife Service 0648-XB088

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-23

    ... following locations: 1. Social Sciences Resource Center, Green Library, Room 121, Stanford, CA 94305. 2. Palo Alto Main Library, 1213 Newell Road, Palo Alto, CA 94303. Individuals wishing to obtain copies of... categories of activities: Water management; creek maintenance; academic activities; utility installation...

  12. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  13. Multikernel least mean square algorithm.

    PubMed

    Tobar, Felipe A; Kung, Sun-Yuan; Mandic, Danilo P

    2014-02-01

    The multikernel least-mean-square algorithm is introduced for adaptive estimation of vector-valued nonlinear and nonstationary signals. This is achieved by mapping the multivariate input data to a Hilbert space of time-varying vector-valued functions, whose inner products (kernels) are combined in an online fashion. The proposed algorithm is equipped with novel adaptive sparsification criteria ensuring a finite dictionary, and is computationally efficient and suitable for nonstationary environments. We also show the ability of the proposed vector-valued reproducing kernel Hilbert space to serve as a feature space for the class of multikernel least-squares algorithms. The benefits of adaptive multikernel (MK) estimation algorithms are illuminated in the nonlinear multivariate adaptive prediction setting. Simulations on nonlinear inertial body sensor signals and nonstationary real-world wind signals of low, medium, and high dynamic regimes support the approach. PMID:24807027

  14. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  15. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)

  16. The Origins of Counting Algorithms

    PubMed Central

    Cantlon, Jessica F.; Piantadosi, Steven T.; Ferrigno, Stephen; Hughes, Kelly D.; Barnard, Allison M.

    2015-01-01

    Humans’ ability to ‘count’ by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that non-human primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. Monkeys saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set approximately outnumbered the first set, monkeys spontaneously moved to choose the second set even before it was completely baited. Using a novel Bayesian analysis, we show that monkeys used an approximate counting algorithm to increment and compare quantities in sequence. This algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. PMID:25953949

  17. What is a Systolic Algorithm?

    NASA Astrophysics Data System (ADS)

    Rao, Sailesh K.; Kollath, T.

    1986-07-01

    In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and, conversely, that every such algorithm can be implemented on a systolic array. This characterization provides us with a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts that were introduced in the sixties by Hennie, Waite and Karp, Miller and Winograd, to the present-day concern of systolic arrays.

  18. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  19. An Efficient Pattern Matching Algorithm

    NASA Astrophysics Data System (ADS)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

    In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data, including images, audio, and video. The performance superiority of the proposed solution is validated analytically and experimentally.

  20. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  1. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, and that the algorithm tests other configurations with the goal of finding the globally optimal configuration.
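
    For reference, a minimal conventional-SA baseline of the kind the RBSA innovation branches upon might look like the following (objective function, cooling rate, and shrink rate are arbitrary illustrations, not values from the document):

        # Conventional simulated annealing with a shrinking selection region.
        import math, random

        def objective(x):                       # multimodal 1-D test function
            return x**2 + 10 * math.sin(3 * x)

        x = random.uniform(-5, 5)               # random starting configuration
        fx = objective(x)
        T, radius = 10.0, 5.0
        for step in range(5000):
            cand = x + random.uniform(-radius, radius)
            fc = objective(cand)
            # accept better moves always, worse moves with Boltzmann probability
            if fc < fx or random.random() < math.exp((fx - fc) / T):
                x, fx = cand, fc
            T *= 0.999                          # lower the annealing temperature
            radius *= 0.999                     # shrink the selection region
        print(f"best found: x = {x:.3f}, f = {fx:.3f}")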

  2. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.

  3. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grid systems require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: there is a need (i) for event detection algorithms that can scale with the size of the data; (ii) for algorithms that can not only handle the multidimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multidimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multidimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
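
    A hedged sketch of the windowed-SVD idea (synthetic low-rank data and an invented window size; not the GAEDA implementation): track the leading right-singular vector of each window and score the change between successive windows by the angle between those vectors:

        # Windowed-SVD change scoring on a synthetic multivariate stream.
        import numpy as np

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(300, 1))
        a = np.array([[1.0, 0.5, 0.0, 0.0, 0.0]])         # mixing before t = 200
        b = np.array([[0.0, 0.0, 0.0, 0.5, 1.0]])         # mixing after t = 200
        stream = np.vstack([latent[:200] @ a, latent[200:] @ b])
        stream += 0.1 * rng.normal(size=stream.shape)     # observation noise

        win, prev, scores = 50, None, []
        for start in range(0, len(stream) - win + 1, win // 2):
            window = stream[start:start + win]
            _, _, vt = np.linalg.svd(window - window.mean(axis=0),
                                     full_matrices=False)
            lead = vt[0]                                  # leading right-singular vector
            if prev is not None:
                scores.append(1.0 - abs(prev @ lead))     # angle-based change score
            prev = lead
        print(np.round(scores, 3))                        # spikes near the regime change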

  4. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. A group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment is the focus. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst case analysis), optimistic reasoning (i.e., best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion.
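
    Several of the listed modes reduce to one-line combination rules for two assertions with probabilities p and q. The mapping below is a loose illustration using the standard probability bounds (the Fréchet bounds give the extremal overlap cases), not the paper's formal development:

        # Toy conjunction/disjunction rules for two assertions.
        def and_independent(p, q):  return p * q                  # statistical independence
        def and_fuzzy(p, q):        return min(p, q)              # maximum overlap (fuzzy logic)
        def and_pessimistic(p, q):  return max(0.0, p + q - 1.0)  # minimum overlap / worst case
        def or_exclusive(p, q):     return min(1.0, p + q)        # mutually exclusive assertions

        p, q = 0.7, 0.8
        for rule in (and_independent, and_fuzzy, and_pessimistic):
            print(rule.__name__, rule(p, q))
        print(or_exclusive.__name__, or_exclusive(0.2, 0.3))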

  5. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it possible to reduce the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).

  6. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grid systems require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: there is a need (i) for event detection algorithms that can scale with the size of the data; (ii) for algorithms that can not only handle the multidimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multidimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multidimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.

  7. Adaptive Routing Algorithm in Wireless Communication Networks Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Wu, Qinghua; Cai, Zhihua

    At present, mobile communications traffic routing designs are complicated because more and more systems are interconnected with one another. For example, mobile communication in wireless networks presents two routing design conditions to consider: circuit switching and packet switching. The difficulty in packet-switched routing design lies in its use of high-speed transmission links and its dynamic routing nature. In this paper, an evolutionary algorithm is used to determine the best solution and the shortest communication paths. We developed a genetic optimization process that helps network planners find the best solutions, i.e., the best routing-table paths, in wireless communication networks easily and quickly. The experimental results show that the evolutionary algorithm not only finds good solutions, but also has a more predictable running time when compared to a sequential genetic algorithm.

  8. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    PubMed Central

    Thachuk, Chris; Shmygelska, Alena; Hoos, Holger H

    2007-01-01

    Background The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood in two widely studied Hydrophobic Polar (HP) lattice models. Results We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can fold sequences which exhibit significant interaction between termini in the hydrophobic core relatively easily. Conclusion We demonstrate that REMC utilizing the pull move neighbourhood
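
    The replica-exchange step at the heart of REMC is independent of the HP-model details: two replicas at inverse temperatures beta_i and beta_j exchange configurations with the standard Metropolis acceptance probability min(1, exp((beta_i - beta_j)(E_i - E_j))). A minimal sketch (example numbers invented):

        # Core replica-exchange (parallel tempering) acceptance test.
        import math, random

        def maybe_swap(beta_i, E_i, beta_j, E_j):
            """Return True if replicas i and j should exchange configurations."""
            delta = (beta_i - beta_j) * (E_i - E_j)
            return delta >= 0 or random.random() < math.exp(delta)

        # a cold replica (beta = 2.0) holding a higher-energy state will
        # usually swap with a hot replica (beta = 0.5) holding a lower one
        print(maybe_swap(2.0, -3.0, 0.5, -7.0))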

  9. Algorithms, complexity, and the sciences.

    PubMed

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
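
    The MWU rule mentioned above is compact enough to state as code: each option keeps a weight that is multiplied by (1 + eps)^gain every round, and the normalized weights play the role the paper assigns to allele frequencies. The gains and eps below are made up:

        # Minimal multiplicative-weights-update (MWU) loop.
        weights = [1.0, 1.0, 1.0]
        eps = 0.1
        gains = [[0.2, 1.0, 0.5],            # per-round gains of each option
                 [0.1, 0.9, 0.7],
                 [0.3, 1.0, 0.4]]
        for g in gains:
            weights = [w * (1 + eps) ** gi for w, gi in zip(weights, g)]
        total = sum(weights)
        print([round(w / total, 3) for w in weights])  # mass shifts to option 2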

  10. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.

  11. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  12. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm that uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  13. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380

  14. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm that uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  15. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time-consuming and error-prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  16. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  17. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in the software package PyChemia (https://github.com/MaterialsDiscovery/PyChemia), an open-source Python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694
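
    For orientation, the standard firefly update (shown here in one dimension on a toy objective; this is the textbook rule, not necessarily PyChemia's exact implementation) moves every firefly toward each brighter one with attractiveness beta0 * exp(-gamma * r^2) plus a small random kick:

        # One-dimensional firefly algorithm on an invented objective.
        import math, random

        def f(x): return (x - 2.0) ** 2          # toy objective (lower = brighter)

        beta0, gamma, alpha = 1.0, 1.0, 0.1
        swarm = [random.uniform(-5, 5) for _ in range(8)]
        for _ in range(100):
            for i in range(len(swarm)):
                for j in range(len(swarm)):
                    if f(swarm[j]) < f(swarm[i]):            # j is brighter
                        r2 = (swarm[i] - swarm[j]) ** 2
                        swarm[i] += (beta0 * math.exp(-gamma * r2) *
                                     (swarm[j] - swarm[i]) +
                                     alpha * (random.random() - 0.5))
        print(round(min(swarm, key=f), 3))       # converges near x = 2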

  18. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  19. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
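
    Activity selection, one of the cited examples, illustrates the dominance idea: the activity that finishes earliest dominates any alternative, so it is always safe to pick. The classic greedy loop follows (interval data invented):

        # Greedy activity selection: sort by finish time, keep what fits.
        def select_activities(intervals):
            chosen, last_end = [], float("-inf")
            for start, end in sorted(intervals, key=lambda iv: iv[1]):
                if start >= last_end:          # compatible with all chosen so far
                    chosen.append((start, end))
                    last_end = end
            return chosen

        print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (6, 10)]))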

  20. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  1. An Efficient Reachability Analysis Algorithm

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2008-01-01

    A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a powerful tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.

  2. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
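
    A naive rendering of the test's two requirements on a simulated memory block follows (a Python list stands in for a 64-word, 16-bit memory; this O(N^2)-check toy is purely illustrative, whereas the paper's algorithm achieves the same coverage in very few passes):

        # Toy memory test: toggle every bit of every word, and verify that
        # no other word is disturbed by the writes.
        WORD_BITS = 16
        memory = [0] * 64                     # simulated 64-word block

        def others_intact(mem, snapshot, addr):
            return all(mem[a] == snapshot[a] for a in range(len(mem)) if a != addr)

        def test_memory(mem):
            for addr in range(len(mem)):
                snapshot = list(mem)
                for bit in range(WORD_BITS):
                    for pattern in (snapshot[addr] | (1 << bit),    # set the bit
                                    snapshot[addr] & ~(1 << bit)):  # clear the bit
                        mem[addr] = pattern
                        if mem[addr] != pattern or not others_intact(mem, snapshot, addr):
                            return False
                mem[addr] = snapshot[addr]    # restore the original word
            return True

        print(test_memory(memory))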

  3. A swaying object detection algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Shidong; Rong, Jianzhong; Zhou, Dechuang; Wang, Jian

    2013-07-01

    Moving object detection is one of the most important preliminary steps in video analysis. Some moving objects, such as spitting steam, fire, and smoke, have a unique motion feature: their lower portion stays basically unchanged while their upper portion sways back and forth. Based on this unique motion feature, a swaying object detection algorithm is presented in this paper. Firstly, the fuzzy integral was adopted to integrate color features for extracting moving objects from video frames. Secondly, a swaying identification algorithm based on centroid calculation was used to distinguish swaying objects from other moving objects. Experiments show that the proposed method is effective at detecting swaying objects.

  4. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  5. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms were designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.

  6. SU-F-BRD-15: The Impact of Dose Calculation Algorithm and Hounsfield Units Conversion Tables On Plan Dosimetry for Lung SBRT

    SciTech Connect

    Kuo, L; Yorke, E; Lim, S; Mechalakos, J; Rimner, A

    2014-06-15

    Purpose: To assess dosimetric differences in IMRT lung stereotactic body radiotherapy (SBRT) plans calculated with Varian AAA and Acuros (AXB) and with vendor-supplied (V) versus in-house (IH) measured Hounsfield units (HU) to mass and HU to electron density conversion tables. Methods: In-house conversion tables were measured using Gammex 472 density-plug phantom. IMRT plans (6 MV, Varian TrueBeam, 6–9 coplanar fields) meeting departmental coverage and normal tissue constraints were retrospectively generated for 10 lung SBRT cases using Eclipse Vn 10.0.28 AAA with in-house tables (AAA/IH). Using these monitor units and MLC sequences, plans were recalculated with AAA and vendor tables (AAA/V) and with AXB with both tables (AXB/IH and AXB/V). Ratios to corresponding AAA/IH values were calculated for PTV D95, D01, D99, mean-dose, total and ipsilateral lung V20 and chestwall V30. Statistical significance of differences was judged by Wilcoxon Signed Rank Test (p<0.05). Results: For HU<−400 the vendor HU-mass density table was notably below the IH table. PTV D95 ratios to AAA/IH, averaged over all patients, are 0.963±0.073 (p=0.508), 0.914±0.126 (p=0.011), and 0.998±0.001 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. Total lung V20 ratios are 1.006±0.046 (p=0.386), 0.975±0.080 (p=0.514) and 0.998±0.002 (p=0.007); ipsilateral lung V20 ratios are 1.008±0.041(p=0.284), 0.977±0.076 (p=0.443), and 0.998±0.018 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. In 7 cases, ratios to AAA/IH were within ± 5% for all indices studied. For 3 cases characterized by very low lung density and small PTV (19.99±8.09 c.c.), PTV D95 ratio for AXB/V ranged from 67.4% to 85.9%, AXB/IH D95 ratio ranged from 81.6% to 93.4%; there were large differences in other studied indices. Conclusion: For AXB users, careful attention to HU conversion tables is important, as they can significantly impact AXB (but not AAA) lung SBRT plans. Algorithm selection is also important for

  7. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without any internal parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
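
    As a rough illustration of the multisplitting idea (not the paper's exact scheme), the sketch below splits the variables into blocks, minimizes each block independently from the same iterate using an off-the-shelf sequential solver (scipy is used as a stand-in), and combines the block updates with weights; the block solves are the part that could run in parallel. All names, the weighting rule, and the toy objective are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      def multisplit_step(f, x, blocks, weights):
          # One Jacobi-style sweep: every block is optimized from the SAME
          # iterate x, so the block solves are independent and parallelizable.
          x_new = x.copy()
          for block, w in zip(blocks, weights):
              def f_block(z, block=block):
                  y = x.copy()
                  y[block] = z              # vary only this block of variables
                  return f(y)
              res = minimize(f_block, x[block])   # existing sequential solver
              x_new[block] = (1 - w) * x[block] + w * res.x
          return x_new

      # toy usage on a weakly coupled quadratic
      f = lambda y: (y[0] - 1) ** 2 + (y[1] + 2) ** 2 + 0.5 * (y[0] - y[1]) ** 2
      x = np.zeros(2)
      for _ in range(20):
          x = multisplit_step(f, x, blocks=[[0], [1]], weights=[1.0, 1.0])
      print(x)    # approaches the minimizer (0.25, -1.25)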

  8. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem arises in various engineering fields, and many researchers have studied it. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu-search mechanism to escape blind alleys, enabling it to find the shortest route even if the map data contains dead ends. Experiments using map data demonstrate its effectiveness in comparison with Dijkstra's algorithm, the most popular conventional routing algorithm.
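
    A minimal sketch of the idea, with illustrative pheromone and tabu rules rather than the authors' exact formulation: ants build paths stochastically, nodes recognized as dead ends are added to a tabu set so later ants avoid them, and shorter completed paths receive more pheromone.

      import random

      def aco_route(graph, src, dst, n_ants=50, n_iters=30, rho=0.5):
          # graph: dict node -> list of neighbour nodes
          pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
          tabu = set()      # nodes identified as blind alleys
          best = None
          for _ in range(n_iters):
              completed = []
              for _ in range(n_ants):
                  path, visited = [src], {src}
                  while path and path[-1] != dst:
                      here = path[-1]
                      choices = [v for v in graph[here]
                                 if v not in visited and v not in tabu]
                      if not choices:
                          # mark a true dead end (every exit is tabu except
                          # the node we came from), then backtrack
                          prev = path[-2] if len(path) > 1 else None
                          if all(v in tabu for v in graph[here] if v != prev):
                              tabu.add(here)
                          path.pop()
                          continue
                      weights = [pheromone[(here, v)] for v in choices]
                      nxt = random.choices(choices, weights=weights)[0]
                      path.append(nxt)
                      visited.add(nxt)
                  if path:
                      completed.append(path)
                      if best is None or len(path) < len(best):
                          best = path
              for edge in pheromone:                  # evaporation
                  pheromone[edge] *= (1.0 - rho)
              for p in completed:                     # deposit, favouring short paths
                  for u, v in zip(p, p[1:]):
                      pheromone[(u, v)] += 1.0 / len(p)
          return best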

  9. Two Algorithms for Processing Electronic Nose Data

    NASA Technical Reports Server (NTRS)

    Young, Rebecca; Linnell, Bruce

    2007-01-01

    Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.

  10. Formalization of algorithms for relational database machines

    SciTech Connect

    Ryvkin, V.M.; Komarov, P.I.; Nazarov, A.S.

    1986-11-01

    This paper applies the apparatus of algorithmic algebras to formalize the mapping of the relational algebra language into the internal database processor language. The apparatus is a popular tool for formal structured description of parallel algorithms. The MUL'TIPROTSESSIST automatic parallel program design system using systems of algorithmic algebras may be applied to automate the design of database machine operating algorithms in experimental research and to formalize the parallel organization of interpretation algorithms for the relational algebraic operations.

  11. Quartic Rotation Criteria and Algorithms.

    ERIC Educational Resources Information Center

    Clarkson, Douglas B.; Jennrich, Robert I.

    1988-01-01

    Most of the current analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria. A unified development of algorithms for orthogonal and direct oblique rotation using arbitrary criteria from this family is presented. (Author/TJH)

  12. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  13. Algorithms, complexity, and the sciences

    PubMed Central

    Papadimitriou, Christos

    2014-01-01

    Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
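
    For concreteness, here is a minimal sketch of the multiplicative weights update rule mentioned above, in its generic form; the population-genetics interpretation maps actions to alleles and gains to fitnesses. The learning rate and toy data are illustrative assumptions.

      import numpy as np

      def multiplicative_weights(gain_rounds, eta=0.1):
          # gain_rounds: iterable of gain vectors g_t (one entry per action),
          # each entry in [0, 1]. Returns the final normalized weight vector.
          gains = list(gain_rounds)
          w = np.ones(len(gains[0]))
          for g in gains:
              w *= (1.0 + eta * np.asarray(g))   # reward each action by its gain
          return w / w.sum()

      # toy usage: two 'alleles' whose fitnesses fluctuate over 50 generations
      rng = np.random.default_rng(0)
      rounds = [rng.uniform(0, 1, size=2) for _ in range(50)]
      print(multiplicative_weights(rounds, eta=0.05))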

  14. Associative Algorithms for Computational Creativity

    ERIC Educational Resources Information Center

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  15. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
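
    The monomer-multiple discretization referred to above can be written down in a few lines. Below is a hedged sketch of one explicit-Euler step of the discrete Smoluchowski equation, a uniform-bin reference implementation rather than the geometric-binning algorithms the paper evaluates; the constant kernel K = 1 is the special case with a known analytical solution.

      import numpy as np

      def smoluchowski_step(n, K, dt):
          # n[k]: concentration of particles of size (k + 1) monomers
          # K[i, j]: symmetric coagulation kernel for sizes i+1 and j+1
          N = len(n)
          dn = np.zeros_like(n)
          for k in range(N):
              # gain: collisions with i_size + j_size = k + 1 monomers
              gain = 0.5 * sum(n[i] * n[k - 1 - i] * K[i, k - 1 - i]
                               for i in range(k))
              # loss: size k + 1 colliding with any other particle
              loss = n[k] * np.dot(K[k], n)
              dn[k] = gain - loss
          return n + dt * dn

      # constant kernel: the classic analytically solvable test case
      N = 100
      n = np.zeros(N); n[0] = 1.0      # monodisperse monomer initial condition
      K = np.ones((N, N))
      for _ in range(200):
          n = smoluchowski_step(n, K, dt=0.01)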

  16. Key Concepts in Informatics: Algorithm

    ERIC Educational Resources Information Center

    Szlávi, Péter; Zsakó, László

    2014-01-01

    "The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…

  17. Document Organization Using Kohonen's Algorithm.

    ERIC Educational Resources Information Center

    Guerrero Bote, Vicente P.; Moya Anegon, Felix de; Herrero Solana, Victor

    2002-01-01

    Discussion of the classification of documents from bibliographic databases focuses on a method of vectorizing reference documents from LISA (Library and Information Science Abstracts) which permits their topological organization using Kohonen's algorithm. Analyzes possibilities of this type of neural network with respect to the development of…

  18. The origins of counting algorithms.

    PubMed

    Cantlon, Jessica F; Piantadosi, Steven T; Ferrigno, Stephen; Hughes, Kelly D; Barnard, Allison M

    2015-06-01

    Humans' ability to count by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that nonhuman primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. First, they saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set was approximately equal to the first set, the monkeys spontaneously moved to choose the second set even before that cache was completely baited. Using a novel Bayesian analysis, we show that the monkeys used an approximate counting algorithm for comparing quantities in sequence that is incremental, iterative, and condition controlled. This proto-counting algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. PMID:25953949

  19. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchange over insecure networks must provide authentication and confidentiality for the database, a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  20. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined two-group pedagogical experiment is also presented, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.

  1. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  2. Hyperspectral image compressive projection algorithm

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Allen, David W.

    2009-05-01

    We describe a compressive projection algorithm and experimentally assess its performance when used with a Hyperspectral Image Projector (HIP). The HIP is being developed by NIST for system-level performance testing of hyperspectral and multispectral imagers. It projects a two-dimensional image into the unit under test (UUT), whereby each pixel can have an independently programmable arbitrary spectrum. To efficiently project a single frame of dynamic realistic hyperspectral imagery through the collimator into the UUT, a compression algorithm has been developed whereby the series of abundance images and corresponding endmember spectra that comprise the image cube of that frame are first computed using an automated endmember-finding algorithm such as the Sequential Maximum Angle Convex Cone (SMACC) endmember model. Then these endmember spectra are projected sequentially on the HIP spectral engine in sync with the projection of the abundance images on the HIP spatial engine, during the single-frame exposure time of the UUT. The integrated spatial image captured by the UUT is the endmember-weighted sum of the abundance images, which results in the formation of a datacube for that frame. Compressive projection enables a much smaller set of broadband spectra to be projected than monochromatic projection, and thus utilizes the inherent multiplex advantage of the HIP spectral engine. As a result, radiometric brightness and projection frame rate are enhanced. In this paper, we use a visible breadboard HIP to experimentally assess the compressive projection algorithm performance.
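
    The datacube formation described here is a linear mixing model: the integrated image is the endmember-weighted sum of the abundance images. A small sketch with illustrative shapes (not HIP specifications):

      import numpy as np

      rows, cols, bands, n_end = 64, 64, 128, 5
      rng = np.random.default_rng(0)
      abundances = rng.random((n_end, rows, cols))   # one abundance image per endmember
      endmembers = rng.random((n_end, bands))        # one spectrum per endmember

      # datacube[r, c, b] = sum_k abundances[k, r, c] * endmembers[k, b]
      datacube = np.einsum('krc,kb->rcb', abundances, endmembers)
      print(datacube.shape)   # (64, 64, 128)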

  3. An Algorithm for Suffix Stripping

    ERIC Educational Resources Information Center

    Porter, M. F.

    2006-01-01

    Purpose: The automatic removal of suffixes from words in English is of particular interest in the field of information retrieval. This work was originally published in Program in 1980 and is republished as part of a series of articles commemorating the 40th anniversary of the journal. Design/methodology/approach: An algorithm for suffix stripping…
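
    As a flavour of rule-based suffix stripping, here is a toy sketch; it is not Porter's actual algorithm, which applies ordered rule steps gated by a "measure" condition on the remaining stem. The rule table is an illustrative assumption.

      # Toy rule table; real Porter rules are ordered steps with extra conditions.
      RULES = [("ational", "ate"), ("tional", "tion"), ("ization", "ize"),
               ("ness", ""), ("ing", ""), ("ed", ""), ("s", "")]

      def strip_suffix(word):
          for suffix, replacement in RULES:          # first matching rule wins
              if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                  return word[: len(word) - len(suffix)] + replacement
          return word

      print([strip_suffix(w) for w in ["relational", "happiness", "hopping"]])
      # ['relate', 'happi', 'hopp'] -- crude without Porter's measure conditions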

  4. Randomized approximate nearest neighbors algorithm.

    PubMed

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each of x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.

  5. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  6. Some Practical Payments Clearance Algorithms

    NASA Astrophysics Data System (ADS)

    Kumlander, Deniss

    The globalisation of corporations' operations has produced a huge volume of inter-company invoices. Optimising these, a task known as payment clearance, can produce significant savings in the costs associated with the transfers and their handling. The paper reviews some common and practical approaches to the payment clearance problem and proposes some novel algorithms based on graph theory and heuristic distribution of totals.
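
    The core saving in payment clearance comes from settling net positions instead of individual invoices. The sketch below is a simple greedy netting heuristic built on that premise, not one of the paper's graph-based algorithms; all names are illustrative.

      from collections import defaultdict

      def clear_payments(invoices):
          # invoices: list of (payer, payee, amount); returns a reduced list
          # of transfers that settles every company's *net* position.
          net = defaultdict(float)                 # positive = owed money
          for payer, payee, amount in invoices:
              net[payer] -= amount
              net[payee] += amount
          debtors = sorted((c, -b) for c, b in net.items() if b < 0)
          creditors = sorted((c, b) for c, b in net.items() if b > 0)
          transfers, i, j = [], 0, 0
          while i < len(debtors) and j < len(creditors):
              d, owe = debtors[i]
              c, due = creditors[j]
              pay = min(owe, due)
              transfers.append((d, c, pay))
              owe, due = owe - pay, due - pay
              debtors[i], creditors[j] = (d, owe), (c, due)
              if owe == 0: i += 1
              if due == 0: j += 1
          return transfers

      # three circular invoices collapse into two net transfers
      print(clear_payments([("A", "B", 100), ("B", "C", 80), ("C", "A", 30)]))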

  7. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a widely used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm, as well as with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time, and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
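
    A CPU sketch of the standard linearized Bregman iteration for min ||x||_1 subject to Ax = b, showing why it maps well to a GPU: each step is just matrix-vector products plus an elementwise soft-threshold. The step size, mu, and test data below are illustrative assumptions, not the paper's settings.

      import numpy as np

      def shrink(v, mu):
          return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

      def linearized_bregman(A, b, mu=5.0, delta=None, n_iter=2000):
          m, n = A.shape
          if delta is None:
              delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1/||A||^2
          x = np.zeros(n)
          v = np.zeros(n)
          for _ in range(n_iter):
              v += A.T @ (b - A @ x)     # gradient-like residual update
              x = delta * shrink(v, mu)  # elementwise soft-threshold
          return x

      # toy sparse-recovery usage
      rng = np.random.default_rng(1)
      A = rng.standard_normal((50, 200)) / np.sqrt(50)
      x_true = np.zeros(200); x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
      x_hat = linearized_bregman(A, A @ x_true)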

  8. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
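
    For reference, a minimal sketch of the Boris update itself, in its standard textbook form (unit charge and mass in the demo; the phase-space-volume argument of the paper is not reproduced here):

      import numpy as np

      def boris_push(x, v, E, B, q, m, dt):
          # half electric kick, exact magnetic rotation, half kick, drift
          qmdt2 = q * dt / (2.0 * m)
          v_minus = v + qmdt2 * E                  # first half acceleration
          t = qmdt2 * B                            # rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)  # |v| preserved exactly
          v_new = v_plus + qmdt2 * E               # second half acceleration
          return x + dt * v_new, v_new

      # uniform B along z: the particle gyrates with no secular energy drift
      x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
      E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
      for _ in range(10000):
          x, v = boris_push(x, v, E, B, q=1.0, m=1.0, dt=0.1)
      print(np.linalg.norm(v))   # stays ~1.0 up to roundoff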

  10. Higher-order force gradient symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth-order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher-order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size-independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are better by factors of approximately 10^3, 10^4, 10^4, and 10^5, respectively.
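
    For context, the standard fourth-order Forest-Ruth scheme that serves as the baseline here is a three-fold composition of second-order leapfrog steps; a sketch follows (the force-gradient variant, which adds a force-gradient evaluation per step, is not reproduced). Unit mass and the Kepler test are illustrative.

      import numpy as np

      THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))   # Forest-Ruth coefficient

      def forest_ruth_step(q, p, force, dt):
          # compose three drift-kick-drift leapfrog steps with weights
          # (THETA, 1 - 2*THETA, THETA) to reach fourth order
          for c in (THETA, 1.0 - 2.0 * THETA, THETA):
              q = q + 0.5 * c * dt * p          # half drift (unit mass)
              p = p + c * dt * force(q)         # kick
              q = q + 0.5 * c * dt * p          # half drift
          return q, p

      # Kepler usage: force(q) = -q / |q|^3
      def kepler_force(q):
          return -q / np.linalg.norm(q) ** 3

      q, p = np.array([1.0, 0.0]), np.array([0.0, 1.0])
      for _ in range(10000):
          q, p = forest_ruth_step(q, p, kepler_force, dt=0.001)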

  11. Systolic algorithms and their implementation

    SciTech Connect

    Kung, H.T.

    1984-01-01

    Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of Cholesky decomposition of covariance matrices, a crucial computation in adaptive signal processing. Following this are discussions on issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation. 33 references.

  12. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
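
    A sketch of the single-stage MUSIC core that the two-stage scheme builds on (the strong-scatterer detection and subsequent refocusing on weak scatterers are not reproduced; the array geometry and noise level are illustrative): project candidate steering vectors onto the noise subspace of the data covariance and look for peaks of the pseudospectrum.

      import numpy as np

      def music_spectrum(R, steering, n_sources):
          # noise subspace = eigenvectors of the smallest eigenvalues of R
          eigvals, eigvecs = np.linalg.eigh(R)       # ascending eigenvalues
          En = eigvecs[:, : R.shape[0] - n_sources]
          def pseudo(theta):
              a = steering(theta)
              p = En.conj().T @ a
              return 1.0 / np.real(p.conj() @ p)     # large near true directions
          return pseudo

      # usage: 8-element half-wavelength array, two sources at -20 and 30 deg
      M, T = 8, 200
      steer = lambda th: np.exp(-1j * np.pi * np.arange(M) * np.sin(th))
      rng = np.random.default_rng(0)
      A = np.stack([steer(np.deg2rad(a)) for a in (-20.0, 30.0)], axis=1)
      S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
      X = A @ S + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
      R = X @ X.conj().T / T
      pseudo = music_spectrum(R, steer, n_sources=2)
      grid = np.deg2rad(np.linspace(-90, 90, 721))
      vals = np.array([pseudo(t) for t in grid])
      peaks = [i for i in range(1, len(vals) - 1)
               if vals[i] > vals[i - 1] and vals[i] > vals[i + 1]]
      top2 = sorted(peaks, key=lambda i: vals[i])[-2:]
      print(np.rad2deg(grid[top2]))    # near -20 and 30 degrees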

  13. An NOy* Algorithm for SOLVE

    NASA Technical Reports Server (NTRS)

    Loewenstein, M.; Greenblatt. B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; Condon, Estelle (Technical Monitor)

    2000-01-01

    De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analyses of these events require knowledge of the initial or pre-vortex state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we will attempt to establish the current unperturbed NOy:N2O relationship (NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from observations in the ASHOE/MAESA campaign.

  14. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
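
    The Maximum Temporal Pixel (MTP) compression mentioned above reduces a block of video frames to a single image by keeping each pixel's maximum over time, so a moving meteor leaves a thresholdable streak. A sketch follows; the paired arrival-time image is a common companion product but an assumption here, and the synthetic data are illustrative.

      import numpy as np

      def mtp_compress(frames):
          # frames: (T, H, W) stack; collapse time by per-pixel maximum
          mtp = frames.max(axis=0)
          arrival = frames.argmax(axis=0)   # frame index where each max occurred
          return mtp, arrival

      # illustrative usage: 64 noisy frames with a synthetic moving bright spot
      rng = np.random.default_rng(0)
      frames = rng.poisson(10.0, size=(64, 120, 160)).astype(np.float32)
      for t in range(64):
          frames[t, 60, 40 + t] += 50.0     # 'meteor' drifting one pixel per frame
      mtp, arrival = mtp_compress(frames)
      streak = mtp > mtp.mean() + 5 * mtp.std()   # crude threshold on the streak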

  15. Authenticated algorithms for Byzantine agreement

    SciTech Connect

    Dolev, D.; Strong, H.R.

    1983-11-01

    Reaching agreement in a distributed system in the presence of faulty processors is a central issue for reliable computer systems. Using an authentication protocol, one can limit the undetected behavior of faulty processors to a simple failure to relay messages to all intended targets. In this paper the authors show that, in spite of such an ability to limit faulty behavior, and no matter what message types or protocols are allowed, reaching (Byzantine) agreement requires at least t+1 phases or rounds of information exchange, where t is an upper bound on the number of faulty processors. They present algorithms for reaching agreement based on authentication that require a total number of messages sent by correctly operating processors that is polynomial in both t and the number of processors, n. The best algorithm uses only t+1 phases and O(nt) messages. 9 references.

  16. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method based on Web-based tools is presented to design optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since molecular beacon performance is based on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public domain algorithm described here may be usefully employed to aid in molecular beacon design.

  17. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  18. Systolic systems: algorithms and complexity

    SciTech Connect

    Chang, J.H.

    1986-01-01

    This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.

  19. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  20. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.

  1. Summing It All Up: Pre-1900 Algorithms.

    ERIC Educational Resources Information Center

    Pearson, Eleanor S.

    1986-01-01

    Computational algorithms from American textbooks copyrighted prior to 1900 are presented--some that convey the concept, some just for special cases, and some just for fun. Algorithms for each operation with whole numbers are presented and analyzed. (MNS)

  2. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of SAR imaging in image processing is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead processes the SAR echoes with consistent imaging parameters during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are used to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is better suited for SAR interferometry (InSAR) research and application. PMID:26871446

  3. Algorithmic complexity and entanglement of quantum states.

    PubMed

    Mora, Caterina E; Briegel, Hans J

    2005-11-11

    We define the algorithmic complexity of a quantum state relative to a given precision parameter, and give upper bounds for various examples of states. We also establish a connection between the entanglement of a quantum state and its algorithmic complexity.

  4. An algorithm for generating abstract syntax trees

    NASA Technical Reports Server (NTRS)

    Noonan, R. E.

    1985-01-01

    The notion of an abstract syntax is discussed. An algorithm is presented for automatically deriving an abstract syntax directly from a BNF grammar. The implementation of this algorithm and its application to the grammar for Modula are discussed.

  7. Teaching Multiplication Algorithms from Other Cultures

    ERIC Educational Resources Information Center

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.

  8. Concurrent algorithms for transient FE analysis

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Nour-Omid, B.

    1989-01-01

    Information on concurrent algorithms for transient finite element analysis is given in viewgraph form. Topics covered include concurrent dynamic algorithms, interprocessor communication, the performance of the BAR problem on the 32-processor hypercube, and computational efficiency and accuracy analysis.

  9. Algorithmic Strategies in Combinatorial Chemistry

    SciTech Connect

    GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN

    2000-08-01

    Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

  10. Algorithms and Requirements for Measuring Network Bandwidth

    SciTech Connect

    Jin, Guojun

    2002-12-08

    This report presents new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thus estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and addresses conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The paper also discusses a number of important terms and issues in network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.

  11. The performance of asynchronous algorithms on hypercubes

    SciTech Connect

    Womble, D.E.

    1988-12-01

    Many asynchronous algorithms have been developed for parallel computers. Most implementations of asynchronous algorithms, however, have been for shared memory machines. In this paper, we study the implementation and performance of some common asynchronous algorithms on the NCUBE/ten, a 1024 node hypercube. In addition, we summarize existing theoretical work and discuss some classes of algorithms that can be made asynchronous and some that cannot. 16 refs., 3 figs.

  12. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  13. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, Russell Kevin

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  14. Algorithmic approach to intelligent robot mobility

    SciTech Connect

    Kauffman, S.

    1983-05-01

    This paper presents Sutherland's algorithm, plus an alternative algorithm, which allows mobile robots to move about intelligently in environments resembling the rooms and hallways in which we move around. The main hardware requirements for a robot to use the algorithms presented are mobility and an ability to sense distances with some type of non-contact scanning device. This article does not discuss the actual robot construction. The emphasis is on heuristics and algorithms. 1 reference.

  15. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  16. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  17. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  18. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  19. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to the increase in spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
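
    As background, here is the basic HITS iteration that BHITS and the proposed trust-score variants build on (the linkfarm detection and trust scoring themselves are not reproduced; the tiny graph is illustrative):

      import numpy as np

      def hits(adjacency, n_iter=50):
          # adjacency[i, j] = 1 if page i links to page j;
          # hubs point to good authorities, authorities are pointed to by good hubs
          A = np.asarray(adjacency, dtype=float)
          hubs = np.ones(A.shape[0])
          for _ in range(n_iter):
              auths = A.T @ hubs
              auths /= np.linalg.norm(auths)
              hubs = A @ auths
              hubs /= np.linalg.norm(hubs)
          return hubs, auths

      # tiny graph: 0 -> 1, 0 -> 2, 1 -> 2
      A = np.array([[0, 1, 1],
                    [0, 0, 1],
                    [0, 0, 0]])
      hubs, auths = hits(A)   # page 0 is the best hub, page 2 the best authority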

  20. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  1. An adaptive algorithm for noise rejection.

    PubMed

    Lovelace, D E; Knoebel, S B

    1978-01-01

    An adaptive algorithm for the rejection of noise artifact in 24-hour ambulatory electrocardiographic recordings is described. The algorithm is based on increased amplitude distortion or increased frequency of fluctuations associated with an episode of noise artifact. The results of application of the noise rejection algorithm on a high noise population of test tapes are discussed.

  2. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  3. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  4. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the most accurate human sensation compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned by the worst-case-scenario tuning method. This is based on trial and error, is affected by driving and programming experience, and is the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, produces false cues, and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  5. Parallelized dilate algorithm for remote sensing image.

    PubMed

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important morphological operation, the dilation algorithm can give a more connected view of a remote sensing image that contains broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and data volumes have become very large. This can slow the algorithm down or make it impossible to obtain a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm.

  6. Alternative learning algorithms for feedforward neural networks

    SciTech Connect

    Vitela, J.E.

    1996-03-01

    The efficiency of the back-propagation algorithm for training feedforward multilayer neural networks has given rise to the erroneous belief among many neural network users that it is the only possible way to obtain the gradient of the error in this type of network. The purpose of this paper is to show how alternative algorithms can be obtained within the framework of ordered partial derivatives. Two alternative forward-propagating algorithms are derived in this work which are mathematically equivalent to the BP algorithm. This systematic way of obtaining learning algorithms, illustrated here with this particular type of neural network, can also be used with other types, such as recurrent neural networks.

  7. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
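
    A minimal canonical-GA sketch of the concepts introduced here (a generic illustration, not Splicer's interface): tournament selection, one-point crossover, and bit-flip mutation over bit-string chromosomes, shown on the classic OneMax problem.

      import random

      def genetic_algorithm(fitness, n_bits, pop_size=60, generations=100,
                            p_crossover=0.9, p_mutation=0.01):
          # fitness maps a bit list to a number to be maximized
          pop = [[random.randint(0, 1) for _ in range(n_bits)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              def select():          # binary tournament selection
                  a, b = random.sample(pop, 2)
                  return a if fitness(a) >= fitness(b) else b
              nxt = []
              while len(nxt) < pop_size:
                  p1, p2 = select(), select()
                  if random.random() < p_crossover:     # one-point crossover
                      cut = random.randrange(1, n_bits)
                      p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  # independent bit-flip mutation on each child
                  nxt.extend([[bit ^ (random.random() < p_mutation) for bit in c]
                              for c in (p1, p2)])
              pop = nxt[:pop_size]
          return max(pop, key=fitness)

      best = genetic_algorithm(sum, n_bits=32)   # OneMax: maximize ones
      print(sum(best))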

  8. Is there a best hyperspectral detection algorithm?

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.

    2009-05-01

    A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions, does not necessarily translate to superiority in real-world applications.
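
    One of the standard detectors such reviews cover is the spectral matched filter; a sketch follows, offered only as an example of the class of algorithms discussed, not the paper's conclusion. Background statistics are estimated from the cube itself, and the data and shapes are illustrative.

      import numpy as np

      def matched_filter(cube, target):
          # cube: (N, B) pixel spectra; target: (B,) target signature
          mu = cube.mean(axis=0)
          X = cube - mu
          Sigma = X.T @ X / len(cube)            # background covariance estimate
          w = np.linalg.solve(Sigma, target - mu)
          w /= (target - mu) @ w                 # unit response for a pure target
          return X @ w                           # detection score per pixel

      rng = np.random.default_rng(0)
      bg = rng.multivariate_normal(np.zeros(20), np.eye(20), size=5000)
      t = np.full(20, 2.0)
      scene = np.vstack([bg, bg[:5] + t])        # implant five target pixels
      scores = matched_filter(scene, t)
      print(scores[-5:].mean(), scores[:-5].mean())   # targets ~1, background ~0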

  9. Color sorting algorithm based on K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, BaoFeng; Huang, Qian

    2009-11-01

    In the process of raisin production, a variety of color impurities arise which need to be removed effectively. A new kind of efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for pre-processing, and the gray-scale distribution characteristic of the raisin image was found. In order to obtain the chromatic-aberration image and reduce disturbance, we performed frame subtraction, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors, mildew, spots and other external features, image characteristics were computed so as to fully reflect the quality differences between raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; in accordance with these, the image data were divided into different categories, thereby making the categories of abnormal colors distinct. By the use of this algorithm, raisins of abnormal colors and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted grains was less than one eighth.
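
    The clustering stage is plain k-means on per-pixel feature vectors; raw RGB is used below for illustration, whereas the paper's features come from its subtraction and wavelet steps. A compact sketch:

      import numpy as np

      def kmeans(pixels, k=3, n_iter=20, seed=0):
          # pixels: (N, 3) color vectors; returns labels and cluster centers
          rng = np.random.default_rng(seed)
          centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
          for _ in range(n_iter):
              # assign each pixel to its nearest center
              d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              # move each center to the mean of its assigned pixels
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = pixels[labels == j].mean(axis=0)
          return labels, centers

      # usage: cluster pixel colors, then flag clusters far from the normal color
      img = np.random.rand(100, 100, 3)
      labels, centers = kmeans(img.reshape(-1, 3).astype(float), k=3)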

  10. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
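
    The serial reference computation being parallelized finds, for each access, its LRU stack distance; a cache of capacity C hits exactly when the distance is at most C, so one pass yields hit ratios for every cache size at once. A simple (linear-time-per-access) sketch:

      def lru_stack_distances(trace):
          # For each reference, return its LRU stack distance (position from
          # the top of the recency stack), or None on a cold miss.
          stack, distances = [], []
          for ref in trace:
              if ref in stack:
                  depth = stack.index(ref) + 1   # 1-based distance from the top
                  stack.remove(ref)
              else:
                  depth = None                   # first touch: infinite distance
              stack.insert(0, ref)               # ref becomes most recently used
              distances.append(depth)
          return distances

      print(lru_stack_distances("abcba"))   # [None, None, None, 2, 3]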

  11. New algorithms for binary wavefront optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolong; Kner, Peter

    2015-03-01

    Binary amplitude modulation promises to allow rapid focusing through strongly scattering media with a large number of segments due to the faster update rates of digital micromirror devices (DMDs) compared to spatial light modulators (SLMs). While binary amplitude modulation has a lower theoretical enhancement than phase modulation, the faster update rate should more than compensate for the difference, a factor of π^2/2. Here we present two new algorithms, a genetic algorithm and a transmission matrix algorithm, for optimizing the focus with binary amplitude modulation that achieve enhancements close to the theoretical maximum. Genetic algorithms have been shown to work well in noisy environments and we show that the genetic algorithm performs better than a stepwise algorithm. Transmission matrix algorithms allow complete characterization and control of the medium but require phase control either at the input or output. Here we introduce a transmission matrix algorithm that works with only binary amplitude control and intensity measurements. We apply these algorithms to binary amplitude modulation using a Texas Instruments Digital Micromirror Device. Here we report an enhancement of 152 with 1536 segments (9.90%×N) using a genetic algorithm with binary amplitude modulation and an enhancement of 136 with 1536 segments (8.9%×N) using an intensity-only transmission matrix algorithm.

  12. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four index integral transformation; and (3) the calculation of diatomic two electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.
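
    The first kernel, a sparse symmetric matrix-vector product, is easy to state precisely; a sketch storing only the upper triangle in CSR form follows (pure Python loops for clarity; the vectorization strategies the paper studies restructure exactly these loops):

      import numpy as np

      def sym_csr_matvec(indptr, indices, data, x):
          # y = A x for symmetric A stored as its upper triangle (including
          # the diagonal) in CSR form; each stored a_ij also acts as a_ji.
          y = np.zeros_like(x)
          for i in range(len(indptr) - 1):
              for p in range(indptr[i], indptr[i + 1]):
                  j, a = indices[p], data[p]
                  y[i] += a * x[j]
                  if i != j:                 # mirror the off-diagonal entry
                      y[j] += a * x[i]
          return y

      # 2x2 example: A = [[2, 1], [1, 3]], upper triangle stored row by row
      indptr, indices = [0, 2, 3], [0, 1, 1]
      data, x = [2.0, 1.0, 3.0], np.array([1.0, 1.0])
      print(sym_csr_matvec(indptr, indices, data, x))   # [3. 4.]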

  13. A compilation of jet finding algorithms

    SciTech Connect

    Flaugher, B.; Meier, K.

    1992-12-31

    Technical descriptions of jet finding algorithms currently in use in p-pbar collider experiments (CDF, UA1, UA2), e+e- experiments and Monte-Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E_T and P_T of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five various incarnations of this approach have been described.

  14. A synthesized heuristic task scheduling algorithm.

    PubMed

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, there are three levels of priority for choosing tasks: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally the algorithm chooses tasks with fewer predecessors. In the resource selection phase, task duplication is used to reduce the inter-resource communication cost, and forecasting the impact of an assignment on all children of the current task permits better decisions in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms through randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm can achieve better scheduling performance.
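
    A sketch of the second prioritizing criterion (longest path to the exit task), which is the upward-rank idea used by HEFT-style list schedulers; the task graph and costs below are invented for illustration.

        from functools import lru_cache

        cost = {"A": 3, "B": 2, "C": 4, "D": 1}                    # computation cost per task
        succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # DAG successors

        @lru_cache(maxsize=None)
        def rank(task):
            """Task cost plus the longest downstream path to the exit task."""
            return cost[task] + max((rank(s) for s in succ[task]), default=0)

        order = sorted(cost, key=rank, reverse=True)   # higher rank is scheduled first
        print(order)                                   # ['A', 'C', 'B', 'D']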

  15. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is presented: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied to computational problems that can be cast as path-based problems; in particular, its implementation can be treated as a shortest path search for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for NP-hard problems related to path discovery and for many practical optimization problems, and extensions of it can be applied to general shortest path problems.

  16. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, with images that can be less than one or two pixels wide, detecting them early enough for the pilot to take evasive action is difficult. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning scheme) is obtaining a sufficiently large and representative set of training examples.

  17. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  18. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to be run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis was developed based on the intensity changes of the fundus reflex.

  19. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  20. Region processing algorithm for HSTAMIDS

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, Dominic K. C.

    2006-05-01

    The AN/PSS-14 (a.k.a. HSTAMIDS) has been tested for its performance in South East Asia (Thailand), southern Africa (Namibia) and, in November of 2005, in South West Asia (Afghanistan). The system has proven effective in manual demining, particularly in discriminating indigenous metallic artifacts in the minefields. The Humanitarian Demining Research and Development (HD R&D) Program has sought to further improve the system to address specific needs in several areas. One particular area of these improvement efforts is the development of a mine detection/discrimination improvement software algorithm called Region Processing (RP). RP is an innovative processing technique designed to work on a set of data acquired in a unique sweep pattern over a region-of-interest (ROI). The RP team is a joint effort of three universities (the University of Florida, the University of Missouri, and Duke University), currently led by the University of Florida. This paper describes the state-of-the-art Region Processing algorithm, its implementation in the current HSTAMIDS system, and its most recent test results.

  1. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance-reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying the variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
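
    The dissertation's exact sampling scheme is not reproduced here, but the core idea, estimating the mean of an expensive recourse function via a cheap piecewise-linear surrogate plus a small correction, is the classic control-variate construction. A generic sketch, with all functions and sample sizes invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)

        def recourse(x):
            """Stand-in for a computationally expensive recourse function."""
            return np.maximum(x - 1.0, 0.0) ** 1.5

        def surrogate(x):
            """Cheap piecewise-linear approximation of the recourse function."""
            return np.maximum(x - 1.0, 0.0)

        # The surrogate's mean can be estimated very cheaply from a huge sample
        # (or, for a true piecewise-linear function, often computed exactly).
        mu_g = surrogate(rng.normal(size=2_000_000)).mean()

        x = rng.normal(size=2_000)                              # small "expensive" sample
        naive = recourse(x).mean()
        corrected = (recourse(x) - surrogate(x)).mean() + mu_g  # control-variate estimator
        print(naive, corrected)   # the corrected estimator has much lower variance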

  2. Quantum Algorithms for Fermionic Simulations

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo

    2001-06-01

    The probabilistic simulation of quantum systems in classical computers is known to be limited by the so-called sign or phase problem, a problem believed to be of exponential complexity. This "disease" manifests itself by the exponentially hard task of estimating the expectation value of an observable with a given error. Therefore, probabilistic simulations on a classical computer do not seem to qualify as a practical computational scheme for general quantum many-body problems. The limiting factors, for whatever reasons, are negative or complex-valued probabilities whether the simulations are done in real or imaginary time. In 1981 Richard Feynman raised some provocative questions in connection to the "exact imitation" of such systems using a special device named a "quantum computer." Feynman hesitated about the possibility of imitating fermion systems using such a device. Here we address some of his concerns and, in particular, investigate the simulation of fermionic systems. We show how quantum algorithms avoid the sign problem by reducing the complexity from exponential to polynomial. Our demonstration is based upon the use of isomorphisms of *-algebras (spin-particle transformations) which connect different models of quantum computation. In particular, we present fermionic models (the fabled "Grassmann Chip"); but, of course, these models are not the only ones since our spin-particle connections allow us to introduce more "esoteric" models of computation. We present specific quantum algorithms that illustrate the main points of our algebraic approach.

  3. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. Comparison with other well-known scoring functions shows that LISA has advantages over many existing scoring functions in modeling protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An artificial neural network (ANN) was also used to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101

  4. The Aquarius Salinity Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David

    2012-01-01

    The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing only v-pol TB are used for this last step.

  5. A Breeder Algorithm for Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Wang, S.; Ware, A. S.; Hirshman, S. P.; Spong, D. A.

    2003-10-01

    An optimization algorithm that combines the global parameter space search properties of a genetic algorithm (GA) with the local parameter search properties of a Levenberg-Marquardt (LM) algorithm is described. Optimization algorithms used in the design of stellarator configurations are often classified as either global (such as GA and differential evolution algorithm) or local (such as LM). While nonlinear least-squares methods such as LM are effective at minimizing a cost-function based on desirable plasma properties such as quasi-symmetry and ballooning stability, whether or not this is a local or global minimum is unknown. The advantage of evolutionary algorithms such as GA is that they search a wider range of parameter space and are not susceptible to getting stuck in a local minimum of the cost function. Their disadvantage is that in some cases the evolutionary algorithms are ineffective at finding a minimum state. Here, we describe the initial development of the Breeder Algorithm (BA). BA consists of a genetic algorithm outer loop with an inner loop in which each generation is refined using a LM step. Initial results for a quasi-poloidal stellarator optimization will be presented, along with a comparison to existing optimization algorithms.
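
    A minimal sketch of the GA-outer/LM-inner structure described above (our illustration on a toy least-squares cost, not a stellarator cost function); SciPy's Levenberg-Marquardt solver refines every individual once per generation, and the population size and mutation scale are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)

        def residuals(x):
            """Toy cost terms standing in for plasma-property penalties (Himmelblau)."""
            return np.array([x[0] ** 2 + x[1] - 11.0, x[0] + x[1] ** 2 - 7.0])

        def breeder(pop_size=20, generations=15, sigma=0.5):
            pop = rng.uniform(-5, 5, size=(pop_size, 2))
            for _ in range(generations):
                # Inner loop: one Levenberg-Marquardt refinement per individual
                pop = np.array([least_squares(residuals, p, method="lm", max_nfev=20).x
                                for p in pop])
                cost = np.array([np.sum(residuals(p) ** 2) for p in pop])
                parents = pop[np.argsort(cost)][: pop_size // 2]
                # Outer GA loop: clone and mutate the better half
                kids = parents[rng.integers(len(parents), size=pop_size - len(parents))]
                kids = kids + rng.normal(scale=sigma, size=kids.shape)
                pop = np.vstack([parents, kids])
            return min(pop, key=lambda p: np.sum(residuals(p) ** 2))

        print(breeder())   # converges to one of the four Himmelblau minima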

  6. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  7. On mapping systolic algorithms onto the hypercube

    SciTech Connect

    Ibarra, O.H.; Sohn, S.M. )

    1990-01-01

    Much effort has been devoted toward developing efficient algorithms for systolic arrays. Here the authors consider the problem of mapping these algorithms into efficient algorithms for a fixed-size hypercube architecture. They describe in detail several optimal implementations of algorithms given for one-way one- and two-dimensional systolic arrays. Since interprocessor communication is many times slower than local computation in parallel computers built to date, the problem of efficient communication is specifically addressed for these mappings. In order to experimentally validate the technique, five systolic algorithms were mapped in various ways onto a 64-node NCUBE/7 MIMD hypercube machine. The algorithms are for the following problems: the shuffle scheduling problem, finite impulse response filtering, linear context-free language recognition, matrix multiplication, and computing the Boolean transitive closure. Experimental evidence indicates that good performance is obtained for the mappings.

  8. Fast training algorithms for multilayer neural nets.

    PubMed

    Brent, R P

    1991-01-01

    An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.

  9. Visualizing output for a data learning algorithm

    NASA Astrophysics Data System (ADS)

    Carson, Daniel; Graham, James; Ternovskiy, Igor

    2016-05-01

    This paper details the process we went through to visualize the output for our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One example of a proposed application of this algorithm would be traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of already existing data and related research material with which to work. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal; however, we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from our algorithm on a few of the traffic-based scenarios we designed.

  10. A novel chaos danger model immune algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Qingyang; Wang, Song; Zhang, Li; Liang, Ying

    2013-11-01

    Making use of the ergodicity and randomness of chaos, a novel chaos danger model immune algorithm (CDMIA) is presented by combining the benefits of chaos and the danger model immune algorithm (DMIA). To maintain the diversity of antibodies and ensure the performance of the algorithm, two chaotic operators are proposed. Chaotic disturbance is used for updating the danger antibody to exploit the local solution space, and chaotic regeneration is applied to the safe antibody to explore the entire solution space. In addition, the performance of the algorithm is examined on several benchmark problems. The experimental results indicate that the diversity of the population is improved noticeably, and the CDMIA exhibits a higher efficiency than the danger model immune algorithm and other optimization algorithms.
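
    The paper's exact operator definitions are not reproduced here, but a common way to realize such operators is with the logistic map; the following sketch shows what the two chaotic operators might look like, with bounds and scale chosen purely for illustration.

        import numpy as np

        def logistic(x):
            """Fully chaotic logistic map, ergodic on (0, 1)."""
            return 4.0 * x * (1.0 - x)

        def chaotic_disturbance(antibody, x, scale=0.1, lo=-5.0, hi=5.0):
            """Local exploitation: small chaotic step around a 'danger' antibody."""
            x = logistic(x)
            step = scale * (hi - lo) * (x - 0.5)
            return np.clip(antibody + step, lo, hi), x

        def chaotic_regeneration(n, x, lo=-5.0, hi=5.0):
            """Global exploration: rebuild a 'safe' antibody from the chaotic orbit."""
            genes = np.empty(n)
            for i in range(n):
                x = logistic(x)
                genes[i] = lo + (hi - lo) * x
            return genes, x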

  11. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.

  12. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of the signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that an NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
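
    A minimal sketch of the Lorentzian-peak matching-pursuit idea: each iteration projects the residual onto a grid of normalized Lorentzian atoms and peels off the best match. The discrete center/width grid and fixed peak count are our simplifications for illustration.

        import numpy as np

        def lorentzian(f, f0, w):
            return (w / np.pi) / ((f - f0) ** 2 + w ** 2)

        def lpmp(signal, freqs, widths, n_peaks=5):
            residual = np.array(signal, dtype=float)
            peaks = []
            for _ in range(n_peaks):
                best = None
                for f0 in freqs:                 # candidate peak centers
                    for w in widths:             # candidate peak widths
                        atom = lorentzian(freqs, f0, w)
                        atom /= np.linalg.norm(atom)
                        c = residual @ atom      # projection coefficient
                        if best is None or abs(c) > abs(best[0]):
                            best = (c, f0, w, atom)
                c, f0, w, atom = best
                residual -= c * atom             # peel off the matched Lorentzian
                peaks.append((f0, w, c))
            return peaks, residual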

  13. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (especially more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
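
    As a minimal illustration of the weighted-sum scalarization mentioned above, the sketch below sweeps the weight on a two-objective toy problem to trace an approximate Pareto front; the objectives are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        f1 = lambda x: x[0] ** 2                # first objective
        f2 = lambda x: (x[0] - 2.0) ** 2        # second, conflicting objective

        front = []
        for w in np.linspace(0.0, 1.0, 11):     # sweep the scalarization weight
            res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=[0.0])
            front.append((f1(res.x), f2(res.x)))
        print(front)                            # samples of the Pareto front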

  14. Realization of a scalable Shor algorithm.

    PubMed

    Monz, Thomas; Nigg, Daniel; Martinez, Esteban A; Brandl, Matthias F; Schindler, Philipp; Rines, Richard; Wang, Shannon X; Chuang, Isaac L; Blatt, Rainer

    2016-03-01

    Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, hardware, quantum error correction, and the algorithmic realization itself need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four "cache qubits" and by implementing generalized arithmetic operations, known as modular multipliers. This algorithm has been realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%. PMID:26941315

  15. Orbital objects detection algorithm using faint streaks

    NASA Astrophysics Data System (ADS)

    Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya

    2016-02-01

    This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities in optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise ratio (e.g., 3 or more) is required, whereas in our proposed algorithm the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise-ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
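
    Steps (1) and (2) can be sketched as an integer shear followed by column sums: shearing aligns a streak of a given slope with an image column, so the signal adds coherently over the rows while background noise grows only as the square root of the row count. The slope grid and detection threshold below are illustrative.

        import numpy as np

        def streak_profile(img, slope):
            """Shear rows so streaks of `slope` become vertical, then sum columns."""
            rows, _ = img.shape
            sheared = np.empty_like(img, dtype=float)
            for r in range(rows):
                sheared[r] = np.roll(img[r], -int(round(slope * r)))
            return sheared.sum(axis=0)

        def detect_streaks(img, slopes, k=3.0):
            """Scan candidate slopes; flag columns whose sum exceeds mean + k*std."""
            hits = []
            for s in slopes:
                prof = streak_profile(img, s)
                thr = prof.mean() + k * prof.std()
                hits += [(s, col) for col in np.flatnonzero(prof > thr)]
            return hits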

  16. [Algorithm for treating preoperative anemia].

    PubMed

    Bisbe Vives, E; Basora Macaya, M

    2015-06-01

    Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduces the rate of transfusions, improves hemoglobin levels at discharge, and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or whether we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameters, and glomerular filtration rate, we can decide whether to start treatment with intravenous iron alone or with erythropoietin, with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin target will depend on the type of surgery and the patient's characteristics.

  17. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, for tasks such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  18. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and prefix sums.

  19. Improved Heat-Stress Algorithm

    NASA Technical Reports Server (NTRS)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

    NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.

  20. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080

  1. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software by bypassing the fetch-decode-execute operations of traditional processors and by exploiting a greater level of parallelism, using many computational resources at the same time. Creating parallel programs for FPGAs in pure HDL is difficult and time consuming; using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation, and results of the created tools.

  2. SLAP lesions: a treatment algorithm.

    PubMed

    Brockmeyer, Matthias; Tompkins, Marc; Kohn, Dieter M; Lorbach, Olaf

    2016-02-01

    Tears of the superior labrum involving the biceps anchor are a common entity, especially in athletes, and may highly impair shoulder function. If conservative treatment fails, successful arthroscopic repair of symptomatic SLAP lesions has been described in the literature, particularly for young athletes. However, the results in throwing athletes are less successful, with a significant proportion of patients who will not regain their pre-injury level of performance. The clinical results of SLAP repairs in middle-aged and older patients are mixed, with worse results and higher revision rates as compared to younger patients. In this population, tenotomy or tenodesis of the biceps tendon is a viable alternative to SLAP repair in order to improve clinical outcomes. The present article introduces a treatment algorithm for SLAP lesions based upon the recent literature as well as the authors' clinical experience. The type of lesion, age of patient, concomitant lesions, and functional requirements, as well as the sport activity level of the patient, need to be considered. Moreover, normal variations and degenerative changes in the SLAP complex have to be distinguished from "true" SLAP lesions in order to improve results and avoid overtreatment. The suggested treatment algorithm is as follows: type I: conservative treatment or arthroscopic debridement; type II: SLAP repair or biceps tenotomy/tenodesis; type III: resection of the unstable bucket-handle tear; type IV: SLAP repair (biceps tenotomy/tenodesis if >50 % of the biceps tendon is affected); type V: Bankart repair and SLAP repair; type VI: resection of the flap and SLAP repair; and type VII: refixation of the anterosuperior labrum and SLAP repair.

  3. Evolutionary Algorithm for Optimal Vaccination Scheme

    NASA Astrophysics Data System (ADS)

    Parousis-Orthodoxou, K. J.; Vlachos, D. S.

    2014-03-01

    This work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy in a user-specified network. The produced algorithm uses a basic genetic algorithm with crossover and mutation techniques to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spreading of the disease.

  4. An Intrusion Detection Algorithm Based On NFPA

    NASA Astrophysics Data System (ADS)

    Anming, Zhong

    A process-oriented intrusion detection algorithm based on a Probabilistic Automaton with No Final probabilities (NFPA) is introduced; the system call sequence of a process is used as the source data. By using information in the system call sequences of normal and anomalous processes, anomaly detection and misuse detection are efficiently combined. Experiments show that our algorithm performs better than the classical algorithm in this field.

  5. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
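
    A minimal sketch of the fitting-interval idea using NumPy's Chebyshev routines: one block of samples is replaced by a short coefficient vector. The block length and degree are illustrative, and the flight algorithm's coefficient quantization and error control are not reproduced.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def compress_block(samples, deg=8):
            """Fit one fitting interval with a degree-`deg` Chebyshev series."""
            x = np.linspace(-1.0, 1.0, len(samples))  # map the interval onto [-1, 1]
            return C.chebfit(x, samples, deg)         # deg+1 coefficients per block

        def decompress_block(coeffs, n):
            return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

        # Example: a 64-sample block compressed to 9 coefficients (about 7:1)
        t = np.linspace(0.0, 1.0, 64)
        block = np.sin(9.0 * t) + 0.3 * t
        coeffs = compress_block(block)
        err = np.max(np.abs(decompress_block(coeffs, block.size) - block))
        print(block.size, "->", coeffs.size, "max abs error", err)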

  6. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, each used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that, given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  7. MRCK_3D contact detonation algorithm

    SciTech Connect

    Rougier, Esteban; Munjiza, Antonio

    2010-01-01

    Large-scale combined finite-discrete element method (FEM-DEM) and discrete element method (DEM) simulations involving contact of a large number of separate bodies need an efficient, robust, and flexible contact detection algorithm. In this work the MRCK_3D search algorithm is outlined and its main CPU performances are evaluated. One of the most important aspects of this newly developed search algorithm is that it is applicable to systems consisting of many bodies of different shapes and sizes.

  8. Frontal optimization algorithms for multiprocessor computers

    SciTech Connect

    Sergienko, I.V.; Gulyanitskii, L.F.

    1981-11-01

    The authors describe one of the approaches to the construction of locally optimal optimization algorithms on multiprocessor computers. Algorithms of this type, called frontal, have been realized previously on single-processor computers, although this configuration does not fully exploit the specific features of their computational scheme. Experience with a number of practical discrete optimization problems confirms that the frontal algorithms are highly successful even with single-processor computers. 9 references.

  9. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) increase the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of a tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a Sony a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation) and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.

  10. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the newly introduced ZERO-ONE-INFINITE property. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  11. Streamwise Upwind, Moving-Grid Flow Algorithm

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.; Guruswamy, Guru P.; Obayashi, Shigeru

    1992-01-01

    Extension to moving grids enables computation of transonic flows about moving bodies. Algorithm computes unsteady transonic flow on basis of nondimensionalized thin-layer Navier-Stokes equations in conservation-law form. Solves equations by use of computational grid based on curvilinear coordinates conforming to, and moving with, surface(s) of solid body or bodies in flow field. Simulates such complicated phenomena as transonic flow (including shock waves) about oscillating wing. Algorithm developed by extending prior streamwise upwind algorithm solving equations on fixed curvilinear grid described in "Streamwise Algorithm for Simulation of Flow" (ARC-12718).

  12. Compression algorithm for multideterminant wave functions.

    PubMed

    Weerasinghe, Gihan L; Ríos, Pablo López; Needs, Richard J

    2014-02-01

    A compression algorithm is introduced for multideterminant wave functions which can greatly reduce the number of determinants that need to be evaluated in quantum Monte Carlo calculations. We have devised an algorithm with three levels of compression, the least costly of which yields excellent results in polynomial time. We demonstrate the usefulness of the compression algorithm for evaluating multideterminant wave functions in quantum Monte Carlo calculations, whose computational cost is reduced by factors of between about 2 and over 25 for the examples studied. We have found evidence of sublinear scaling of quantum Monte Carlo calculations with the number of determinants when the compression algorithm is used.

  13. Java implementation of Class Association Rule algorithms

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura; it is discussed in a paper (UCRL-JRNL-232466-DRAFT) to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and the phenotype profile by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.

  14. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

    The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.

  15. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.

  16. Algorithm to search for genomic rearrangements

    NASA Astrophysics Data System (ADS)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2013-10-01

    The aim of this article is to discuss the issue of comparing nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements are described: the Smith-Waterman algorithm, and a new method that searches for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program with a client-server architecture was developed. The algorithms' properties were examined on the Escherichia coli and Arabidopsis thaliana genomes, in preparation for comparing the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.
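
    For reference, a compact implementation of the Knuth-Morris-Pratt search used as a building block in the marker-based method (the standard textbook algorithm, not the authors' code):

        def kmp_search(text, pattern):
            """Return all start positions of `pattern` in `text` in O(n + m) time."""
            # Failure function: longest proper prefix of pattern that is also a suffix
            fail = [0] * len(pattern)
            k = 0
            for i in range(1, len(pattern)):
                while k and pattern[i] != pattern[k]:
                    k = fail[k - 1]
                if pattern[i] == pattern[k]:
                    k += 1
                fail[i] = k
            hits, k = [], 0
            for i, ch in enumerate(text):
                while k and ch != pattern[k]:
                    k = fail[k - 1]
                if ch == pattern[k]:
                    k += 1
                if k == len(pattern):
                    hits.append(i - k + 1)   # a match ends at position i
                    k = fail[k - 1]
            return hits

        print(kmp_search("ACGTACGTGACGT", "ACGT"))   # [0, 4, 9]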

  17. A simple greedy algorithm for reconstructing pedigrees.

    PubMed

    Cowell, Robert G

    2013-02-01

    This paper introduces a simple greedy algorithm for searching for high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but it is believed that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. The algorithm is applied to human and non-human genetic data and in a simulation study. PMID:23164633

  18. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320

  19. Generation of attributes for learning algorithms

    SciTech Connect

    Hu, Yuh-Jyh; Kibler, D.

    1996-12-31

    Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results that demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.

  20. Java implementation of Class Association Rule algorithms

    SciTech Connect

    Tamura, Makio

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura; it is discussed in a paper (UCRL-JRNL-232466-DRAFT) to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and the phenotype profile by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.

  1. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  2. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  3. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.
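
    Since the record does not reproduce the contact density dynamics equations, the sketch below instead uses a different, standard thermostat that provably samples the same Gibbs canonical target for this prototypical example: an underdamped Langevin integrator for a 1D harmonic oscillator, written in Python with illustrative parameter values.

        import numpy as np

        rng = np.random.default_rng(0)

        def langevin_oscillator(steps=200_000, dt=0.01, T=1.0, gamma=1.0):
            """Euler-Maruyama integration of an underdamped Langevin thermostat
            for a 1D harmonic oscillator (m = k = 1); long trajectories sample
            the Gibbs canonical distribution exp(-H/T)."""
            x, p = 1.0, 0.0
            xs = np.empty(steps)
            for i in range(steps):
                # deterministic force -x, friction -gamma*p, plus thermal noise
                p += dt * (-x - gamma * p) \
                     + np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal()
                x += dt * p
                xs[i] = x
            return xs

        xs = langevin_oscillator()
        print("Var(x), should approach T/k = 1:", xs.var())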

  4. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  5. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  6. The performance of the progressive resolution optimizer (PRO) for RapidArc planning in targets with low-density media.

    PubMed

    Kan, Monica W K; Leung, Lucullus H T; Yu, Peter K N

    2013-01-01

    A new version of progressive resolution optimizer (PRO) with an option of air cavity correction has been implemented for RapidArc volumetric-modulated arc therapy (RA). The purpose of this study was to compare the performance of this new PRO with the use of air cavity correction option (PRO10_air) against the one without the use of the air cavity correction option (PRO10_no-air) for RapidArc planning in targets with low-density media of different sizes and complexities. The performance of PRO10_no-air and PRO10_air was initially compared using single-arc plans created for four different simple heterogeneous phantoms with virtual targets and organs at risk. Multiple-arc planning of 12 real patients having nasopharyngeal carcinomas (NPC) and ten patients having non-small cell lung cancer (NSCLC) were then performed using the above two options for further comparison. Dose calculations were performed using both the Acuros XB (AXB) algorithm with the dose to medium option and the analytical anisotropic algorithm (AAA). The effect of using intermediate dose option after the first optimization cycle in PRO10_air and PRO10_no-air was also investigated and compared. Plans were evaluated and compared using target dose coverage, critical organ sparing, conformity index, and dose homogeneity index. For NSCLC cases or cases for which large volumes of low-density media were present in or adjacent to the target volume, the use of the air cavity correction option in PRO10 was shown to be beneficial. For NPC cases or cases for which small volumes of both low- and high-density media existed in the target volume, the use of air cavity correction in PRO10 did not improve the plan quality. Based on the AXB dose calculation results, the use of PRO10_air could produce up to 18% less coverage to the bony structures of the planning target volumes for NPC cases. When the intermediate dose option in PRO10 was used, there was negligible difference observed in plan quality between

  7. Stereotactic Ablative Radiation Therapy for Subcentimeter Lung Tumors: Clinical, Dosimetric, and Image Guidance Considerations

    SciTech Connect

    Louie, Alexander V.; Senan, Suresh; Dahele, Max; Slotman, Ben J.; Verbakel, Wilko F.A.R.

    2014-11-15

    Purpose: Use of stereotactic ablative radiation therapy (SABR) for subcentimeter lung tumors is controversial. We report our outcomes for tumors with diameter ≤1 cm and their visibility on cone beam computed tomography (CBCT) scans and retrospectively evaluate the planned dose using a deterministic dose calculation algorithm (Acuros XB [AXB]). Methods and Materials: We identified subcentimeter tumors from our institutional SABR database. Tumor size was remeasured on an artifact-free phase of the planning 4-dimensional (4D)-CT. Clinical plan doses were generated using either a pencil beam convolution or an anisotropic analytic algorithm (AAA). All AAA plans were recalculated using AXB, and differences among D95 and mean dose for internal target volume (ITV) and planning target volume (PTV) on the average intensity CT dataset, as well as for gross tumor volume (GTV) on the end respiratory phases were reported. For all AAA patients, CBCT scans acquired during each treatment fraction were evaluated for target visibility. Progression-free and overall survival rates were calculated using the Kaplan-Meier method. Results: Thirty-five patients with 37 subcentimeter tumors were eligible for analysis. For the 22 AAA plans recalculated using AXB, mean D95 ± SD values were 2.2 ± 4.4% (ITV) and 2.5 ± 4.8% (PTV) lower using AXB; whereas mean doses were 2.9 ± 4.9% (ITV) and 3.7 ± 5.1% (PTV) lower. Calculated AXB doses were significantly lower in one patient (difference in mean ITV and PTV doses, as well as in mean ITV and PTV D95 ranged from 22%-24%). However, the end respiratory phase GTV received at least 95% of the prescription dose. Review of 92 CBCT scans from all AAA patients revealed that the tumor was visualized in 82 images, and its position could be inferred in other images. The 2-year local progression-free survival was 100%. Conclusions: Patients with subcentimeter lung tumors are good candidates for SABR, given the dosimetry, ability to localize

  8. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile user. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, classifying the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile user data with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile user. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389
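
    The record does not detail how the genetic algorithm optimizes the decision tree, so the following minimal Python sketch shows one common arrangement under stated assumptions: a small GA searches over decision-tree hyperparameters, using cross-validated accuracy as fitness. The dataset, gene encoding, and GA settings are hypothetical illustrations, not the authors' design.

        import random
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # hypothetical stand-in for the mobile-user records (features + labels)
        X, y = make_classification(n_samples=400, n_features=10, random_state=0)

        def fitness(genes):
            """Cross-validated accuracy of a tree built with the encoded settings."""
            depth, min_leaf = genes
            tree = DecisionTreeClassifier(max_depth=depth,
                                          min_samples_leaf=min_leaf,
                                          random_state=0)
            return cross_val_score(tree, X, y, cv=5).mean()

        def mutate(genes):
            depth, min_leaf = genes
            if random.random() < 0.5:
                depth = max(1, depth + random.choice([-1, 1]))
            else:
                min_leaf = max(1, min_leaf + random.choice([-2, 2]))
            return (depth, min_leaf)

        random.seed(0)
        pop = [(random.randint(1, 12), random.randint(1, 20)) for _ in range(12)]
        for _ in range(15):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[:4]                       # truncation selection
            pop = parents + [mutate(random.choice(parents)) for _ in range(8)]

        best = max(pop, key=fitness)
        print("best (max_depth, min_samples_leaf):", best, "accuracy:", fitness(best))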

  9. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile user. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, classifying the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile user data with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile user. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  11. ICESat Waveform Ground Processing Algorithm

    NASA Astrophysics Data System (ADS)

    Roberts, L.; Zwally, H.; Brenner, A. C.; Saba, J.; Yi, D.

    2003-12-01

    Gaussian to determine the mean surface elevation. We present algorithms that use single or double Gaussians to fit the return waveform and show how the mean elevation and surface characteristics are calculated from the functional fit. The initial estimates and covariance matrix are set to optimize the fit to the leading edge of the return waveform corresponding to the largest Gaussian peak. Over ice surfaces, two Gaussian peaks are allowed to account for the extended tail of the returns that have high forward scattering components, or two distinct surfaces in the footprint. Over land, up to six Gaussian peaks are allowed. The algorithm was fine tuned using the first 36 days of data, which included returns over the ice regions with high detector/amplifier saturation and strong atmospheric forward scattering.

  12. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  13. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.

  14. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used the two pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. In the two pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects; the algorithm therefore reproduces the cone beam artifacts (i.e., the error image) produced by ED objects and then subtracts it from the original image. The GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function, calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, so it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used testing objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.

  15. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    PubMed Central

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper, we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes’ mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752

  16. Gaining Algorithmic Insight through Simplifying Constraints.

    ERIC Educational Resources Information Center

    Ginat, David

    2002-01-01

    Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic simplifying constraints which involves simplification of a given problem to a problem in which constraints are imposed on the input data. Presents three examples involving…

  17. Force-Control Algorithm for Surface Sampling

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Quadrelli, Marco B.; Phan, Linh

    2008-01-01

    A G-FCON algorithm is designed for small-body surface sampling. It has a linearization component and a feedback component to enhance performance. The algorithm regulates the contact force between the tip of a robotic arm attached to a spacecraft and a surface during sampling.

  18. Advancing-Front Algorithm For Delaunay Triangulation

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    Efficient algorithm performs Delaunay triangulation to generate unstructured grids for use in computing two-dimensional flows. Once grid generated, one can optionally call upon additional subalgorithm that removes diagonal lines from quadrilateral cells nearly rectangular. Resulting approximately rectangular grid reduces cost per iteration of flow-computing algorithm.

  19. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrived at the fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and we proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to a unique solution. Because the obtained algorithm exhibits slow convergence speed, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in the numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in the medical applications of emission tomography.

  20. Genetic algorithms and the immune system

    SciTech Connect

    Forrest, S. (Dept. of Computer Science); Perelson, A.S.

    1990-01-01

    Using genetic algorithm techniques we introduce a model to examine the hypothesis that antibody and T cell receptor genes evolved so as to encode the information needed to recognize schemas that characterize common pathogens. We have implemented the algorithm on the Connection Machine for 16,384 64-bit antigens and 512 64-bit antibodies. 8 refs.

  1. Perturbation resilience and superiorization of iterative algorithms

    NASA Astrophysics Data System (ADS)

    Censor, Y.; Davidi, R.; Herman, G. T.

    2010-06-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
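
    A minimal sketch of the superiorization pattern, assuming a toy linear feasibility problem (half-space constraints) and the squared norm standing in for the paper's total-variation objective: the perturbation-resilient projection algorithm is interleaved with summable steps in a nonascending direction of the objective.

        import numpy as np

        # feasibility problem: find x with A x <= b (each row a half-space)
        A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
        b = np.array([4.0, 0.0, 0.0])

        def project_halfspaces(x):
            """One sweep of projections onto the half-spaces a_i.x <= b_i."""
            for a_i, b_i in zip(A, b):
                r = a_i @ x - b_i
                if r > 0:
                    x = x - (r / (a_i @ a_i)) * a_i
            return x

        def superiorized(x, f_grad, n_iter=50):
            for k in range(n_iter):
                g = f_grad(x)
                if np.linalg.norm(g) > 0:
                    v = -g / np.linalg.norm(g)   # nonascending direction of f
                    x = x + 0.9 ** k * v         # summable perturbations
                x = project_halfspaces(x)        # the original, resilient algorithm
            return x

        # f(x) = ||x||^2 stands in for total variation; gradient is 2x
        x = superiorized(np.array([3.0, 3.0]), f_grad=lambda x: 2 * x)
        print("feasible point with reduced ||x||^2:", x)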

  2. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This new approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function parameters of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm.

  3. Pitch-Learning Algorithm For Speech Encoders

    NASA Technical Reports Server (NTRS)

    Bhaskar, B. R. Udaya

    1988-01-01

    Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.

  4. Quantum Algorithm for Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Joag, Pramod; Mehendale, Dhananjay

    The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This problem is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving, in each iterative step, a new system of linear equations. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these problems can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
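
    The NNLS reformulation described above can be tried classically in a few lines. The sketch below, for a hypothetical standard-form LP (minimize c.x subject to Ax = b, x >= 0), stacks the primal constraints, the dual feasibility equations, and the zero-duality-gap condition into one nonnegative least-squares system and hands it to the Lawson-Hanson solver available in scipy; the toy problem instance is invented.

        import numpy as np
        from scipy.optimize import nnls

        # standard-form LP: minimize c.x subject to A x = b, x >= 0 (toy instance)
        A = np.array([[1.0, 1.0]])
        b = np.array([1.0])
        c = np.array([1.0, 2.0])
        m, n = A.shape

        # nonnegative unknowns z = [x, s, y_plus, y_minus], dual y = y_plus - y_minus
        # stacked rows: A x = b;  A^T y + s = c;  c.x - b.y = 0 (zero duality gap)
        M = np.block([
            [A,                np.zeros((m, n)), np.zeros((m, m)), np.zeros((m, m))],
            [np.zeros((n, n)), np.eye(n),        A.T,              -A.T],
            [c[None, :],       np.zeros((1, n)), -b[None, :],      b[None, :]],
        ])
        q = np.concatenate([b, c, [0.0]])

        z, residual = nnls(M, q)          # Lawson-Hanson active-set solver
        print("optimal x:", z[:n])        # residual ~ 0 certifies optimality
        print("residual:", residual)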

  5. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if they occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the different sampling rates selected for acquisition of signals. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms visually as well as numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.

  6. Evaluation of TCP congestion control algorithms.

    SciTech Connect

    Long, Robert Michael

    2003-12-01

    Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high speed, Wide Area Network links to permit remote access to their Supercomputer systems. The current TCP congestion algorithm does not take full advantage of high delay, large bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to find if an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30ms delay, and two-to-one with a 30ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.

  7. The [Gamma] Algorithm and Some Applications

    ERIC Educational Resources Information Center

    Castillo, Enrique; Jubete, Francisco

    2004-01-01

    In this paper the power of the [gamma] algorithm for obtaining the dual of a given cone and some of its multiple applications is discussed. The meaning of each sequential tableau appearing during the process is interpreted. It is shown that each tableau contains the generators of the dual cone of a given cone and that the algorithm updates the…

  8. Excursion-Set-Mediated Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.

  9. Derivative Free Gradient Projection Algorithms for Rotation

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2004-01-01

    A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…

  10. Explaining the Cross-Multiplication Algorithm

    ERIC Educational Resources Information Center

    Handa, Yuichi

    2009-01-01

    Many high-school mathematics teachers have likely been asked by a student, "Why does the cross-multiplication algorithm work?" It is a commonly used algorithm when dealing with proportion problems, conversion of units, or fractional linear equations. For most teachers, the explanation usually involves the idea of finding a common denominator--one…
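
    For reference, the standard one-line justification multiplies both sides of the proportion by the product of the denominators (assuming both are nonzero):

        \[
          \frac{a}{b} = \frac{c}{d}
          \quad\Longrightarrow\quad
          bd\cdot\frac{a}{b} = bd\cdot\frac{c}{d}
          \quad\Longrightarrow\quad
          ad = bc \qquad (b \neq 0,\ d \neq 0)
        \]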

  11. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  12. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758

  13. Kalman plus weights: a time scale algorithm

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2001-01-01

    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
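
    The record names the two ingredients: a Kalman filter supplying per-clock predictions, and a basic time scale equation (BTSE) whose weights are inversely proportional to the clocks' white-FM variances. A minimal sketch of the weighting half only, with made-up numbers standing in for the Kalman outputs:

        import numpy as np

        def btse_correction(measured, predicted, white_fm_var):
            """Basic time scale equation: weighted average of each clock's
            deviation from its prediction; weights are inversely proportional
            to the clocks' white-FM variances (normalized to sum to 1)."""
            w = 1.0 / np.asarray(white_fm_var)
            w /= w.sum()
            return np.sum(w * (np.asarray(measured) - np.asarray(predicted)))

        # three clocks: measured offsets vs. predicted offsets (made-up numbers)
        print(btse_correction(measured=[1.02e-6, 0.98e-6, 1.10e-6],
                              predicted=[1.00e-6, 1.00e-6, 1.00e-6],
                              white_fm_var=[1e-22, 2e-22, 8e-22]))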

  14. Algorithm for genome contig assembly. Final report

    SciTech Connect

    1995-09-01

    An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.

  15. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  16. The Porter Stemming Algorithm: Then and Now

    ERIC Educational Resources Information Center

    Willett, Peter

    2006-01-01

    Purpose: In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach: Review of literature and research involving use…
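
    A minimal usage sketch, assuming NLTK's implementation of Porter's 1980 algorithm is an acceptable stand-in for the original:

        from nltk.stem.porter import PorterStemmer  # pip install nltk

        stemmer = PorterStemmer()
        for word in ["connect", "connected", "connecting", "connections"]:
            # all four variants conflate to the same stem, "connect"
            print(word, "->", stemmer.stem(word))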

  17. Global Optimality of the Successive Maxbet Algorithm.

    ERIC Educational Resources Information Center

    Hanafi, Mohamed; ten Berge, Jos M. F.

    2003-01-01

    It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)

  18. A Stemming Algorithm for Latin Text Databases.

    ERIC Educational Resources Information Center

    Schinke, Robyn; And Others

    1996-01-01

    Describes the design of a stemming algorithm for searching Latin text databases. The algorithm uses a longest-match approach with some recoding but differs from most stemmers in its use of two separate suffix dictionaries for processing query and database words that enables users to pursue specific searches for single grammatical forms of words.…

  19. IUS guidance algorithm gamma guide assessment

    NASA Technical Reports Server (NTRS)

    Bray, R. E.; Dauro, V. A.

    1980-01-01

    The Gamma Guidance Algorithm which controls the inertial upper stage is described. The results of an independent assessment of the algorithm's performance in satisfying the NASA missions' targeting objectives are presented. The results of a launch window analysis for a Galileo mission, and suggested improvements are included.

  20. Formation Algorithms and Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward

    2004-01-01

    Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.

  1. Streamlining algorithms for complete adaptation

    NASA Technical Reports Server (NTRS)

    Erickson, J. C., Jr. (Editor); Chevallier, J. P.; Goodyer, Michael J.; Hornung, Hans G.; Mignosi, Andre; Sears, William R.; Smith, J.; Wedemeyer, Erich H.

    1990-01-01

    For purposes of the adaptive-wall algorithms to be described, the modern era is considered to have begun with the simultaneous, independent recognition of the concept of matching an experimental inner flow across an interface to a computed outer flow by Chevallier, Ferri, Goodyer, Lissaman, Rubbert, and Sears. Fundamental investigations of the adaptive-wall matching concept by means of numerical simulations and theoretical considerations are described. An overview of the development and operation of 2D adaptive-wall facilities from about 1970 until the present is given, followed by similar material for 3D adaptive-wall facilities from approximately 1978 until the present. A general formulation of adaptation strategy is presented, with a theoretical basis for adaptation followed by 2D flexible, impermeable-wall applications; 2D ventilated-wall applications; 3D flexible, impermeable-wall applications; and 3D ventilated-wall applications. Representative 2D and 3D experimental results are given, followed by a discussion of limitations and open questions.

  2. Genetic algorithms for route discovery.

    PubMed

    Gelenbe, Erol; Liu, Peixiang; Lainé, Jeremy

    2006-12-01

    Packet routing in networks requires knowledge about available paths, which can be either acquired dynamically while the traffic is being forwarded, or statically (in advance) based on prior information of a network's topology. This paper describes an experimental investigation of path discovery using genetic algorithms (GAs). We start with the quality-of-service (QoS)-driven routing protocol called "cognitive packet network" (CPN), which uses smart packets (SPs) to dynamically select routes in a distributed autonomic manner based on a user's QoS requirements. We extend it by introducing a GA at the source routers, which modifies and filters the paths discovered by the CPN. The GA can combine the paths that were previously discovered to create new untested but valid source-to-destination paths, which are then selected on the basis of their "fitness." We present an implementation of this approach, where the GA runs in background mode so as not to overload the ingress routers. Measurements conducted on a network test bed indicate that when the background-traffic load of the network is light to medium, the GA can result in improved QoS. When the background-traffic load is high, it appears that the use of the GA may be detrimental to the QoS experienced by users as compared to CPN routing because the GA uses less timely state information in its decision making. PMID:17186801
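
    The record describes combining previously discovered paths into new untested but valid source-to-destination paths. One standard way to realize such a crossover, sketched below under our own assumptions about the data structures (paths as node lists), is to splice two parents at a shared intermediate node and discard offspring that contain loops; this is an illustration, not CPN's actual operator.

        import random

        def crossover(path_a, path_b):
            """Splice two source->destination paths at a shared intermediate node.
            Returns a new loop-free path, or None if no valid child exists."""
            shared = [n for n in path_a[1:-1] if n in path_b[1:-1]]
            if not shared:
                return None
            node = random.choice(shared)
            child = path_a[:path_a.index(node)] + path_b[path_b.index(node):]
            if len(set(child)) != len(child):    # reject offspring with loops
                return None
            return child

        # two previously discovered paths from node 'S' to node 'D' (hypothetical)
        p1 = ["S", "a", "b", "c", "D"]
        p2 = ["S", "x", "b", "y", "D"]
        print(crossover(p1, p2))   # e.g. ['S', 'a', 'b', 'y', 'D']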

  4. The algorithmic origins of life

    PubMed Central

    Walker, Sara Imari; Davies, Paul C. W.

    2013-01-01

    Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and, in particular, that top-down (or downward) causation—where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies—may be a major contributor to the hierarchal structure of living systems. Here, we propose that the emergence of life may correspond to a physical transition associated with a shift in the causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems. PMID:23235265

  5. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique to obtain ionosphere measurements, such as an estimation of virtual height versus frequency scanning. It is performed by a high frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several target types and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder still had to be carried out. This paper is focused on automatic echo detection algorithms implemented specifically for an ionospheric sounder; target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared to the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different case studies were selected according to typical ionospheric and detection conditions.
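
    A minimal sketch of the adaptive-threshold idea on a simulated range profile, assuming a CFAR-style rule (flag a sample that exceeds the local noise mean by k standard deviations, with a guard band around the cell under test); the signal model and constants are illustrative, not values from the study.

        import numpy as np

        rng = np.random.default_rng(1)
        profile = rng.rayleigh(1.0, 500)      # simulated noise-only range profile
        profile[230:236] += 8.0               # injected ionospheric echo

        def adaptive_threshold_detect(x, window=60, guard=8, k=4.0):
            """Flag sample i when it exceeds the mean of the surrounding
            reference cells by k of their standard deviations; the guard
            band keeps the echo itself out of the noise estimate."""
            hits = []
            for i in range(len(x)):
                lo, hi = max(0, i - window), min(len(x), i + window + 1)
                ref = np.r_[x[lo:max(lo, i - guard)],
                            x[min(hi, i + guard + 1):hi]]
                if ref.size and x[i] > ref.mean() + k * ref.std():
                    hits.append(i)
            return hits

        print("detected echo samples:", adaptive_threshold_detect(profile))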

  6. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  7. Intelligent perturbation algorithms for space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and an example task - communications check.

  8. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures possess high bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  9. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
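
    A toy rendering of the three-step direct method as described, with an invented mini-dictionary and correction table, since the survey's actual code tables are not reproduced in the record:

        # step 1 table: defined word combinations -> codes (invented mini-dictionary)
        COMBINATIONS = {
            "fine sand": "ZF",
            "coarse sand": "ZC",
            "sand": "Z",
            "clay": "K",
            "with clay layers": "KL",
        }
        # step 3 table: corrections for incorrect code combinations (invented)
        CORRECTIONS = {("Z", "ZF"): ("ZF",), ("Z", "ZC"): ("ZC",)}

        def code_log(description):
            text, codes = description.lower(), []
            # step 1: longest combinations first, so "fine sand" wins over "sand"
            for phrase in sorted(COMBINATIONS, key=len, reverse=True):
                if phrase in text:
                    codes.append(COMBINATIONS[phrase])
                    text = text.replace(phrase, " ")
            # step 2: delete duplicated codes, keeping the first occurrence
            codes = list(dict.fromkeys(codes))
            # step 3: correct incorrect code combinations
            for bad, good in CORRECTIONS.items():
                if all(c in codes for c in bad):
                    codes = [c for c in codes if c not in bad] + list(good)
            return codes

        print(code_log("Fine sand with clay layers"))   # ['KL', 'ZF']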

  10. Marshall Rosenbluth and the Metropolis algorithm

    SciTech Connect

    Gubernatis, J.E.

    2005-05-15

    The 1953 publication, 'Equation of State Calculations by Very Fast Computing Machines' by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they became unknown to the large simulation physics community that grew from this publication and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.
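
    For readers unfamiliar with it, the algorithm fits in a dozen lines. A minimal sketch sampling the Boltzmann distribution of a particle in a 1D quadratic potential (the 1953 paper treated interacting hard disks in 2D; the potential, temperature, and step size here are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis(energy, x0=0.0, beta=1.0, step=1.0, n=100_000):
            """Metropolis sampling: propose a random move, always accept
            downhill moves, accept uphill moves with probability exp(-beta*dE)."""
            x, samples = x0, np.empty(n)
            for i in range(n):
                x_new = x + rng.uniform(-step, step)
                dE = energy(x_new) - energy(x)
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    x = x_new
                samples[i] = x
            return samples

        xs = metropolis(lambda x: 0.5 * x * x)
        print("mean, variance:", xs.mean(), xs.var())   # variance -> 1/beta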

  11. A Learning Algorithm for Multimodal Grammar Inference.

    PubMed

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.

  12. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process which requires expert tacit knowledge in producing accurate forecast values. This complexity contributes to the gaps between end users and experts. Automating this process by using an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regression and ARIMA), and recent forecasting practices (such as data partitioning, several error measures, and recursive evaluation) were employed. The results of the algorithm successfully tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users lacking in-depth knowledge of the forecasting process.
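
    A minimal sketch of the validation idea, assuming one of the listed techniques (a k-period Moving Average) and one common error measure (MAPE) on made-up data; an implementation passes this kind of check when its numbers match those produced independently in SPSS or Excel.

        import numpy as np

        def moving_average_forecast(series, k=3):
            """One-step-ahead forecasts: each forecast is the mean of the
            k most recent observations."""
            s = np.asarray(series, dtype=float)
            return np.array([s[i - k:i].mean() for i in range(k, len(s))])

        def mape(actual, forecast):
            actual, forecast = np.asarray(actual), np.asarray(forecast)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))

        data = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
        f = moving_average_forecast(data, k=3)
        print("forecasts:", f)
        print("MAPE vs. held-out actuals: %.2f%%" % mape(data[3:], f))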

  13. Research on algorithms for adaptive antenna arrays

    NASA Astrophysics Data System (ADS)

    Widrow, B.; Newman, W.; Gooch, R.; Duvall, K.; Shur, D.

    1981-08-01

    The fundamental efficiency of adaptive algorithms is analyzed. It is found that noise in the adaptive weights increases with convergence speed. This causes loss in mean-square-error performance. Efficiency is considered from the point of view of misadjustment versus speed of convergence. A new version of the LMS algorithm based on Newton's method is analyzed and shown to make maximally efficient use of real-time input data. The performance of this algorithm is not affected by eigenvalue disparity. Practical algorithms can be devised that closely approximate Newton's method. In certain cases, the steepest descent version of LMS performs as well as Newton's method. The efficiency of adaptive algorithms with nonstationary input environments is analyzed where signals, jammers, and background noises can be of a transient and nonstationary nature. A new adaptive filtering method for broadband adaptive beamforming is described which uses both poles and zeros in the adaptive signal filtering paths from the antenna elements to the final array output.
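
    The LMS update discussed above is a one-line rule. A minimal complex-valued sketch under invented signal assumptions; the step size mu exposes the trade-off described in the record, with larger values converging faster at the cost of more weight noise (misadjustment):

        import numpy as np

        rng = np.random.default_rng(0)

        def lms(X, d, mu=0.01):
            """Complex LMS: for each snapshot x, error e = d - w^H x,
            update w <- w + mu * x * conj(e)."""
            w = np.zeros(X.shape[1], dtype=complex)
            for x, d_k in zip(X, d):
                e = d_k - np.vdot(w, x)     # np.vdot conjugates its first arg
                w = w + mu * x * np.conj(e)
            return w

        # 4-element array; desired response tracks the first element (made up)
        X = rng.standard_normal((5000, 4)) + 1j * rng.standard_normal((5000, 4))
        d = X[:, 0] + 0.1 * (rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
        print("converged weights:", np.round(lms(X, d), 3))   # ~ [1, 0, 0, 0]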

  14. Evolutionary development of path planning algorithms

    SciTech Connect

    Hage, M

    1998-09-01

    This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than a specially bred algorithm for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.

  15. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).

  16. Improving the algorithm of temporal relation propagation

    NASA Astrophysics Data System (ADS)

    Shen, Jifeng; Xu, Dan; Liu, Tongming

    2005-03-01

    In a military Multi Agent System, every agent needs to analyze the temporal relationships among tasks or combat behaviors, and it is very important to reflect the battlefield situation in time. The temporal relations among agents are usually very complex, and we model them with an interval algebra (IA) network; an efficient temporal reasoning algorithm is therefore vital in a battle MAS model. The core of temporal reasoning is the path consistency algorithm, so an efficient path consistency algorithm is necessary. In this paper we use the Interval Matrix Calculus (IMC) method to represent the temporal relations, and we optimize the path consistency algorithm by improving the efficiency of temporal relation propagation, building on Allen's path consistency algorithm.
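
    Allen's interval algebra has 13 base relations and a large composition table, so the sketch below illustrates the path consistency loop on the simpler point algebra (relations are subsets of {<, =, >}), which uses the identical tightening update R_ij <- R_ij ∩ (R_ik ∘ R_kj); the IMC-specific optimizations are not reproduced.

        from itertools import product

        # composition of base point relations (standard point-algebra table)
        BASE = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
                ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
                ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'}}

        def compose(r1, r2):
            out = set()
            for a, b in product(r1, r2):
                out |= BASE[(a, b)]
            return out

        def path_consistency(R, n):
            """R[i][j] holds the allowed relations between points i and j.
            Repeatedly tighten R_ij by R_ik composed with R_kj until fixpoint."""
            changed = True
            while changed:
                changed = False
                for i, k, j in product(range(n), repeat=3):
                    if i == j or i == k or k == j:
                        continue
                    tightened = R[i][j] & compose(R[i][k], R[k][j])
                    if tightened != R[i][j]:
                        R[i][j] = tightened
                        changed = True
            return R

        # three time points: 0 < 1 and 1 < 2; relation 0?2 initially unknown
        U = {'<', '=', '>'}
        R = [[U.copy() for _ in range(3)] for _ in range(3)]
        R[0][1], R[1][2] = {'<'}, {'<'}
        print(path_consistency(R, 3)[0][2])   # {'<'} inferred by propagation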

  17. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
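
    The parameter separation rests on the arithmetic-geometric mean inequality; the following sketch (the standard AM-GM majorization, written here under the simplifying assumption that all exponents $a_j$ are positive) shows how a monomial term is replaced by a separable surrogate at the current iterate $x^{(k)}$. With $s = \sum_j a_j$ and weights $w_j = a_j / s$,

    \[
    \prod_j x_j^{a_j}
      = \Big(\prod_j \big(x_j^{(k)}\big)^{a_j}\Big)\prod_j \Big(\frac{x_j}{x_j^{(k)}}\Big)^{a_j}
      \le \Big(\prod_j \big(x_j^{(k)}\big)^{a_j}\Big)\sum_j \frac{a_j}{s}\Big(\frac{x_j}{x_j^{(k)}}\Big)^{s},
    \]

    with equality at $x = x^{(k)}$, so minimizing the surrogate decouples into one-dimensional problems in each $x_j$.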

  18. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels, or with spectral errors correlated between frequency channels, ACLS methods generally outperformed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms makes the new ACLS methods the preferred algorithms for multivariate spectral calibrations.
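
    As background for the hybrid and augmented variants, a minimal NumPy sketch of the plain CLS calibration and prediction steps follows; the matrix shapes and names are assumptions, and the ACLS augmentation itself (adding columns to the concentration matrix for un-modeled spectral effects) is only described in the comment, not implemented.

    import numpy as np

    def cls_calibrate(A, C):
        # A: (n_samples, n_channels) spectra; C: (n_samples, n_components)
        # concentrations. Fit pure-component spectra K in A ~ C @ K.
        # ACLS would augment C with extra columns before this fit.
        K, *_ = np.linalg.lstsq(C, A, rcond=None)
        return K

    def cls_predict(A_new, K):
        # Solve K.T @ c ~ a for each new spectrum a (least squares).
        C_hat, *_ = np.linalg.lstsq(K.T, A_new.T, rcond=None)
        return C_hat.T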

  19. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2 lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise as a prospective operational lightning jump algorithm, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2 configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments are present issues for the 2 lightning jump algorithm, because of the suppressed vertical depth impact on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2 algorithm, 36% of the misses were associated with these two environments (17 storms).
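
    The verification statistics quoted above follow the standard 2x2 contingency-table definitions, sketched below; the function and variable names are illustrative, not from the report.

    def skill_scores(hits, misses, false_alarms, correct_nulls):
        # Standard forecast-verification measures for warn/no-warn events.
        pod = hits / (hits + misses)                  # probability of detection
        far = false_alarms / (hits + false_alarms)    # false alarm ratio
        csi = hits / (hits + misses + false_alarms)   # critical success index
        hss_num = 2 * (hits * correct_nulls - misses * false_alarms)
        hss_den = ((hits + misses) * (misses + correct_nulls)
                   + (hits + false_alarms) * (false_alarms + correct_nulls))
        return pod, far, csi, hss_num / hss_den       # HSS last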

  20. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of routinely tackling transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  1. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hotspot in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after analyzing it based on the theories of AFSA. Experiment results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it performs well and has broad application prospects.

  2. The hierarchical algorithms--theory and applications

    NASA Astrophysics Data System (ADS)

    Su, Zheng-Yao

    Monte Carlo simulations are one of the most important numerical techniques for investigating statistical physical systems. Among these systems, spin models are a typical example which also play an essential role in constructing the abstract mechanism for various complex systems. Unfortunately, traditional Monte Carlo algorithms are afflicted with "critical slowing down" near continuous phase transitions, and the efficiency of the Monte Carlo simulation goes to zero as the size of the lattice is increased. To combat critical slowing down, a very different type of collective-mode algorithm, in contrast to the traditional single-spin-flip mode, was proposed by Swendsen and Wang in 1987 for Potts spin models. Since then, there has been an explosion of work attempting to understand, improve, or generalize it. In these so-called "cluster" algorithms, clusters of spins are regarded as one template and are updated at each step of the Monte Carlo procedure. In implementing these algorithms the cluster labeling is a major time-consuming bottleneck and is also isomorphic to the problem of computing connected components of an undirected graph seen in other application areas, such as pattern recognition. A number of cluster labeling algorithms for sequential computers have long existed. However, the dynamic irregular nature of clusters complicates the task of finding good parallel algorithms, and this is particularly true on SIMD (single-instruction-multiple-data) machines. Our design of the Hierarchical Cluster Labeling Algorithm aims at alleviating this problem by building a hierarchical structure on the problem domain and by incorporating local and nonlocal communication schemes. We present an estimate for the computational complexity of cluster labeling and prove the key features of this algorithm (such as lower computational complexity, data locality, and easy implementation) compared with the methods formerly known. In particular, this algorithm can be viewed as a generalized
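
    Because cluster labeling is isomorphic to connected-component labeling, the sequential baseline that the hierarchical algorithm improves on can be sketched with a single union-find pass; the grid representation and 4-neighbor connectivity below are assumptions for illustration.

    def find(parent, i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def label_clusters(grid):
        # grid: 2D list of 0/1 site occupations; returns a root label per site.
        parent = {}
        for r in range(len(grid)):
            for c in range(len(grid[0])):
                if not grid[r][c]:
                    continue
                parent.setdefault((r, c), (r, c))
                for nr, nc in ((r - 1, c), (r, c - 1)):   # visited neighbors
                    if nr >= 0 and nc >= 0 and grid[nr][nc]:
                        ra, rb = find(parent, (r, c)), find(parent, (nr, nc))
                        if ra != rb:
                            parent[ra] = rb               # merge clusters
        return {site: find(parent, site) for site in parent}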

  3. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.

  4. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah

    In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message-passing library and a master/slave scheme. Every processor performs the same sequential algorithm but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
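
    The per-processor work is the ordinary sequential Sobel operator on an image strip, sketched below with NumPy; in the master/slave decomposition each rank would receive a horizontal strip (with one-row overlap at the seams) and the results would be gathered back, details omitted here as assumptions about the partitioning.

    import numpy as np

    def sobel_magnitude(strip):
        # Gradient magnitude of one image strip (2D float array).
        gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
        gy_k = gx_k.T
        h, w = strip.shape
        out = np.zeros((h, w))
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                window = strip[i - 1:i + 2, j - 1:j + 2]
                out[i, j] = np.hypot((gx_k * window).sum(),
                                     (gy_k * window).sum())
        return out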

  5. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  6. [Multispectral image compression algorithms for color reproduction].

    PubMed

    Liang, Wei; Zeng, Ping; Luo, Xue-mei; Wang, Yi-feng; Xie, Kun

    2015-01-01

    In order to improve multispectral image compression efficiency and further facilitate storage and transmission for applications such as color reproduction, in which high color accuracy is desired, WF serial methods are proposed and the APWS_RA algorithm is designed. Then the WF_APWS_RA algorithm, which has the advantages of low complexity, good illuminant stability, and support for consistent color reproduction across devices, is presented. The conventional MSE-based wavelet embedded coding principle is first studied, and then a color perception distortion criterion and a visual characteristic matrix W are proposed. Meanwhile, the APWS_RA algorithm is formed by optimizing the rate allocation strategy of APWS. Finally, combining the above technologies, a new coding method named WF_APWS_RA is designed. A colorimetric error criterion is used in the algorithm, and APWS_RA is applied to the visually weighted multispectral image. In WF_APWS_RA, affinity propagation clustering is utilized to exploit the spectral correlation of the weighted image. Two-dimensional wavelet transform is then used to remove the spatial redundancy. Subsequently, an error compensation mechanism and rate pre-allocation are combined to accomplish the embedded wavelet coding. Experimental results show that at the same bit rate, compared with classical coding algorithms, the WF serial algorithms have better performance on color retention. APWS_RA preserves the least spectral error, and the WF_APWS_RA algorithm has obvious superiority in color accuracy.

  7. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
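
    The optimization target has the generic form min ||Ax - b||^2 + lambda * ||x||_1; a plain subgradient descent on it is sketched below for orientation (the paper's method is a subgradient projection scheme, which this simplified loop does not reproduce; the step size and iteration count are arbitrary assumptions).

    import numpy as np

    def l1_regularized_ls(A, b, lam=0.1, step=1e-3, iters=500):
        # Subgradient descent for ||A x - b||^2 + lam * ||x||_1.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = 2 * A.T @ (A @ x - b) + lam * np.sign(x)   # a subgradient
            x -= step * g
        return x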

  8. Variable depth recursion algorithm for leaf sequencing

    SciTech Connect

    Siochi, R. Alfredo C.

    2007-02-15

    The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve a performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of the tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms for 1400 random 15x15 test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution.

  9. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection was presented, which was applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm can achieve better performance than the existing tag SNP selection algorithms; in most cases, the proposed algorithm is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel. PMID:24212035

  10. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users), including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry, are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  11. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to a segmentation algorithm is a good choice owing to its fast computational ability. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of the already sampled regions. Then, we apply the new DE to the traditional Otsu's method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm obtains more effective and efficient results, and it also shortens the computation time of the traditional Otsu method.
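
    For orientation, a classic DE/rand/1/bin loop is sketched below; the balance strategy that distinguishes the proposed variant is not reproduced, and the cost function is a placeholder (for Otsu-style thresholding it would map a threshold vector to, e.g., negative between-class variance).

    import numpy as np

    def differential_evolution(cost, bounds, pop_size=20, F=0.5, CR=0.9,
                               generations=100, seed=None):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim = len(lo)
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        costs = np.array([cost(p) for p in pop])
        for _ in range(generations):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
                cross = rng.random(dim) < CR                # binomial crossover
                cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
                trial = np.where(cross, mutant, pop[i])
                tc = cost(trial)
                if tc <= costs[i]:                          # greedy selection
                    pop[i], costs[i] = trial, tc
        return pop[costs.argmin()]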

  12. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-08-01

    To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonability and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extracting algorithm and circuit are illustrated, utilizing controlled-swap gates. The two merits of our algorithm are: (1) it is absolutely blind, and (2) when extracting secret binary qubits, it does not need any quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.

  13. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  14. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.

  15. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded in the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method, we obtain an equation with which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (like THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.

  16. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change, while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that

  17. Design and implementation of parallel multigrid algorithms

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tuminaro, Ray S.

    1988-01-01

    Techniques for mapping multigrid algorithms to solve elliptic PDEs on hypercube parallel computers are described and demonstrated. The need for proper data mapping to minimize communication distances is stressed, and an execution-time model is developed to show how algorithm efficiency is affected by changes in the machine and algorithm parameters. Particular attention is then given to the case of coarse computational grids, which can lead to idle processors, load imbalances, and inefficient performance. It is shown that convergence can be improved by using idle processors to solve a new problem concurrently on the fine grid defined by a splitting.

  18. Quantum hyperparallel algorithm for matrix multiplication.

    PubMed

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-01-01

    Hyperentangled states, entangled states with more than one degree of freedom, are considered as promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N^2), which is better than the best known classical algorithm. In our scheme, an N dimensional vector is mapped to the state of a single source, which is separated to N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and "big data" analysis. PMID:27125586

  19. Quantum hyperparallel algorithm for matrix multiplication

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-04-01

    Hyperentangled states, entangled states with more than one degree of freedom, are considered as promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N^2), which is better than the best known classical algorithm. In our scheme, an N dimensional vector is mapped to the state of a single source, which is separated to N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and “big data” analysis.

  20. On quantum algorithms for noncommutative hidden subgroups

    SciTech Connect

    Ettinger, M.; Hoeyer, P.

    1998-12-01

    Quantum algorithms for factoring and discrete logarithm have previously been generalized to finding hidden subgroups of finite Abelian groups. This paper explores the possibility of extending this general viewpoint to finding hidden subgroups of noncommutative groups. The authors present a quantum algorithm for the special case of dihedral groups which determines the hidden subgroup in a linear number of calls to the input function. They also explore the difficulties of developing an algorithm to process the data to explicitly calculate a generating set for the subgroup. A general framework for the noncommutative hidden subgroup problem is discussed and they indicate future research directions.

  1. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  2. Protein Structure Prediction with Evolutionary Algorithms

    SciTech Connect

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.; Smith, J.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluated the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.

  3. Quantum algorithms for quantum field theories.

    PubMed

    Jordan, Stephen P; Lee, Keith S M; Preskill, John

    2012-06-01

    Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ(4) theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm. PMID:22654052

  4. Algorithms for optimal dyadic decision trees

    SciTech Connect

    Hush, Don; Porter, Reid

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  5. Some multigrid algorithms for SIMD machines

    SciTech Connect

    Dendy, J.E. Jr.

    1996-12-31

    Previously a semicoarsening multigrid algorithm suitable for use on SIMD architectures was investigated. Through the use of new software tools, the performance of this algorithm has been considerably improved. The method has also been extended to three space dimensions. The method performs well for strongly anisotropic problems and for problems with coefficients jumping by orders of magnitude across internal interfaces. The parallel efficiency of this method is analyzed, and its actual performance on the CM-5 is compared with its performance on the CRAY-YMP. A standard coarsening multigrid algorithm is also considered, and we compare its performance on these two platforms as well.

  6. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with conventional integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  7. Algorithms for computing the multivariable stability margin

    NASA Technical Reports Server (NTRS)

    Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.

    1989-01-01

    Stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms based on non-differentiable optimization theory. These algorithms have been developed for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.

  8. System engineering approach to GPM retrieval algorithms

    SciTech Connect

    Rose, C. R.; Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the N0 and D0

  9. A novel resistance iterative algorithm for CCOS

    NASA Astrophysics Data System (ADS)

    Zheng, Ligong; Zhang, Xuejun

    2006-08-01

    CCOS (Computer Controlled Optical Surfacing) technology is widely used for making aspheric mirrors. For most manufacturers, a dwell time algorithm is usually employed to determine the route and dwell time of the small tools needed to converge the errors. In this article, a novel damped iterative algorithm is proposed: we choose revolutions of the small tool instead of dwell time to determine the fabrication strategy, and solve for these revolutions using the resistance iterative algorithm. Several mirrors have been manufactured by this method, and all of them have fulfilled the demands of the designers; a 1 m aspheric mirror was finished within 3 months.

  10. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  11. Quantum algorithms for quantum field theories.

    PubMed

    Jordan, Stephen P; Lee, Keith S M; Preskill, John

    2012-06-01

    Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ(4) theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm.

  12. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-02-28

    We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  13. Data-parallel algorithms for image computing

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.

    1990-11-01

    Data-parallel algorithms for image computing on the Connection Machine are described. After a brief review of some basic programming concepts in *Lisp, a parallel extension of Common Lisp, data-parallel programming paradigms based on a local (diffusion-like) model of computation, the scan model of computation, a general interprocessor communications model, and a region-based model are introduced. Algorithms for connected component labeling, distance transformation, Voronoi diagrams, finding minimum cost paths, local means, shape-from-shading, hidden surface calculations, affine transformation, oblique parallel projection, and spatial operations over regions are presented. A new algorithm for interpolating irregularly spaced data via Voronoi diagrams is also described.

  14. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-08-30

    We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  15. Finite pure integer programming algorithms employing only hyperspherically deduced cuts

    NASA Technical Reports Server (NTRS)

    Young, R. D.

    1971-01-01

    Three algorithms are developed that may be based exclusively on hyperspherically deduced cuts. The algorithms only apply, therefore, to problems structured so that these cuts are valid. The algorithms are shown to be finite.

  16. ANALYZING ENVIRONMENTAL IMPACTS WITH THE WAR ALGORITHM: REVIEW AND UPDATE

    EPA Science Inventory

    This presentation will review uses of the WAR algorithm and current developments and possible future directions. The WAR algorithm is a methodology for analyzing potential environmental impacts of 1600+ chemicals used in the chemical processing and other industries. The algorithm...

  17. Comprehensive dosimetric planning comparison for early-stage, non-small cell lung cancer with SABR: fixed-beam IMRT versus VMAT versus TomoTherapy.

    PubMed

    Xhaferllari, Ilma; El-Sherif, Omar; Gaede, Stewart

    2016-09-08

    Volumetric-modulated arc therapy (VMAT) is emerging as a leading technology in treating early-stage, non-small cell lung cancer (NSCLC) with stereotactic ablative radiotherapy (SABR). However, two other modalities capable of delivering intensity-modulated radiation therapy (IMRT) include fixed-beam and helical TomoTherapy (HT). This study aims to provide an extensive dosimetric comparison among these various IMRT techniques for treating early-stage NSCLC with SABR. Ten early-stage NSCLC patients were retrospectively optimized using three fixed-beam techniques via nine to eleven beams (high and low modulation step-and-shoot (SS), and sliding window (SW)), two VMAT techniques via two partial arcs (SmartArc (SA) and RapidArc (RA)), and three HT techniques via three different fan beam widths (1 cm, 2.5 cm, and 5 cm), for 80 plans total. Fixed-beam and VMAT plans were generated using flattening filter-free beams. SS and SA, HT treatment plans, and SW and RA were optimized using Pinnacle v9.1, Tomoplan v.3.1.1, and Eclipse (Acuros XB v11.3 algorithm), respectively. Dose-volume histogram statistics, dose conformality, and treatment delivery efficiency were analyzed. VMAT treatment plans achieved significantly lower values for contralateral lung V5Gy (p ≤ 0.05) compared to the HT plans, and significantly lower mean lung dose (p < 0.006) compared to HT 5 cm treatment plans. In the comparison between the VMAT techniques, a significant reduction in the total monitor units (p = 0.05) was found in the SA plans, while a significant decrease was observed in the dose falloff parameter, D2cm (p = 0.05), for the RA treatments. The maximum cord dose was significantly reduced (p = 0.017) in grouped RA&SA plans compared to SS. Estimated treatment time was significantly higher for HT and fixed-beam plans compared to RA&SA (p < 0.001), although a significant difference was not observed between RA and SA (p = 0.393). RA&SA outperformed HT in all parameters measured. Despite an

  18. Comprehensive dosimetric planning comparison for early-stage, non-small cell lung cancer with SABR: fixed-beam IMRT versus VMAT versus TomoTherapy.

    PubMed

    Xhaferllari, Ilma; El-Sherif, Omar; Gaede, Stewart

    2016-01-01

    Volumetric-modulated arc therapy (VMAT) is emerging as a leading technology in treating early-stage, non-small cell lung cancer (NSCLC) with stereotactic ablative radiotherapy (SABR). However, two other modalities capable of delivering intensity-modulated radiation therapy (IMRT) include fixed-beam and helical TomoTherapy (HT). This study aims to provide an extensive dosimetric comparison among these various IMRT techniques for treating early-stage NSCLC with SABR. Ten early-stage NSCLC patients were retrospectively optimized using three fixed-beam techniques via nine to eleven beams (high and low modulation step-and-shoot (SS), and sliding window (SW)), two VMAT techniques via two partial arcs (SmartArc (SA) and RapidArc (RA)), and three HT techniques via three different fan beam widths (1 cm, 2.5 cm, and 5 cm), for 80 plans total. Fixed-beam and VMAT plans were generated using flattening filter-free beams. SS and SA, HT treatment plans, and SW and RA were optimized using Pinnacle v9.1, Tomoplan v.3.1.1, and Eclipse (Acuros XB v11.3 algorithm), respectively. Dose-volume histogram statistics, dose conformality, and treatment delivery efficiency were analyzed. VMAT treatment plans achieved significantly lower values for contralateral lung V5Gy (p ≤ 0.05) compared to the HT plans, and significantly lower mean lung dose (p < 0.006) compared to HT 5 cm treatment plans. In the comparison between the VMAT techniques, a significant reduction in the total monitor units (p = 0.05) was found in the SA plans, while a significant decrease was observed in the dose falloff parameter, D2cm (p = 0.05), for the RA treatments. The maximum cord dose was significantly reduced (p = 0.017) in grouped RA&SA plans compared to SS. Estimated treatment time was significantly higher for HT and fixed-beam plans compared to RA&SA (p < 0.001), although a significant difference was not observed between RA and SA (p = 0.393). RA&SA outperformed HT in all parameters measured. Despite an

  19. A segmentation algorithm for noisy images

    SciTech Connect

    Xu, Y.; Olman, V.; Uberbacher, E.C.

    1996-12-31

    This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into subtrees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized, under the constraints that each subtree has at least a specified number of pixels and that two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.

  20. Genetic algorithms at UC Davis/LLNL

    SciTech Connect

    Vemuri, V.R.

    1993-12-31

    A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.

  1. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  2. Modeling algorithm execution time on processor arrays

    NASA Technical Reports Server (NTRS)

    Adams, L. M.; Crockett, T. W.

    1984-01-01

    An approach to modelling the execution time of algorithms on parallel arrays is presented. This time is expressed as a function of the number of processors and system parameters. The resulting model has been applied to a parallel implementation of the conjugate-gradient algorithm on NASA's FEM. Results of experiments performed to compare the model predictions against actual behavior show that the floating-point arithmetic, communication, and synchronization components of the parallel algorithm execution time were correctly modelled. The results also show that the overhead caused by the interaction of the system software and the actual parallel hardware must be reflected in the model parameters. The model has been used to predict the performance of the conjugate gradient algorithm on a given problem as the number of processors and machine characteristics varied.

  3. Five-dimensional Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold; Heurtier, Lucien

    2015-08-01

    The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Generalizations to higher dimensions have already been found for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five dimensions with two angular momenta, using the prescription of Giampieri, through two specific examples: the Myers-Perry and BMPV black holes. We also discuss possible extensions of our prescription to other dimensions and to the maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless this general algorithm provides a unification of the formulation of the Janis-Newman algorithm in d=3,4,5, from which several examples are exposed, including the BTZ black hole.

  4. Adaptive computation algorithm for RBF neural network.

    PubMed

    Han, Hong-Gui; Qiao, Jun-Fei

    2012-02-01

    A novel learning algorithm is proposed for nonlinear modelling and identification using radial basis function neural networks. The proposed method simplifies neural network training through the use of an adaptive computation algorithm (ACA). In addition, the convergence of the ACA is analyzed by the Lyapunov criterion. The proposed algorithm offers two important advantages. First, the model performance can be significantly improved through the ACA, and the modelling error is uniformly ultimately bounded. Secondly, the proposed ACA can reduce computational cost and accelerate the training speed. The proposed method is then employed to model a classical nonlinear system with a limit cycle and to identify a nonlinear dynamic system. Computational complexity analysis and simulation results demonstrate its effectiveness.

  5. Alignment algorithms for planar optical waveguides

    NASA Astrophysics Data System (ADS)

    Zheng, Yu; Duan, Ji-an

    2012-10-01

    Planar optical waveguides are the key elements in a modern, high-speed optical network. An important problem facing the optical fiber communication system is optical-axis alignment and coupling between waveguide chips and transmission fibers. The advantages and disadvantages of the various algorithms used for the optical-axis alignment, namely, hill-climbing, pattern search, and genetic algorithm are analyzed. A new optical-axis alignment for planar optical waveguides is presented which is a composite of a genetic algorithm and a pattern search algorithm. Experiments have proved the proposed alignment's feasibility; compared with hill climbing, the search process can reduce the number of movements by 88% and reduce the search time by 83%. Moreover, the search success rate in the experiment can reach 100%.

  6. The Algorithms of Euclid and Jacobi

    ERIC Educational Resources Information Center

    Johnson, R. W.; Waterman, M. S.

    1976-01-01

    In a thesis written for the Doctor of Arts in Mathematics, the connection between Euclid's algorithm and continued fractions is developed and extended to n dimensions. Applications to computer sciences are noted. (SD)
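
    The connection is direct: the quotients Euclid's algorithm produces while computing gcd(p, q) are exactly the partial quotients of the continued fraction for p/q, as this short sketch illustrates.

    def continued_fraction(p, q):
        # Run Euclid's algorithm on (p, q), recording each quotient.
        coeffs = []
        while q:
            coeffs.append(p // q)
            p, q = q, p % q
        return coeffs   # p now holds the gcd of the original inputs

    # 355/113 = 3 + 1/(7 + 1/16)
    assert continued_fraction(355, 113) == [3, 7, 16]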

  7. Quality control algorithms for rainfall measurements

    NASA Astrophysics Data System (ADS)

    Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs

    2005-09-01

    One of the basic requirements for a scientific use of rain data from raingauges, ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.) if the achievable data quality could be improved. This depends on the data quality delivered by the measuring devices and on the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review and literature pool have been produced. The diverse algorithms have been evaluated against VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool has been established, where the software is collected. A large part of the work presented here was implemented within the scope of the EU project VOLTAIRE (Validation of multisensor precipitation fields and numerical modeling in Mediterranean test sites).

  8. Advanced Imaging Algorithms for Radiation Imaging Systems

    SciTech Connect

    Marleau, Peter

    2015-10-01

    The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
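    For context, a generic MLEM reconstruction loop (a standard textbook form, not necessarily the project's exact variant) can be sketched as follows; the random system matrix is a hypothetical stand-in for a measured detector response.

```python
import numpy as np

# Generic MLEM iteration: x <- x * A^T(y / Ax) / A^T 1.
# The system matrix A here is a random stand-in, not a real imager response.

def mlem(A, y, n_iter=500):
    x = np.ones(A.shape[1])                    # flat initial image
    sensitivity = A.T @ np.ones(A.shape[0])    # A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / predicted counts
        x *= (A.T @ ratio) / sensitivity       # multiplicative update
    return x

rng = np.random.default_rng(0)
A = rng.random((64, 16))                       # hypothetical detector response
x_true = rng.random(16)
x_hat = mlem(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))          # error shrinks with iterations
```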

  9. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine which performance differences are significant. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655

  10. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.

  11. Genetic algorithms and supernovae type Ia analysis

    SciTech Connect

    Bogdanos, Charalampos; Nesseris, Savvas E-mail: nesseris@nbi.dk

    2009-05-15

    We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we give a brief introduction to genetic algorithms along with some simple examples to illustrate their advantages, and finally we apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model.

  12. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine which performance differences are significant. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655

  13. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.

  14. Rigorous estimates for the relegation algorithm

    NASA Astrophysics Data System (ADS)

    Sansottera, Marco; Ceccaroni, Marta

    2016-07-01

    We revisit the relegation algorithm by Deprit et al. (Celest. Mech. Dyn. Astron. 79:157-182, 2001) in the light of the rigorous Nekhoroshev-like theory. This relatively recent algorithm is nowadays widely used for implementing closed-form analytic perturbation theories, as it generalises the classical Birkhoff normalisation algorithm. The algorithm, here briefly explained by means of Lie transformations, has so far been introduced and used in a formal way, i.e. without providing any rigorous convergence or asymptotic estimates. The overall aim of this paper is to find such quantitative estimates and to show how the results about stability over exponentially long times can be recovered in a simple and effective way, at least in the non-resonant case.

  15. Non-Manhattan layout extraction algorithm

    NASA Astrophysics Data System (ADS)

    Satkhozhina, Aziza; Ahmadullin, Ildus; Allebach, Jan P.; Lin, Qian; Liu, Jerry; Tretter, Daniel; O'Brien-Strain, Eamonn; Hunter, Andrew

    2013-03-01

    Automated publishing requires large databases containing document page layout templates. The number of layout templates that need to be created and stored grows exponentially with the complexity of the document layouts. A better approach for automated publishing is to reuse layout templates of existing documents for the generation of new documents. In this paper, we present an algorithm for template extraction from a document page image. We use the cost-optimized segmentation algorithm (COS) to segment the image, and Voronoi decomposition to cluster the text regions. Then, we create a block image where each block represents a homogeneous region of the document page. We construct a geometrical tree that describes the hierarchical structure of the document page. We also implement a font recognition algorithm to analyze the font of each text region. We present a detailed description of the algorithm and our preliminary results.

  16. Optimal configuration algorithm of a satellite transponder

    NASA Astrophysics Data System (ADS)

    Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.

    2016-04-01

    This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is represented as a weighted oriented graph, implemented as a plexus in the program view. The paper presents an example application of the algorithm to a typical transparent repeater scheme. In addition, the complexity of the algorithm has been calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment, and input-output ports ranked in accordance with their priority. All described limitations significantly reduce the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.

  17. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. a row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm achieves a compression ratio of nearly 87%, which exceeds published results, including JBIG2.

  18. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.

  19. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  20. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, in both single- and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  1. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, in both single- and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  2. A New Pivot Algorithm for Star Identification

    NASA Astrophysics Data System (ADS)

    Nah, Jakyoung; Yi, Yu; Kim, Yong Ha

    2014-09-01

    In this study, a star identification algorithm which utilizes pivot patterns instead of apparent magnitude information was developed. The new star identification algorithm consists of a two-step recognition process. In the first step, the brightest star in a sensor image is identified using the orientation of brightness between two stars as recognition information. In the second step, cell indexes derived from the already-identified brightest star are used as new recognition information to identify dimmer stars. Using the cell index information, the search can be restricted to a limited portion of the star catalogue database, which enables faster identification of dimmer stars. The new pivot algorithm does not require calibration of the apparent magnitude of a star, and it is robust to apparent-magnitude errors compared with conventional pivot algorithms, which require apparent magnitude information.

  3. Hesitant fuzzy agglomerative hierarchical clustering algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolu; Xu, Zeshui

    2015-02-01

    Recently, hesitant fuzzy sets (HFSs) have been studied by many researchers as a powerful tool to describe and deal with uncertain data, but relatively few studies focus on the clustering analysis of HFSs. In this paper, we propose a novel hesitant fuzzy agglomerative hierarchical clustering algorithm for HFSs. The algorithm considers each of the given HFSs as a unique cluster in the first stage, and then compares each pair of the HFSs by utilising the weighted Hamming distance or the weighted Euclidean distance. The two clusters with the smallest distance are merged. The procedure is repeated until the desired number of clusters is achieved. Moreover, we extend the algorithm to cluster interval-valued hesitant fuzzy sets, and finally illustrate the effectiveness of our clustering algorithms by experimental results.
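    A minimal sketch of the agglomerative scheme follows, with each hesitant fuzzy set simplified (for illustration only) to a fixed-length vector of membership grades, and clusters compared by average weighted Hamming distance; the paper's full HFS machinery is not reproduced.

```python
import numpy as np

# Agglomerative clustering sketch: start with singleton clusters, then
# repeatedly merge the pair with the smallest average weighted Hamming
# distance until the desired number of clusters remains.

def weighted_hamming(a, b, w):
    return float(np.sum(w * np.abs(a - b)))

def agglomerate(hfss, w, n_clusters):
    clusters = [[i] for i in range(len(hfss))]   # every set starts alone
    while len(clusters) > n_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: np.mean([weighted_hamming(hfss[a], hfss[b], w)
                                    for a in clusters[ij[0]] for b in clusters[ij[1]]]),
        )
        clusters[i] += clusters.pop(j)           # merge the closest pair
    return clusters

hfss = [np.array(v) for v in ([.1, .2], [.15, .25], [.8, .9], [.85, .95])]
print(agglomerate(hfss, w=np.array([.5, .5]), n_clusters=2))   # [[0, 1], [2, 3]]
```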

  4. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we present an experimental study that indicates genetic algorithms are well suited to the vehicle routing problem.

  5. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration implements an adaptive step-size adjustment strategy, enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
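    A much-simplified sketch of the idea is given below: Gaussian steps stand in for Lévy flights, and the decaying 1/(1+t) schedule is a hypothetical stand-in for the paper's actual adjustment rule.

```python
import numpy as np

# Simplified cuckoo search with an adaptive (decaying) step size.
# The schedule and step distribution are illustrative assumptions,
# not the published algorithm.

def cuckoo_search(f, dim, n_nests=15, n_iter=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for t in range(n_iter):
        alpha = 1.0 / (1 + t)                    # adaptive step size
        for i in range(n_nests):
            trial = nests[i] + alpha * rng.standard_normal(dim)
            f_trial = f(trial)
            if f_trial < fitness[i]:             # greedy replacement
                nests[i], fitness[i] = trial, f_trial
        n_drop = max(1, int(pa * n_nests))       # abandon the worst nests
        worst = np.argsort(fitness)[-n_drop:]
        nests[worst] = rng.uniform(-5, 5, (n_drop, dim))
        fitness[worst] = [f(x) for x in nests[worst]]
    best = int(np.argmin(fitness))
    return nests[best], fitness[best]

x_best, f_best = cuckoo_search(lambda x: float(np.sum(x**2)), dim=5)
print(f_best)   # near 0 on the sphere benchmark
```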

  6. Introduction to systolic algorithms and architectures

    SciTech Connect

    Bentley, J.L.; Kung, H.T.

    1983-01-01

    The authors survey the class of systolic special-purpose computer architectures and algorithms, which are particularly well-suited for implementation in very large scale integrated circuitry (VLSI). They give a brief introduction to systolic arrays for a reader with a broad technical background and some experience in using a computer, but who is not necessarily a computer scientist. In addition they briefly survey the technological advances in VLSI that led to the development of systolic algorithms and architectures. 38 references.

  7. Adaptive sensor fusion using genetic algorithms

    SciTech Connect

    Fitzgerald, D.S.; Adams, D.G.

    1994-08-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive "fuzzy" sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion.

  8. Facial Composite System Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zahradníková, Barbora; Duchovičová, Soňa; Schreiber, Peter

    2014-12-01

    The article deals with genetic algorithms and their application in face identification. The purpose of the research is to develop a free and open-source facial composite system using evolutionary algorithms, primarily processes of selection and breeding. Initial testing demonstrated higher quality of the final composites and a substantial reduction in composite processing time. System requirements were specified and a future research orientation was proposed in order to improve the results.

  9. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape, while Perlin noise and Simplex noise simulate moisture and temperature; the smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence they can assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves but are also capable of supporting simulated weather patterns.
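    A minimal diamond-square generator illustrates the seeding idea: the corner values control the coarse shape, and halving the noise amplitude at each subdivision produces the fractal roughness described above. Parameter values are illustrative.

```python
import numpy as np

# Minimal diamond-square height-field generator.

def diamond_square(n, roughness=1.0, seed=0):
    size = 2 ** n + 1
    rng = np.random.default_rng(seed)
    h = np.zeros((size, size))
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seed corners
    step, amp = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: each square's centre gets the corner average + noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half, x - half] + h[y - half, x + half] +
                       h[y + half, x - half] + h[y + half, x + half]) / 4
                h[y, x] = avg + rng.uniform(-amp, amp)
        # Square step: edge midpoints get the average of in-bounds neighbours.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                s, c = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < size and 0 <= xx < size:
                        s += h[yy, xx]; c += 1
                h[y, x] = s / c + rng.uniform(-amp, amp)
        step, amp = half, amp / 2    # finer grid, smaller noise
    return h

print(diamond_square(5).shape)   # (33, 33) height field
```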

  10. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
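    For reference, the serial Thomas algorithm that these pipelined variants decompose is a forward elimination sweep followed by back substitution on a tridiagonal system; a standard implementation is sketched below.

```python
import numpy as np

# Serial Thomas algorithm for a tridiagonal system with sub-diagonal a,
# main diagonal b, super-diagonal c, and right-hand side d.

def thomas(a, b, c, d):
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                   # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):          # back substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-like system as a smoke test.
n = 6
a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
print(thomas(a, b, c, np.ones(n)))
```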

  11. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration implements an adaptive step-size adjustment strategy, enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases.

  12. Grover's algorithm and the secant varieties

    NASA Astrophysics Data System (ADS)

    Holweck, Frédéric; Jaffali, Hamza; Nounouh, Ismaël

    2016-09-01

    In this paper we investigate the entanglement nature of quantum states generated by Grover's search algorithm by means of algebraic geometry. More precisely, we establish a link between entanglement of states generated by the algorithm and auxiliary algebraic varieties built from the set of separable states. This new perspective enables us to propose qualitative interpretations of earlier numerical results obtained by M. Rossi et al. We also illustrate our purpose with a couple of examples investigated in detail.

  13. Petaflops Computing: The Key Algorithmic Challenges

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The prospect of petaflops-class computers brings to the fore some important algorithmic issues that have been considered in the high performance computing community for several years. Key among them are (1) concurrency (whether the fundamental concurrency of an algorithm is sufficient to keep thousands of processors productively busy); (2) data locality; (3) latency tolerance; and (4) memory and operation count scaling. This introductory presentation will give an overview of these issues.

  14. Spectral Representations of Uncertainty: Algorithms and Applications

    SciTech Connect

    George Em Karniadakis

    2005-04-24

    The objectives of this project were to: (1) develop a general algorithmic framework for stochastic ordinary and partial differential equations; (2) set the polynomial chaos method and its generalization on firm theoretical ground; and (3) quantify uncertainty in large-scale simulations involving CFD, MHD and microflows. The overall goal of this project was to provide DOE with an algorithmic capability that is more accurate and three to five orders of magnitude more efficient than Monte Carlo simulation.

  15. Exponential integration algorithms applied to viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicit schemes) give outstanding results, even for very large time steps.
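    The flavor of such integrators can be shown with exponential Euler (a simple linear exponential integrator, not necessarily one of the four studied) on a stiff linear test problem, where the matrix exponential treats the stiff linear part exactly and large steps remain stable.

```python
import numpy as np
from scipy.linalg import expm

# Exponential Euler for y' = A y + b:
#   y_{n+1} = y_n + h * phi1(hA) * (A y_n + b),  phi1(z) = (e^z - 1)/z.
# For a linear problem this step is exact for any h.

def exponential_euler(A, b, y0, h, n_steps):
    E = expm(h * A)                                       # e^{hA}, reused each step
    phi1 = np.linalg.solve(h * A, E - np.eye(len(y0)))    # (hA)^{-1}(e^{hA} - I)
    y = y0.copy()
    for _ in range(n_steps):
        y = y + h * phi1 @ (A @ y + b)
    return y

A = np.array([[-1000.0, 1.0], [0.0, -0.5]])   # widely separated timescales
b = np.array([1.0, 0.5])
y = exponential_euler(A, b, y0=np.ones(2), h=0.1, n_steps=100)
print(y, -np.linalg.solve(A, b))              # approaches the steady state -A^{-1} b
```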

  16. Navigation Algorithms for Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Huxel, Paul J.; Bishop, Robert H.

    2004-01-01

    The objective of the investigations is to develop navigation algorithms to support formation flying missions. In particular, we examine the advantages and concerns associated with the use of combinations of inertial and relative measurements, as well as address observability issues. In our analysis we consider the interaction between measurement types, update frequencies, and trajectory geometry and their cumulative impact on observability. Furthermore, we investigate how relative measurements affect inertial navigation in terms of algorithm performance.

  17. Algorithm for in-flight gyroscope calibration

    NASA Technical Reports Server (NTRS)

    Davenport, P. B.; Welter, G. L.

    1988-01-01

    An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.

  18. Simulated annealing algorithm for optimal capital growth

    NASA Astrophysics Data System (ADS)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework is developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
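    An illustrative sketch of the approach on synthetic return scenarios (not the paper's data or exact annealing schedule): simulated annealing searches the portfolio simplex for weights that maximize the expected log growth rate.

```python
import numpy as np

# Simulated annealing over portfolio weights, maximizing E[log(w . r)]
# on synthetic gross-return scenarios. Schedule and proposal are
# illustrative assumptions.

def expected_log_growth(w, returns):
    return float(np.mean(np.log(returns @ w)))

def anneal(returns, n_iter=20000, t0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = returns.shape[1]
    w = np.full(n, 1.0 / n)                       # start from equal weights
    f = expected_log_growth(w, returns)
    best_w, best_f = w, f
    for k in range(n_iter):
        temp = t0 * (1 - k / n_iter)              # linear cooling schedule
        cand = np.abs(w + rng.normal(0, 0.05, n))
        cand /= cand.sum()                        # project back onto the simplex
        fc = expected_log_growth(cand, returns)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if fc > f or rng.random() < np.exp((fc - f) / max(temp, 1e-9)):
            w, f = cand, fc
            if f > best_f:
                best_w, best_f = w, f
    return best_w, best_f

rng = np.random.default_rng(1)
scenarios = 1 + rng.normal(0.01, 0.05, (500, 4))  # gross returns, 4 assets
print(anneal(scenarios))
```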

  19. Self-organization and clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1991-01-01

    Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.

  20. Supercomputers and biological sequence comparison algorithms.

    PubMed

    Core, N G; Edmiston, E W; Saltz, J H; Smith, R M

    1989-12-01

    Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.

  1. Large space structures control algorithm characterization

    NASA Technical Reports Server (NTRS)

    Fogel, E.

    1983-01-01

    Feedback control algorithms are developed for sensor/actuator pairs on large space systems. These algorithms have been sized in terms of (1) floating point operation (FLOP) demands; (2) storage for variables; and (3) input/output data flow. FLOP sizing (per control cycle) was done as a function of the number of control states and the number of sensor/actuator pairs. Storage for variables and I/O sizing was done for specific structure examples.

  2. On mesh rezoning algorithms for parallel platforms

    SciTech Connect

    Plaskacz, E.J.

    1995-07-01

    A mesh rezoning algorithm for finite element simulations in a parallel-distributed environment is described. The cornerstones of the algorithm are: the parallel computation of distortion norms on the element and subdomain level, the exchange of the individual subdomain norms to form a subdomain distortion vector, the classification of subdomains and the rezoning behavior prescribed within each subdomain as a response to its own classification and the classification of neighboring subdomains.

  3. Intelligent perturbation algorithms to space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    The limited availability and high cost of crew time and scarce resources make optimization of space operations critical. Advances in computer technology coupled with new iterative search techniques permit the near optimization of complex scheduling problems that were previously considered computationally intractable. Described here is a class of search techniques called Intelligent Perturbation Algorithms. Several scheduling systems which use these algorithms to optimize the scheduling of space crew, payload, and resource operations are also discussed.

  4. A hierarchical exact accelerated stochastic simulation algorithm

    PubMed Central

    Orendorff, David; Mjolsness, Eric

    2012-01-01

    A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214
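    For orientation, the baseline stochastic simulation algorithm (Gillespie's direct method) that ER-leap and HiER-leap accelerate is sketched below on a toy birth-death process; the reaction rates are illustrative.

```python
import numpy as np

# Gillespie's direct method on a birth-death process:
# birth at constant rate, death proportional to the population.

def ssa_birth_death(k_birth=5.0, k_death=0.1, x0=0, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x, history = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        props = np.array([k_birth, k_death * x])   # reaction propensities
        total = props.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)          # waiting time to next reaction
        if rng.random() < props[0] / total:        # choose which reaction fires
            x += 1
        else:
            x -= 1
        history.append((t, x))
    return history

print(ssa_birth_death()[-1])   # state drifts toward k_birth / k_death = 50
```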

  5. New algorithms for the "minimal form" problem

    SciTech Connect

    Oliveira, J.S.; Cook, G.O. Jr.; Purtill, M.R. (Center for Communications Research)

    1991-12-20

    It is widely appreciated that large-scale algebraic computation (performing computer algebra operations on large symbolic expressions) places very significant demands upon existing computer algebra systems. Because of this, parallel versions of many important algorithms have been successfully sought, and clever techniques have been found for improving the speed of the algebraic simplification process. In addition, some attention has been given to the issue of restructuring large expressions, or transforming them into "minimal forms." By "minimal form" we mean that form of an expression that involves a minimum number of operations, in the sense that no simple transformation on the expression leads to a form involving fewer operations. Unfortunately, the progress that has been achieved to date on this very hard problem is not adequate for the very significant demands of large computer algebra problems. In response to this situation, we have developed some efficient algorithms for constructing "minimal forms." In this paper, the multi-stage algorithm in which these new algorithms operate is defined and the features of these algorithms are developed. In a companion paper, we introduce the core algebra engine of a new tool that provides the algebraic framework required for the implementation of these new algorithms.

  6. The Applications of Genetic Algorithms in Medicine.

    PubMed

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-11-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, the use of these algorithms is not well known to physicians, who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  7. TIRS stray light correction: algorithms and performance

    NASA Astrophysics Data System (ADS)

    Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki

    2015-09-01

    The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately, as non-uniform banding was evident in the corrected data, and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in the worst cases, exceeded 9 K. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed, with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly reduces stray light artifacts in the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.

  8. OpenAD : algorithm implementation user guide.

    SciTech Connect

    Utke, J.

    2004-05-13

    Research in automatic differentiation has led to a number of tools that implement various approaches and algorithms for the most important programming languages. While all these tools have the same mathematical underpinnings, the actual implementations have little in common and mostly are specialized for a particular programming language, compiler internal representation, or purpose. This specialization does not promote an open test bed for experimentation with new algorithms that arise from exploiting structural properties of numerical codes in a source transformation context. OpenAD is being designed to fill this need by providing a framework that allows for relative ease in the implementation of algorithms that operate on a representation of the numerical kernel of a program. Language independence is achieved by using an intermediate XML format and the abstraction of common compiler analyses in Open-Analysis. The intermediate format is mapped to concrete programming languages via two front/back end combinations. The design allows for reuse and combination of already implemented algorithms. We describe the set of algorithms and basic functionality currently implemented in OpenAD and explain the necessary steps to add a new algorithm to the framework.

  9. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching, which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
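    The frequency-domain half of the scheme can be sketched as plain FFT cross-correlation of a template against the map, taking the correlation peak as the horizontal position estimate; the normalization, homography transform, and 3D steps of the real system are omitted here.

```python
import numpy as np

# FFT-based cross-correlation: correlate a mean-subtracted template
# against the map in the frequency domain and take the peak location.

def fft_match(map_img, template):
    th, tw = template.shape
    padded = np.zeros_like(map_img)               # zero-pad template to map size
    padded[:th, :tw] = template - template.mean()
    corr = np.fft.ifft2(np.fft.fft2(map_img - map_img.mean()) *
                        np.conj(np.fft.fft2(padded))).real
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    return y, x

rng = np.random.default_rng(0)
map_img = rng.random((256, 256))
tpl = map_img[100:132, 60:92]                     # template cut from a known spot
print(fft_match(map_img, tpl))                    # recovers (100, 60)
```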

  10. The Applications of Genetic Algorithms in Medicine

    PubMed Central

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-01-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, the use of these algorithms is not well known to physicians, who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  11. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  12. Neural algorithms on VLSI concurrent architectures

    SciTech Connect

    Caviglia, D.D.; Bisio, G.M.; Parodi, G.

    1988-09-01

    The research concerns the study of neural algorithms for developing CAD tools with A.I. features for VLSI design activities. In this paper the focus is on optimization problems such as partitioning, placement and routing. These problems require massive computational power to be solved (NP-complete problems) and the standard approach is usually based on heuristic techniques. Neural algorithms can be represented by a circuital model. This kind of representation can be easily mapped onto a real circuit, which, however, features limited flexibility with respect to the variety of problems. In this sense the simulation of the neural circuit, by mapping it onto a digital VLSI concurrent architecture, seems to be preferable; in addition this solution offers a wider choice with regard to algorithm characteristics (e.g. transfer curve of neural elements, reconfigurability of interconnections, etc.). The implementation with programmable components, such as transputers, allows an indirect mapping of the algorithm (one transputer for N neurons) according to the dimension and the characteristics of the problem. In this way the neural algorithm described by the circuit is reduced to the algorithm that simulates the network behavior. The convergence properties of that formulation are studied with respect to the characteristics of the neural element transfer curve.

  13. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using generic control Hamiltonians H(τ) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally symmetric state of an n-qubit system, the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(τ) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  14. Algorithms for Discovery of Multiple Markov Boundaries

    PubMed Central

    Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.

    2013-01-01

    Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052

  15. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  16. TrackEye tracking algorithm characterization

    NASA Astrophysics Data System (ADS)

    Valley, Michael T.; Shields, Robert W.; Reed, Jack M.

    2004-10-01

    TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.

  17. Information filtering via weighted heat conduction algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, improves by 39.7% and 56.1% in the optimal case, respectively, and the diversity reaches 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
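    A sketch of heat conduction on a user-object bipartite network follows, with an optional edge-weight matrix in the spirit of WHC (the paper's exact weighting is not reproduced): scores diffuse from the target user's objects to users and back to objects by averaging, and unseen objects are ranked by the result.

```python
import numpy as np

# Heat conduction recommendation on an object-by-user adjacency matrix,
# with an optional edge-weight matrix of the same shape.

def heat_conduction(adj, user_idx, weights=None):
    w = adj.astype(float) if weights is None else adj * weights
    f = adj[:, user_idx].astype(float)           # heat on the user's objects
    # object -> user: each user averages over the objects it collected
    user_deg = np.maximum(w.sum(axis=0), 1e-12)
    f_user = (w * f[:, None]).sum(axis=0) / user_deg
    # user -> object: each object averages over its users
    obj_deg = np.maximum(w.sum(axis=1), 1e-12)
    f_obj = (w * f_user[None, :]).sum(axis=1) / obj_deg
    f_obj[adj[:, user_idx] > 0] = -np.inf        # hide already-collected items
    return np.argsort(-f_obj)                    # recommendation order

adj = np.array([[1, 1, 0],                       # 4 objects x 3 users
                [1, 0, 1],
                [0, 1, 0],
                [0, 0, 1]])
print(heat_conduction(adj, user_idx=0))
```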

  18. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem. PMID:15355604
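    A toy memetic algorithm in this spirit is sketched below, using max-cut as a stand-in for circuit partitioning (real partitioning adds balance constraints): a genetic algorithm over bipartition bitstrings with a greedy bit-flip local search applied to each offspring. The graph is randomly generated, not a real netlist.

```python
import numpy as np

def cut_size(part, edges):
    return sum(1 for u, v in edges if part[u] != part[v])

def local_search(part, edges):
    """Greedy single-bit flips until no flip enlarges the cut."""
    cur = cut_size(part, edges)
    improved = True
    while improved:
        improved = False
        for i in range(len(part)):
            part[i] ^= 1
            new = cut_size(part, edges)
            if new > cur:
                cur, improved = new, True
            else:
                part[i] ^= 1                      # undo the unhelpful flip
    return part

def memetic(edges, n_nodes, pop_size=20, n_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = [local_search(rng.integers(0, 2, n_nodes), edges) for _ in range(pop_size)]
    for _ in range(n_gen):
        a, b = rng.choice(pop_size, 2, replace=False)
        point = int(rng.integers(1, n_nodes))     # one-point crossover
        child = np.concatenate([pop[a][:point], pop[b][point:]])
        child[rng.integers(n_nodes)] ^= 1         # mutation
        child = local_search(child, edges)        # the "memetic" step
        worst = min(range(pop_size), key=lambda i: cut_size(pop[i], edges))
        if cut_size(child, edges) > cut_size(pop[worst], edges):
            pop[worst] = child                    # replace the worst individual
    return max(pop, key=lambda p: cut_size(p, edges))

rng = np.random.default_rng(1)
edges = [tuple(e) for e in rng.integers(0, 12, size=(30, 2)) if e[0] != e[1]]
best = memetic(edges, n_nodes=12)
print(cut_size(best, edges), "of", len(edges), "edges cut")
```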

  19. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem.

  20. Drainage Algorithm for Geospatial Knowledge

    SciTech Connect

    2006-08-15

    The Pacific Northwest National Laboratory (PNNL) has developed a prototype stream extraction algorithm that semi-automatically extracts and characterizes streams using a variety of multisensor imagery and digital terrain elevation data (DTED). The system is currently optimized for three types of single-band imagery: radar, visible, and thermal. Method of Solution: DRAGON (1) classifies pixels into clumps of water objects based on the classification of water pixels by spectral signatures and neighborhood relationships, (2) uses the morphology operations (erosion and dilation) to separate out large lakes (or embayments), isolated lakes, ponds, wide rivers and narrow rivers, and (3) translates the river objects into vector objects. In detail, the process can be broken down into the following steps.
    A. Water pixels are initially identified using the extend range and slope values (if an optional DEM file is available).
    B. Erode to the distance that defines a large water body and then dilate back. The resulting mask can be used to identify large lake and embayment objects, which are then removed from the image. Since this operation can be time-consuming, it is only performed if a simple test indicates a large water body is present (i.e. a large box containing only water pixels can be found somewhere in the image).
    C. All water pixels are 'clumped' (in Imagine terminology, clumping connects touching pixels of a common classification) and clumps which do not contain pure water pixels (e.g. dark cloud shadows) are removed.
    D. The resulting true water pixels are clumped, and water objects which are too small (e.g. ponds) or isolated lakes (i.e. isolated objects with a small compactness ratio) are removed. Note that at this point lakes have been identified as a byproduct of the filtering process and can be output as vector layers if needed.
    E. At this point only river pixels are left
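    An illustrative rendering of steps B-D with standard morphology tools is sketched below; the structuring element, radii, and size thresholds are arbitrary stand-ins, not DRAGON's actual parameters.

```python
import numpy as np
from scipy import ndimage

# Separate large water bodies from the rest of a water mask by
# erosion/dilation, then clump the remainder and drop small objects.

def separate_water(water_mask, large_radius=5, min_pixels=20):
    ball = ndimage.generate_binary_structure(2, 1)
    # Step B: erode to the "large water body" distance, then dilate back.
    large = ndimage.binary_dilation(
        ndimage.binary_erosion(water_mask, ball, iterations=large_radius),
        ball, iterations=large_radius)
    rest = water_mask & ~large
    # Steps C/D: clump remaining water pixels and remove small objects.
    labels, n = ndimage.label(rest)
    sizes = ndimage.sum(rest, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return large, keep

mask = np.zeros((64, 64), bool)
mask[10:40, 10:40] = True          # a lake
mask[50:52, 5:60] = True           # a narrow river
lake, rivers = separate_water(mask)
print(lake.sum(), rivers.sum())    # lake pixels vs surviving river pixels
```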