Science.gov

Sample records for acuros xb algorithm

  1. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-03-01

    A new algorithm, Acuros® XB Advanced Dose Calculation, has been introduced by Varian Medical Systems in the Eclipse planning system for photon dose calculation in external radiotherapy. Acuros XB is based on the solution of the linear Boltzmann transport equation (LBTE). The LBTE describes the macroscopic behaviour of radiation particles as they travel through and interact with matter. The implementation of Acuros XB in Eclipse has not previously been assessed; it is therefore necessary to perform pre-clinical validation tests to determine its accuracy. This paper summarizes the results of comparisons of Acuros XB calculations against measurements and against calculations performed with a previously validated dose calculation algorithm, the Anisotropic Analytical Algorithm (AAA). The tasks addressed in this paper are limited to the fundamental characterization of Acuros XB in water for simple geometries. Validation was carried out for four different beams: 6 and 15 MV beams from a Varian Clinac 2100 iX, and 6 and 10 MV 'flattening filter free' (FFF) beams from a TrueBeam linear accelerator. The TrueBeam FFF beams have recently been introduced into clinical practice on general-purpose linear accelerators and have not previously been reported on. Results indicate that Acuros XB accurately reproduces measured data and AAA-calculated data, with only small deviations observed for all the investigated quantities. In general, the overall degree of accuracy for Acuros XB in simple geometries can be stated to be within 1% for open beams and within 2% for mechanical wedges. The basic validation of the Acuros XB algorithm was therefore considered satisfactory for both conventional photon beams and FFF beams from new-generation linacs such as the Varian TrueBeam.
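
    For orientation, the time-independent form of the LBTE that grid-based solvers such as Acuros XB discretize can be written in generic textbook notation (this is a standard statement of the equation, not necessarily the exact coupled photon-electron system or symbols used in the paper):

      \[
      \hat{\Omega}\cdot\nabla\,\psi(\mathbf{r},E,\hat{\Omega}) + \sigma_t(\mathbf{r},E)\,\psi(\mathbf{r},E,\hat{\Omega})
      = \int_0^{\infty}\!\int_{4\pi} \sigma_s(\mathbf{r},E'\to E,\hat{\Omega}'\cdot\hat{\Omega})\,\psi(\mathbf{r},E',\hat{\Omega}')\,d\hat{\Omega}'\,dE' + q(\mathbf{r},E,\hat{\Omega}),
      \]

    where ψ is the angular fluence, σ_t and σ_s are the macroscopic total and differential scattering cross sections, and q is the external source. Acuros XB solves a discretized form of this equation deterministically rather than by the random sampling used in Monte Carlo codes.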

  2. A phantom study on the behavior of Acuros XB algorithm in flattening filter free photon beams

    PubMed Central

    Muralidhar, K. R.; Pangam, Suresh; Srinivas, P.; Athar Ali, Mirza; Priya, V. Sujana; Komanduri, Krishna

    2015-01-01

    To study the behavior of the Acuros XB algorithm for flattening filter free (FFF) photon beams in comparison with the anisotropic analytical algorithm (AAA) when applied to homogeneous and heterogeneous phantoms in conventional and RapidArc techniques. Acuros XB (Eclipse version 10.0, Varian Medical Systems, CA, USA) and AAA algorithms were used to calculate dose distributions for both 6X FFF and 10X FFF energies. RapidArc plans were created on the Catphan 504 phantom and conventional plans on a virtual homogeneous water phantom of 30 × 30 × 30 cm³, a virtual heterogeneous phantom with various inserts, and a solid water phantom with an air cavity. Doses at inserts of various densities were evaluated with both the AAA and Acuros algorithms. The maximum % variation in dose was observed in the air insert (−944 HU) and the minimum in the acrylic insert (85 HU) for both 6X FFF and 10X FFF photons. Less than 1% variation was observed between −149 HU and 282 HU for both energies. At −40 HU and 765 HU, Acuros behaved quite contrarily with 10X FFF. The maximum % variation in dose was observed at lower HU values and the minimum variation at higher HU values for both FFF energies. The global maximum dose was observed at greater depths with Acuros than with AAA for both energies. An increase in dose was observed with the Acuros algorithm at almost all densities, and a decrease at a few densities ranging from 282 to 643 HU. Field size, depth, beam energy, and material density influenced the dose difference between the two algorithms. PMID:26500400

  3. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was

  4. Effect of Acuros XB algorithm on monitor units for stereotactic body radiotherapy planning of lung cancer

    SciTech Connect

    Khan, Rao F.; Villarreal-Barajas, Eduardo; Lau, Harold; Liu, Hong-Wei

    2014-04-01

    Stereotactic body radiotherapy (SBRT) is a curative regimen that uses hypofractionated radiation-absorbed dose to achieve a high degree of local control in early-stage non–small cell lung cancer (NSCLC). In the presence of heterogeneities, the dose calculation for the lungs becomes challenging. We have evaluated the dosimetric effect of the recently introduced advanced dose-calculation algorithm, Acuros XB (AXB), for SBRT of NSCLC. A total of 97 patients with early-stage lung cancer who underwent SBRT at our cancer center during the last 4 years were included. Initial clinical plans were created in Aria Eclipse version 8.9 or prior, using 6 to 10 fields with 6-MV beams, and dose was calculated using the anisotropic analytic algorithm (AAA) as implemented in the Eclipse treatment planning system. The clinical plans were recalculated in Aria Eclipse 11.0.21 using both the AAA and AXB algorithms. Both sets of plans were normalized to the same prescription point at the center of mass of the target. A secondary monitor unit (MU) calculation was performed using the commercial program RadCalc for all of the fields. For planning target volumes ranging from 19 to 375 cm³, a comparison of MUs was performed for both sets of algorithms on a per-field and per-plan basis. In total, the variation of MUs for 677 treatment fields was investigated in terms of equivalent depth and equivalent square field size. Overall, the MUs required by AXB to deliver the prescribed dose were on average 2% higher than those of AAA. Using a 2-tailed paired t-test, the MUs from the 2 algorithms were found to be significantly different (p < 0.001). The secondary independent MU calculator RadCalc underestimates the required MUs (by 4% to 5% on average) in the lung relative to either of the 2 dose algorithms.
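
    As a minimal illustration of the statistical comparison described above (a two-tailed paired t-test on per-field monitor units), the following Python sketch uses hypothetical MU values rather than the study's data; scipy.stats.ttest_rel implements the paired test:

      # Hedged sketch: paired comparison of per-field MUs from two algorithms.
      # The MU values below are hypothetical placeholders, not data from the study.
      import numpy as np
      from scipy import stats

      mu_aaa = np.array([112.0, 98.5, 105.2, 120.7, 99.8])    # MUs per field, AAA
      mu_axb = np.array([114.1, 100.9, 107.0, 123.5, 101.6])  # MUs per field, AXB

      # Mean relative MU increase of AXB over AAA (the study reports about 2% on average)
      rel_diff = np.mean((mu_axb - mu_aaa) / mu_aaa) * 100.0

      # Two-tailed paired t-test (the study reports p < 0.001 over 677 fields)
      t_stat, p_value = stats.ttest_rel(mu_axb, mu_aaa)
      print(f"mean relative difference = {rel_diff:.2f}%, t = {t_stat:.2f}, p = {p_value:.4f}")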

  5. Dosimetric impact of Acuros XB deterministic radiation transport algorithm for heterogeneous dose calculation in lung cancer

    SciTech Connect

    Han Tao; Followill, David; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mikell, Justin; Mourtada, Firas

    2013-05-15

    Purpose: The novel deterministic radiation transport algorithm, Acuros XB (AXB), has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact between AXB and other currently used algorithms still needs to be elucidated for translation between these algorithms. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. Differences between TLD measurements and calculated doses were within 0.4%-4.4% for AXB (both Dm,m and Dw,m) and within 2.5%-6.4% for AAA. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB Dm,m, and AXB Dw,m, respectively. The differences between AXB and AAA in dose-volume histogram mean doses were within 2% in the planning target volume, lung, heart, and within 5% in the spinal cord
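
    The gamma analysis quoted here (±3%/3 mm criteria) combines a dose-difference test with a distance-to-agreement test. A simplified 1D global-gamma sketch in Python, assuming evenly spaced measured and calculated profiles, might look like the following (a brute-force illustration, not the film-analysis software used in the study):

      # Hedged sketch: brute-force 1D global gamma index (3%/3 mm), illustration only.
      import numpy as np

      def gamma_1d(dose_ref, dose_eval, positions, dose_crit=0.03, dist_crit=3.0):
          """Return the gamma value at each reference point.

          dose_ref, dose_eval : dose profiles on the same grid; positions in mm.
          dose_crit is taken relative to the global maximum of the reference profile.
          """
          d_norm = dose_crit * dose_ref.max()          # global dose criterion
          gammas = np.empty_like(dose_ref)
          for i, (x_r, d_r) in enumerate(zip(positions, dose_ref)):
              dose_term = ((dose_eval - d_r) / d_norm) ** 2
              dist_term = ((positions - x_r) / dist_crit) ** 2
              gammas[i] = np.sqrt(np.min(dose_term + dist_term))
          return gammas

      # Example: the pass rate is the fraction of points with gamma <= 1
      x = np.arange(0.0, 100.0, 1.0)                   # positions in mm
      ref = np.exp(-((x - 50.0) / 20.0) ** 2)          # toy "measured" profile
      ev = ref * 1.02                                  # toy "calculated" profile, +2% dose
      g = gamma_1d(ref, ev, x)
      print(f"gamma pass rate: {np.mean(g <= 1.0) * 100:.1f}%")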

  6. SU-E-T-313: The Accuracy of the Acuros XB Advanced Dose Calculation Algorithm for IMRT Dose Distributions in Head and Neck

    SciTech Connect

    Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K

    2014-06-01

    Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparing it with Monte Carlo (MC) calculations. The comparisons were performed with dose distributions for a virtual inhomogeneity phantom and intensity-modulated radiotherapy (IMRT) in head and neck. Methods: Recently, AXB, based on the linear Boltzmann transport equation, has been installed in the Eclipse treatment planning system (Varian Medical Systems, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction for the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for head and neck calculated with the AXB and AAA algorithms were compared with MC by means of dose-volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: For dose distributions in the virtual inhomogeneity phantom, AXB was in good agreement with MC, except for the dose in the air region. The dose in the air region was lowest for MC, followed by AXB11 and AXB10: 0.700 MeV for MC, 0.711 MeV for AXB11, and 1.011 MeV for AXB10. Since the AAA algorithm is based on dose kernels in water, the doses in the air, bone, and aluminum regions were considerably higher than those of AXB and MC. The pass rates of the gamma analysis for IMRT dose distributions in head and neck were similar to those of MC in order of AXB11

  7. SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm

    SciTech Connect

    Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M

    2014-06-01

    Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB Dose Calculation Algorithm, and subsequently to evaluate its clinical impact by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central-axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes on a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time is reduced by at least 50% by Acuros XB compared with AAA for single fields and VMAT plans. When used for open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also showed improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.

  8. Dosimetric Impact of Using the Acuros XB Algorithm for Intensity Modulated Radiation Therapy and RapidArc Planning in Nasopharyngeal Carcinomas

    SciTech Connect

    Kan, Monica W.K.; Leung, Lucullus H.T.; Yu, Peter K.N.

    2013-01-01

    Purpose: To assess the dosimetric implications for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy with RapidArc (RA) of nasopharyngeal carcinomas (NPC) due to the use of the Acuros XB (AXB) algorithm versus the anisotropic analytical algorithm (AAA). Methods and Materials: Nine-field sliding window IMRT and triple-arc RA plans produced for 12 patients with NPC using AAA were recalculated using AXB. The dose distributions to multiple planning target volumes (PTVs) with different prescribed doses and critical organs were compared. The PTVs were separated into components in bone, air, and tissue. The change of doses by AXB due to air and bone, and the variation of the amount of dose change with the number of fields, was also studied using simple geometric phantoms. Results: Using AXB instead of AAA, the averaged mean dose to PTV70 (70 Gy was prescribed to PTV70) was found to be 0.9% and 1.2% lower for IMRT and RA, respectively. It was approximately 1% lower in tissue, 2% lower in bone, and 1% higher in air. The averaged minimum dose to PTV70 in bone was approximately 4% lower for both IMRT and RA, whereas it was approximately 1.5% lower for PTV70 in tissue. The decrease in target doses estimated by AXB was mostly attributable to the presence of bone, less to tissue, and not at all to air. A similar trend was observed for PTV60 (60 Gy was prescribed to PTV60). The doses to most serial organs were found to be 1% to 3% lower and to other organs 4% to 10% lower for both techniques. Conclusions: The use of the AXB algorithm is highly recommended for IMRT and RapidArc planning for NPC cases.

  9. Experimental validation of deterministic Acuros XB algorithm for IMRT and VMAT dose calculations with the Radiological Physics Center's head and neck phantom

    SciTech Connect

    Han Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca

    2012-04-15

    Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H and N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H and N phantom in the Eclipse treatment planning system (version 10.0) by using RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and the anisotropic analytical algorithm (AAA) 10.0.24. Two dose reporting modes of AXB were recorded: dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m). Each treatment plan was delivered to the RPC phantom three times for reproducibility by using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic® EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results of AXB Dm,m (0.1% to 3.6%) were slightly better than AAA (0.2% to 4.6%) or AXB Dw,m (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB Dm,m met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4-6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H and N phantom. Compared with AAA

  10. SU-E-T-481: Dosimetric Comparison of Acuros XB and Anisotropic Analytic Algorithm with Commercial Monte Carlo Based Dose Calculation Algorithm for Stereotactic Body Radiation Therapy of Lung Cancer

    SciTech Connect

    Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D

    2014-06-01

    Purpose: To evaluate the performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: the Boltzmann transport equation based algorithm (Acuros XB, AXB), the convolution based Anisotropic Analytic Algorithm (AAA), and the Monte Carlo based algorithm (XVMC). Methods: A total of 10 patients with early stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) were compared between the algorithms. Results: The average PTV volume was 19.6 mL (range, 4.2–47.2 mL). The volumes of the PTV covered by the prescribed dose (PTV-V100) were 93.97±2.00%, 95.07±2.07% and 95.10±2.97% for the XVMC, AXB and AAA algorithms, respectively. There was no significant difference in high dose conformity index; however, XVMC predicted slightly higher values (p=0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volumes of total lung receiving dose >20 Gy (LungV20Gy) were 4.03±2.26%, 3.86±2.22% and 3.85±2.21% for the XVMC, AXB and AAA algorithms. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage compared with the AXB and XVMC algorithms in two cases. The dose difference was found to be located primarily at the periphery of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery. XVMC
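
    The coverage and conformity metrics quoted above can be computed directly from the dose grid and the PTV mask; a hedged Python sketch with made-up arrays (not the study's data, and clinically meaningless values) is shown below:

      # Hedged sketch: PTV-V100 and R50% (ratio of the 50% isodose volume to the PTV volume)
      # computed from a toy dose grid; dose and ptv_mask are hypothetical arrays.
      import numpy as np

      rng = np.random.default_rng(0)
      dose = rng.uniform(0.0, 60.0, size=(50, 50, 50))     # dose grid in Gy (toy values)
      ptv_mask = np.zeros_like(dose, dtype=bool)
      ptv_mask[20:30, 20:30, 20:30] = True                 # toy PTV region

      prescription = 54.0                                  # Gy in 3 fractions, per the abstract

      # PTV-V100: percentage of the PTV volume receiving at least the prescription dose
      v100 = np.mean(dose[ptv_mask] >= prescription) * 100.0

      # R50%: volume enclosed by the 50% prescription isodose divided by the PTV volume
      r50 = np.sum(dose >= 0.5 * prescription) / np.sum(ptv_mask)

      print(f"PTV-V100 = {v100:.1f}%, R50% = {r50:.2f}")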

  11. From AAA to Acuros XB - clinical implications of selecting either Acuros XB dose-to-water or dose-to-medium.

    PubMed

    Zifodya, Jackson M; Challens, Cameron H C; Hsieh, Wen-Long

    2016-06-01

    When implementing Acuros XB (AXB) as a substitute for the anisotropic analytic algorithm (AAA) in the Eclipse Treatment Planning System, one is faced with a dilemma of reporting either dose to medium, AXB-Dm, or dose to water, AXB-Dw. To assist with decision making on selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast and lung is presented. Ten patients, previously treated using AAA plans, were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MU) as well as non-fixed MUs. Dose volume histograms (DVH) of targets and organs at risk (OAR) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated dose to the target volume and OARs by less than 2.0%. Comparisons between AXB-Dw and AXB-Dm, for all sites, also showed overall dose differences to be small (<1.5%). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6% were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135% of prescription dose) for target volumes containing high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and lung sites. MU comparison showed insignificant differences between AXB-Dw and AAA and between AXB-Dw and AXB-Dm. However, AXB-Dm MUs relative to AAA showed an average difference of about 1.3%, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, when
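
    The homogeneity index referred to above is commonly computed from near-maximum, near-minimum, and median target doses; a small Python sketch follows, using a hypothetical array of target voxel doses and assuming the usual ICRU-83 style definition HI = (D2% − D98%)/D50%:

      # Hedged sketch: ICRU-83 style homogeneity index from per-voxel target doses.
      # target_doses is a hypothetical array, not data from the study.
      import numpy as np

      target_doses = np.random.default_rng(1).normal(loc=70.0, scale=1.5, size=10000)  # Gy

      d2 = np.percentile(target_doses, 98)    # D2%: dose covering the hottest 2% of the volume
      d98 = np.percentile(target_doses, 2)    # D98%: near-minimum dose
      d50 = np.percentile(target_doses, 50)   # D50%: median dose

      hi = (d2 - d98) / d50                   # smaller HI means a more homogeneous target dose
      print(f"HI = {hi:.3f}")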

  12. Dosimetric comparison of Acuros XB, AAA, and XVMC in stereotactic body radiotherapy for lung cancer

    SciTech Connect

    Tsuruta, Yusuke; Nakata, Manabu; Higashimura, Kyoji; Nakamura, Mitsuhiro; Matsuo, Yukinori; Monzen, Hajime; Mizowaki, Takashi; Hiraoka, Masahiro

    2014-08-15

    Purpose: To compare the dosimetric performance of Acuros XB (AXB), the anisotropic analytical algorithm (AAA), and x-ray voxel Monte Carlo (XVMC) in heterogeneous phantoms and lung stereotactic body radiotherapy (SBRT) plans. Methods: Water- and lung-equivalent phantoms were combined to evaluate the percentage depth dose and dose profile. The Novalis radiation treatment machine (BrainLab AG, Feldkirchen, Germany) with an x-ray beam energy of 6 MV was used to calculate the doses in the composite phantom at a source-to-surface distance of 100 cm with a gantry angle of 0°. Subsequently, the clinical lung SBRT plans for 26 consecutive patients were transferred from iPlan (ver. 4.1; BrainLab AG) to the Eclipse treatment planning system (ver. 11.0.3; Varian Medical Systems, Palo Alto, CA). The doses were then recalculated with AXB and AAA while maintaining the XVMC-calculated monitor units and beam arrangement. The dose-volumetric data obtained using the three different dose calculation algorithms were then compared. Results: The results from AXB and XVMC agreed with measurements within ±3.0% for the lung-equivalent phantom with a 6 × 6 cm² field size, whereas AAA values were higher than measurements in the heterogeneous zone and near the boundary, with the greatest difference being 4.1%. AXB and XVMC agreed well with measurements in terms of the profile shape at the boundary of the heterogeneous zone. For the lung SBRT plans, AXB yielded lower values than XVMC in terms of the maximum doses of the ITV and PTV; however, the differences were within ±3.0%. In addition to the dose-volumetric data, the dose distribution analysis showed that AXB yielded dose distribution calculations closer to those of XVMC than did AAA. Mean ± standard deviation of the computation time was 221.6 ± 53.1 s (range, 124–358 s), 66.1 ± 16.0 s (range, 42–94 s), and 6.7 ± 1.1 s (range, 5–9 s) for XVMC, AXB, and AAA, respectively. Conclusions: In the

  13. SU-E-T-137: Dosimetric Validation for Pinnacle, Acuros, AAA, and Brainlab Algorithms with Induced Inhomogeneities

    SciTech Connect

    Lopez, P; Tambasco, M; LaFontaine, R; Burns, L

    2014-06-01

    Purpose: To compare the dosimetric accuracy of the Eclipse 11.0 Acuros XB and Anisotropic Analytical Algorithm (AAA), Pinnacle3 9.2 Collapsed Cone Convolution, and iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms using measurement as the gold standard. Methods: Ion chamber and diode measurements were taken for 6, 10, and 18 MV beams in a phantom made up of slabs with densities corresponding to solid water, lung, and bone. The phantom was set up at a source-to-surface distance of 100 cm, and the field sizes were 3.0 × 3.0, 5.0 × 5.0, and 10.0 × 10.0 cm². Data from the planning systems were computed along the central axis of the beam. The measurements were taken using a pinpoint chamber and an edge diode for interface regions. Results: The best agreement between data from the algorithms and our measurements occurs away from the slab interfaces. For the 6 MV beam, the iPlan 4.1 MC software performs best, with a 1.7% absolute average percent difference from measurement. For the 10 MV beam, iPlan 4.1 PB performs best, with a 2.7% absolute average percent difference from measurement. For the 18 MV beam, Acuros performs best, with a 2.0% absolute average percent difference from measurement. It is interesting to note that the steepest drop in dose occurred at the lung heterogeneity-solid water interface of the 18 MV, 3.0 × 3.0 cm² field size setup. In this situation, Acuros and AAA performed best with an average percent difference within −1.1% of measurement, followed by iPlan 4.1 MC, which was within 4.9%. Conclusion: This study shows that all of the algorithms perform reasonably well in computing dose in a heterogeneous slab phantom. Moreover, Acuros and AAA perform particularly well at the lung-solid water interfaces for higher energy beams and small field sizes.

  14. Experimental verification of the Acuros XB and AAA dose calculation adjacent to heterogeneous media for IMRT and RapidArc of nasopharyngeal carcinoma

    SciTech Connect

    Kan, Monica W. K.; Leung, Lucullus H. T.; So, Ronald W. K.; Yu, Peter K. N.

    2013-03-15

    Purpose: To compare the doses calculated by the Acuros XB (AXB) algorithm and the analytical anisotropic algorithm (AAA) with experimentally measured data adjacent to and within heterogeneous media using intensity modulated radiation therapy (IMRT) and RapidArc® (RA) volumetric arc therapy plans for nasopharyngeal carcinoma (NPC). Methods: Two-dimensional dose distributions immediately adjacent to both air and bone inserts of a rectangular tissue-equivalent phantom irradiated using IMRT and RA plans for NPC cases were measured with GafChromic® EBT3 films. Doses near and within the nasopharyngeal (NP) region of an anthropomorphic phantom containing heterogeneous media were also measured with thermoluminescent dosimeters (TLD) and EBT3 films. The measured data were then compared with the data calculated by AAA and AXB. For AXB, dose calculations were performed using both dose-to-medium (AXB Dm) and dose-to-water (AXB Dw) options. Furthermore, target dose differences between AAA and AXB were analyzed for the corresponding real patients. The comparison of real patient plans was performed by stratifying the targets into components of different densities, including tissue, bone, and air. Results: For the verification of the planar dose distribution adjacent to air and bone using the rectangular phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 98.7%, 99.5%, and 97.7% on the axial plane for AAA, AXB Dm, and AXB Dw, respectively, averaged over all IMRT and RA plans, while they were 97.6%, 98.2%, and 97.7%, respectively, on the coronal plane. For the verification of the planar dose distribution within the NP region of the anthropomorphic phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 95.1%, 91.3%, and 99.0% for AAA, AXB Dm, and AXB Dw, respectively, averaged over all IMRT and RA plans. Within the NP region where

  15. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogeneous Phantom Using Acuros XB and EGSnrc

    SciTech Connect

    Soh, R; Lee, J; Harianto, F

    2014-06-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study will simulate the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: −743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with a field size of 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSXYZnrc (EGSnrc). Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc will also simulate the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate as it is able to simulate the actual phantom material compositions. AXB has a limited material library; therefore, it only approximates the compositions of the TLD, composite cork, and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.
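
    In the notation implied above, the correction factor is simply k = D(unperturbed medium) / D(perturbed, scored in the TLD volume). As a hedged worked example with made-up numbers (only the quoted factor of about 0.9 comes from the abstract): if the dose to composite cork at the TLD location without the chip present is 0.90 cGy per MU and the dose scored in the TLD chip is 1.00 cGy per MU, then k = 0.90/1.00 = 0.90, and the medium dose would be recovered as D = M × N × k, where M is the TLD signal and N its calibration coefficient (generic symbols, not the authors').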

  16. Dosimetric comparison of Acuros XB deterministic radiation transport method with Monte Carlo and model-based convolution methods in heterogeneous media

    PubMed Central

    Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas

    2011-01-01

    Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in the bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%, and in the lung region up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm² fields (over 26% passed) and in the bone region for 5 × 5 and 10

  17. XB-70A_takeoff

    NASA Video Gallery

    During the 1960s, XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 wa...

  18. XB-70A_flight

    NASA Video Gallery

    During the 1960s, XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 wa...

  19. Going the distance: validation of Acuros and AAA at an extended SSD of 400 cm.

    PubMed

    Lamichhane, Narottam; Patel, Vivek N; Studenski, Matthew T

    2016-01-01

    Accurate dose calculation and treatment delivery are essential for total body irradiation (TBI). In an effort to verify the accuracy of TBI dose calculation at our institution, we evaluated both the Varian Eclipse AAA and Acuros algorithms to predict dose distributions at an extended source-to-surface distance (SSD) of 400 cm. Measurements were compared to calculated values for a 6 MV beam in physical and virtual phantoms at 400 cm SSD using open beams for both 5 × 5 and 40 × 40 cm² field sizes. Inline and crossline profiles were acquired at equivalent depths of 5 cm, 10 cm, and 20 cm. Depth-dose curves were acquired using EBT2 film and an ion chamber for both field sizes. Finally, a RANDO phantom was used to simulate an actual TBI treatment. At this extended SSD, care must be taken when using the planning system: there is good relative agreement between measured and calculated profiles for both algorithms, but there are deviations in terms of the absolute dose. Acuros has better agreement than AAA in the penumbra region. PMID:27074473
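
    For a sense of scale (a generic back-of-the-envelope estimate, not a figure from the study): by the inverse-square law alone, moving the calculation point from a standard 100 cm SSD to 400 cm reduces the primary fluence rate by roughly (100/400)^2 = 1/16, which is why the absolute dose, and not just the relative profile shape, must be verified before trusting the planning system at this distance.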

  20. An investigation into the accuracy of Acuros(TM) BV in heterogeneous phantoms for a (192)Ir HDR source using LiF TLDs.

    PubMed

    Manning, Siobhan; Nyathi, Thulani

    2014-09-01

    The aim of this study was to evaluate the accuracy of the new Acuros(TM) BV algorithm using well-characterized LiF:Mg,Ti TLD-100 in heterogeneous phantoms. TLDs were calibrated using an (192)Ir source and the AAPM TG-43 calculated dose. The Tölli and Johansson large cavity principle and modified Bragg-Gray principle methods confirmed the dose calculated by TG-43 at a distance of 5 cm from the source to within 4%. These calibrated TLDs were used to measure the dose in heterogeneous phantoms containing air, stainless steel, bone and titanium. The TLD results were compared with the AAPM TG-43 calculated dose and the Acuros calculated dose. Previous studies by other authors have shown a change in TLD response with depth when irradiated with an (192)Ir source. This TLD depth dependence was assessed by performing measurements at different depths in a water phantom with an (192)Ir source. The variation in the TLD response with depth in a water phantom was not found to be statistically significant for the distances investigated. The TLDs agreed with Acuros(TM) BV within 1.4% in the air phantom, 3.2% in the stainless steel phantom, 3% in the bone phantom and 5.1% in the titanium phantom. The TLDs showed a larger discrepancy when compared to TG-43, with a maximum deviation of 9.3% in the air phantom, -11.1% in the stainless steel phantom, -14.6% in the bone phantom and -24.6% in the titanium phantom. The results have shown that Acuros accounts for the heterogeneities investigated with a maximum deviation of -5.1%. The uncertainty associated with the TLDs calibrated in the PMMA phantom is ±8.2% (2 SD). PMID:24866931
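
    For context, the AAPM TG-43 dose-rate formalism used as the reference here is conventionally written, in its line-source form and with generic notation (not necessarily the paper's), as

      \[
      \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta),
      \]

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, F(r,θ) the 2D anisotropy function, and the reference point is r_0 = 1 cm, θ_0 = 90°. Because TG-43 assumes an unbounded water medium, it cannot account for the air, stainless steel, bone, and titanium heterogeneities studied here, which is exactly the discrepancy that the comparison with Acuros BV quantifies.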

  1. Hunting for the Xb via radiative decays

    NASA Astrophysics Data System (ADS)

    Li, Gang; Wang, Wei

    2014-06-01

    In this paper, we study radiative decays of the Xb, the counterpart of the famous X(3872) in the bottomonium sector and a candidate meson-meson molecule, into γϒ(nS) (n = 1, 2, 3). Since it is likely that the Xb lies below the BB̄* threshold and the mass difference between the neutral and charged bottom mesons is small compared to the binding energy of the Xb, the isospin-violating decay mode Xb → ϒ(nS)π+π- would be greatly suppressed. This enhances the importance of the radiative decays. We use an effective Lagrangian based on heavy quark symmetry to explore the rescattering mechanism and calculate the partial widths. Our results show that the partial widths into γϒ(nS) are about 1 keV, and thus the branching fractions may be sizeable, considering that the total width may also be smaller than a few MeV, like that of the X(3872). These radiative decay modes are of great importance in the experimental search for the Xb, particularly at hadron colliders. An observation of the Xb would provide deeper insight into exotic hadron spectroscopy and help unravel the nature of the states connected by heavy quark symmetry.

  2. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning since the human body is highly inhomogeneous, with the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), the anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the measured dose using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region. This was followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended for treatment planning in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  3. Evaluation of six TPS algorithms in computing entrance and exit doses.

    PubMed

    Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex

    2014-01-01

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PMID:24892349

  4. XB-70A during startup and ramp taxi

    NASA Technical Reports Server (NTRS)

    1968-01-01

    The XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 was used to collect in-flight information for use in the design of future supersonic aircraft, military and civilian. This 35-second video shows the startup of the XB-70A airplane engines, the beginning of its taxi to the runway, and a turn on the ramp that shows the unique configuration of this aircraft.

  5. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    PubMed

    Alagar, Ananda Giri Babu; Kadirampatti Mani, Ganesh; Karunakaran, Kaviarasu

    2016-01-01

    Small fields smaller than 4 × 4 cm² are used in stereotactic and conformal treatments where heterogeneity is normally present. Since dose calculation accuracy often degrades in both small fields and heterogeneous media, algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements are done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons with square field sizes ranging from 1 × 1 to 4 × 4 cm². Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup being nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD) is calculated, which represents the deviation of the whole CADD curve from the measured curve. It is found that for air and lung heterogeneities, for both 6 and 15 MV, all algorithms show the maximum deviation for the 1 × 1 cm² field size, and the deviation gradually reduces as the field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm² field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is farther from Dmax. Also, all algorithms show maximum deviation in lower-density materials compared to high-density materials. PMID:26894345
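
    The %NRMSD figure of merit can be reproduced from a pair of depth-dose curves. A hedged Python sketch follows, assuming a root-mean-square deviation normalized to the maximum of the measured curve (the abstract does not spell out the exact normalization, so treat this as an illustration with toy data):

      # Hedged sketch: percentage normalized RMS deviation between a TPS-calculated
      # and a measured central-axis depth-dose curve (toy data, assumed normalization).
      import numpy as np

      depth = np.arange(0.0, 20.0, 0.5)                     # depth in cm
      measured = 100.0 * np.exp(-0.05 * depth)              # toy measured CADD (%)
      calculated = measured * (1.0 + 0.02 * np.sin(depth))  # toy TPS curve with small deviations

      nrmsd = 100.0 * np.sqrt(np.mean((calculated - measured) ** 2)) / np.max(measured)
      print(f"%NRMSD = {nrmsd:.2f}%")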

  6. Hunting for the Xb via hidden bottomonium decays

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhou, Zhu

    2015-02-01

    In this work, we study the isospin-conserving hidden-bottomonium decay Xb → ϒ(1S)ω, where the Xb is taken to be the counterpart of the famous X(3872) in the bottomonium sector, a candidate meson-meson molecule. Since it is likely that the Xb lies below the BB̄* threshold and the mass difference between the neutral and charged bottom mesons is small compared to the binding energy of the Xb, the isospin-violating decay mode Xb → ϒ(nS)π+π- would be greatly suppressed. We use an effective Lagrangian based on heavy quark symmetry to explore the rescattering mechanism of Xb → ϒ(1S)ω and calculate the partial widths. Our results show that the partial width for Xb → ϒ(1S)ω is of order tens of keV. Taking into account that the total width of the Xb may be smaller than a few MeV, like that of the X(3872), the calculated branching ratios may reach the order of 10⁻². These hidden-bottomonium decay modes are of great importance in the experimental search for the Xb, particularly at hadron colliders. Also, associated studies of the decays Xb → ϒ(nS)γ, ϒ(nS)ω, and BB̄γ may help us investigate the structure of the Xb more deeply. The experimental observation of the Xb would provide further insight into the spectroscopy of exotic states and help probe the structure of the states connected by heavy quark symmetry.
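
    As a rough order-of-magnitude check using only the numbers quoted in the abstract (the specific values below are illustrative): with a partial width of a few tens of keV and a total width below a few MeV, the branching ratio is B(Xb → ϒ(1S)ω) ≈ Γ_partial/Γ_total ≈ 30 keV / 3 MeV = 10⁻², consistent with the order 10⁻² stated above.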

  7. Truncated form of tenascin-X, XB-S, interacts with mitotic motor kinesin Eg5.

    PubMed

    Endo, Toshiya; Ariga, Hiroyoshi; Matsumoto, Ken-ichi

    2009-01-01

    XB-S is a protein corresponding to an amino-terminal-truncated form of tenascin-X (TNXB); however, the precise roles of XB-S in vivo are unknown. In this study, to determine the role of XB-S in vivo, we screened for XB-S-binding proteins. FLAG-tagged XB-S was transiently introduced into 293T cells. Its associated proteins were then purified by immunoprecipitation using an anti-FLAG antibody, and the components were identified by mass spectrometric analyses. The mitotic motor kinesin Eg5 was identified in the immunoprecipitates. XB-S and Eg5 proteins were co-localized in the cytoplasm in interphase and mitosis, but XB-S did not localize on mitotic spindle microtubules, on which Eg5 prominently localized in mitosis. As for Eg5 binding to XB-S, glutathione S-transferase-fused XB-S expressed in vitro directly bound to full-length Eg5 translated in reticulocyte lysate, and the XB-S-binding region was located in the motor domain of Eg5. Furthermore, during cell cycle progression XB-S showed an expression profile similar to that of Eg5. These results suggest a possible involvement of XB-S in the function of Eg5. PMID:18679583

  8. 77 FR 70147 - Fish and Wildlife Service 0648-XB088

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-23

    ... Notice of Availability (NOA) in the Federal Register on April 12, 2010 (75 FR 18482). The official... (75 FR 41157) extending the public comment period an additional 45 days to August 30, 2010. During the... National Oceanic and Atmospheric Administration Fish and Wildlife Service 0648-XB088 Environmental...

  9. Width of the exotic Xb(5568) state through its strong decay to Bs0π+

    NASA Astrophysics Data System (ADS)

    Agaev, S. S.; Azizi, K.; Sundu, H.

    2016-06-01

    The width of the newly observed exotic state Xb(5568) is calculated via its dominant strong decay to Bs0π+ using the QCD sum rule method on the light cone in conjunction with the soft-meson approximation. To this end, the vertex XbBsπ is studied and the strong coupling gXbBsπ is computed, employing for the Xb(5568) state the interpolating diquark-antidiquark current of the [su][b̄d̄] type. The obtained prediction for the decay width of Xb(5568) is confronted with the experimental data of the D0 Collaboration, and nice agreement is found.

  10. SU-E-T-131: Dosimetric Impact and Evaluation of Different Heterogeneity Algorithm in Volumetric Modulated Arc Therapy Plan for Stereotactic Ablative Radiotherapy Lung Treatment with the Flattening Filter Free Beam

    SciTech Connect

    Chung, J; Kim, J; Lee, J; Kim, Y

    2014-06-01

    Purpose: The present study aimed to investigate the dosimetric impact of the anisotropic analytic algorithm (AAA) and the Acuros XB (AXB) algorithm for lung stereotactic ablative radiotherapy plans using flattening filter-free (FFF) beams. Methods: We retrospectively analyzed 10 patients. The dosimetric parameters for the target and organs at risk (OARs) from the treatment plans calculated with these dose calculation algorithms were compared. Technical parameters, such as the computation times and the total monitor units (MUs), were also evaluated. Results: A comparison of DVHs from AXB and AAA showed that the AXB plans produced a higher maximum PTV dose, by 4.40% on average, with statistical significance, but a slightly lower mean PTV dose, by 5.20% on average, compared with the AAA plans. The maximum dose to the lung was slightly higher with AXB than with AAA. The values of V5, V10 and V20 for the ipsilateral lung were higher in the AXB plans than in those of AAA. However, these parameters for the contralateral lung were comparable. The differences in maximum dose for the spinal cord and heart were also small. The computation time of AXB was shorter than that of AAA, with a relative difference of 13.7%. The average number of monitor units (MUs) for all patients was higher in the AXB plans than in the AAA plans. These results indicate that the differences between AXB and AAA are large in heterogeneous regions with low density. Conclusion: AXB provided advantages such as calculation accuracy and reduced computation time in lung stereotactic ablative radiotherapy (SABR) using FFF beams, especially for VMAT planning. Therefore, in dose calculations involving media of different densities, careful attention should be paid to the impact of different heterogeneity correction algorithms. The authors report no conflicts of interest.

  11. A summary of XB-70 sonic boom signature data

    NASA Astrophysics Data System (ADS)

    Maglieri, Domenic J.; Sothcott, Victor E.; Keefer, Thomas N., Jr.

    1992-04-01

    A compilation is provided of measured sonic boom signature data derived from 39 supersonic flights (43 passes) of the XB-70 airplane over the Mach number range of 1.11 to 2.92 and an altitude range of 30500 to 70300 ft. These tables represent a convenient hard copy version of available electronic files which include over 300 digitized sonic boom signatures with their corresponding spectra. Also included in the electronic files is information regarding ground track position, aircraft operating conditions, and surface and upper air weather observations for each of the 43 supersonic passes. In addition to the sonic boom signature data, a description is also provided of the XB-70 data base that was placed on electronic files along with a description of the method used to scan and digitize the analog/oscillograph sonic boom signature time histories. Such information is intended to enhance the value and utilization of the electronic files.

  12. A summary of XB-70 sonic boom signature data

    NASA Technical Reports Server (NTRS)

    Maglieri, Domenic J.; Sothcott, Victor E.; Keefer, Thomas N., Jr.

    1992-01-01

    A compilation is provided of measured sonic boom signature data derived from 39 supersonic flights (43 passes) of the XB-70 airplane over the Mach number range of 1.11 to 2.92 and an altitude range of 30500 to 70300 ft. These tables represent a convenient hard copy version of available electronic files which include over 300 digitized sonic boom signatures with their corresponding spectra. Also included in the electronic files is information regarding ground track position, aircraft operating conditions, and surface and upper air weather observations for each of the 43 supersonic passes. In addition to the sonic boom signature data, a description is also provided of the XB-70 data base that was placed on electronic files along with a description of the method used to scan and digitize the analog/oscillograph sonic boom signature time histories. Such information is intended to enhance the value and utilization of the electronic files.

  13. Application of the QCD light cone sum rule to tetraquarks: The strong vertices XbXbρ and XcXcρ

    NASA Astrophysics Data System (ADS)

    Agaev, S. S.; Azizi, K.; Sundu, H.

    2016-06-01

    The full version of the QCD light-cone sum rule method is applied to tetraquarks containing a single heavy b or c quark. To this end, investigations of the strong vertices XbXbρ and XcXcρ are performed, where Xb = [su][b̄d̄] and Xc = [su][c̄d̄] are the exotic states built of four quarks of different flavors. The strong coupling constants GXbXbρ and GXcXcρ corresponding to these vertices are found using the ρ-meson leading- and higher-twist distribution amplitudes. In the calculations, Xb and Xc are treated as scalar bound states of a diquark and antidiquark.

  14. XB130 expression in human osteosarcoma: a clinical and experimental study.

    PubMed

    Wang, Xiaohui; Wang, Ruiguo; Liu, Zhaolong; Hao, Fengyun; Huang, Hai; Guo, Wenchen

    2015-01-01

    Identifying prognostic factors for osteosarcoma (OS) aids in the selection of patients who require more aggressive management. XB130 is a newly characterized adaptor protein that has been reported to be a prognostic factor for certain tumor types. However, the association between XB130 expression and the prognosis of OS remains unknown. In the present study, we investigated the association between XB130 expression and clinicopathologic features and prognosis in patients with OS, and further investigated its potential role in OS cells in vitro and in vivo. A retrospective immunohistochemical study of XB130 was performed on archival formalin-fixed paraffin-embedded specimens from 60 pairs of osteosarcoma and noncancerous bone tissues, and the expression of XB130 was compared with clinicopathological parameters. We then investigated the effect of XB130 silencing on invasion in vitro and lung metastasis in vivo of a human OS cell line. Immunohistochemical assays revealed that XB130 expression in OS tissues was significantly higher than that in corresponding noncancerous bone tissues (P=0.001). In addition, high XB130 expression more frequently occurred in OS tissues with advanced clinical stage (P=0.002) and positive distant metastasis (P=0.001). Moreover, OS patients with high XB130 expression had significantly shorter overall survival and disease-free survival (both P<0.001) when compared with patients with low expression of XB130. Univariate and multivariate analyses showed that high XB130 expression and distant metastasis were independent poor prognostic factors. We showed that XB130 depletion by RNA interference inhibited invasion of XB130-rich U2OS cells in vitro and lung metastasis in vivo. This is the first study to reveal that XB130 overexpression may be related to the prediction of metastasis potency and poor prognosis for OS patients, suggesting that XB130 may serve as a prognostic marker for the optimization of clinical treatments. Furthermore

  15. XB130 expression in human osteosarcoma: a clinical and experimental study

    PubMed Central

    Wang, Xiaohui; Wang, Ruiguo; Liu, Zhaolong; Hao, Fengyun; Huang, Hai; Guo, Wenchen

    2015-01-01

    Identifying prognostic factors for osteosarcoma (OS) aids in the selection of patients who require more aggressive management. XB130 is a newly characterized adaptor protein that has been reported to be a prognostic factor for certain tumor types. However, the association between XB130 expression and the prognosis of OS remains unknown. In the present study, we investigated the association between XB130 expression and clinicopathologic features and prognosis in patients with OS, and further investigated its potential role in OS cells in vitro and in vivo. A retrospective immunohistochemical study of XB130 was performed on archival formalin-fixed paraffin-embedded specimens from 60 pairs of osteosarcoma and noncancerous bone tissues, and the expression of XB130 was compared with clinicopathological parameters. We then investigated the effect of XB130 silencing on invasion in vitro and lung metastasis in vivo of a human OS cell line. Immunohistochemical assays revealed that XB130 expression in OS tissues was significantly higher than that in corresponding noncancerous bone tissues (P = 0.001). In addition, high XB130 expression more frequently occurred in OS tissues with advanced clinical stage (P = 0.002) and positive distant metastasis (P = 0.001). Moreover, OS patients with high XB130 expression had significantly shorter overall survival and disease-free survival (both P < 0.001) when compared with patients with low expression of XB130. Univariate and multivariate analyses showed that high XB130 expression and distant metastasis were independent poor prognostic factors. We showed that XB130 depletion by RNA interference inhibited invasion of XB130-rich U2OS cells in vitro and lung metastasis in vivo. This is the first study to reveal that XB130 overexpression may be related to the prediction of metastasis potency and poor prognosis for OS patients, suggesting that XB130 may serve as a prognostic marker for the optimization of clinical treatments

  16. Development of Outboard Nacelle for the XB-36 Airplane

    NASA Technical Reports Server (NTRS)

    Nuber, Robert J.

    1947-01-01

    An investigation of two 1/14-scale model configurations of an outboard nacelle for the XB-36 airplane was made in the Langley two-dimensional low-turbulence tunnels over a range of airplane lift coefficients (C_L = 0.409 to C_L = 0.943) for three representative flow conditions. The purpose of the investigation was to develop a low-drag wing-nacelle pusher combination which incorporated an internal air-flow system. The present investigation has led to the development of a nacelle which had external drag coefficients of similar order of magnitude to those obtained previously from tests of an inboard nacelle configuration at the corresponding operating lift coefficients and from approximately one-third to one-half of those of conventional tractor designs having the same ratio of wing thickness to nacelle diameter.

  17. Kinetic simulations of X-B and O-X-B mode conversion

    SciTech Connect

    Arefiev, A. V.; Du Toit, E. J.; Vann, R. G. L.; Köhn, A.; Holzhauer, E.; Shevchenko, V. F.

    2015-12-10

    We have performed fully-kinetic simulations of X-B and O-X-B mode conversion in one and two dimensional setups using the PIC code EPOCH. We have recovered the linear dispersion relation for electron Bernstein waves by employing relatively low amplitude incoming waves. The setups presented here can be used to study non-linear regimes of X-B and O-X-B mode conversion.

  18. Development of Inboard Nacelle for the XB-36 Airplane

    NASA Technical Reports Server (NTRS)

    Nuber, Robert J.

    1947-01-01

    A series of investigations of several 1/14-scale models of an inboard nacelle for the XB-36 airplane was made in the Langley two-dimensional low-turbulence tunnels. The purpose of these investigations was to develop a low-drag wing-nacelle pusher combination which incorporated an internal air-flow system. As a result of these investigations, a nacelle was developed which had external drag coefficients considerably lower than the original basic form, with the external nacelle drag approximately one-half to two-thirds of that of conventional tractor designs. The largest reductions in drag resulted from sealing the gaps between the wing flaps and nacelle, reducing the thickness of the nacelle trailing-edge lip, and bringing the under-wing air inlet to the wing leading edge. It was found that without the engine cooling fan adequate cooling air would be available for all conditions of flight except for cruise and climb at 40,000 feet. Sufficient oil cooling at an altitude of 40,000 feet may be obtained by the use of flap-type exit doors.

  19. Members of the XB3 Family from Diverse Plant Species Induce Programmed Cell Death in Nicotiana benthamiana

    PubMed Central

    Huang, Xiaoen; Liu, Xueying; Chen, Xiuhua; Snyder, Anita; Song, Wen-Yuan

    2013-01-01

    Programmed cell death has been associated with plant immunity and senescence. The receptor kinase XA21 confers resistance to bacterial blight disease of rice (Oryza sativa) caused by Xanthomonas oryzae pv. oryzae (Xoo). Here we show that the XA21 binding protein 3 (XB3) is capable of inducing cell death when overexpressed in Nicotiana benthamiana. XB3 is a RING finger-containing E3 ubiquitin ligase that has been positively implicated in XA21-mediated resistance. Mutation abolishing the XB3 E3 activity also eliminates its ability to induce cell death. Phylogenetic analysis of XB3-related sequences suggests a family of proteins (XB3 family) with members from diverse plant species. We further demonstrate that members of the XB3 family from rice, Arabidopsis and citrus all trigger a similar cell death response in Nicotiana benthamiana, suggesting an evolutionarily conserved role for these proteins in regulating programmed cell death in the plant kingdom. PMID:23717500

  20. XB130 promotes bronchioalveolar stem cell and Club cell proliferation in airway epithelial repair and regeneration

    PubMed Central

    Toba, Hiroaki; Wang, Yingchun; Bai, Xiaohui; Zamel, Ricardo; Cho, Hae-Ra; Liu, Hongmei; Lira, Alonso; Keshavjee, Shaf; Liu, Mingyao

    2015-01-01

    Proliferation of bronchioalveolar stem cells (BASCs) is essential for epithelial repair. XB130 is a novel adaptor protein involved in the regulation of epithelial cell survival, proliferation and migration through the PI3K/Akt pathway. To determine the role of XB130 in airway epithelial injury repair and regeneration, a naphthalene-induced airway epithelial injury model was used with XB130 knockout (KO) mice and their wild type (WT) littermates. In XB130 KO mice, at days 7 and 14, small airway epithelium repair was significantly delayed with fewer number of Club cells (previously called Clara cells). CCSP (Club cell secreted protein) mRNA expression was also significantly lower in KO mice at day 7. At day 5, there were significantly fewer proliferative epithelial cells in the KO group, and the number of BASCs significantly increased in WT mice but not in KO mice. At day 7, phosphorylation of Akt, GSK-3β, and the p85α subunit of PI3K was observed in airway epithelial cells in WT mice, but to a much lesser extent in KO mice. Microarray data also suggest that PI3K/Akt-related signals were regulated differently in KO and WT mice. An inhibitory mechanism for cell proliferation and cell cycle progression was suggested in KO mice. XB130 is involved in bronchioalveolar stem cell and Club cell proliferation, likely through the PI3K/Akt/GSK-3β pathway. PMID:26360608

  1. XB-70A #1 liftoff with TB-58A chase aircraft

    NASA Technical Reports Server (NTRS)

    1960-01-01

    This photo shows XB-70A #1 taking off on a research flight, escorted by a TB-58 chase plane. The TB-58 (a prototype B-58 modified as a trainer) had a dash speed of Mach 2. This allowed it to stay close to the XB-70 as it conducted its research maneuvers. When the XB-70 was flying at or near Mach 3, the slower TB-58 could often keep up with it by flying lower and cutting inside the turns in the XB-70's flight path when these occurred. The XB-70 was the world's largest experimental aircraft. It was capable of flight at speeds of three times the speed of sound (roughly 2,000 miles per hour) at altitudes of 70,000 feet. It was used to collect in-flight information for use in the design of future supersonic aircraft, military and civilian. The major objectives of the XB-70 flight research program were to study the airplane's stability and handling characteristics, to evaluate its response to atmospheric turbulence, and to determine the aerodynamic and propulsion performance. In addition there were secondary objectives to measure the noise and friction associated with airflow over the airplane and to determine the levels and extent of the engine noise during takeoff, landing, and ground operations. The XB-70 was about 186 feet long, 33 feet high, with a wingspan of 105 feet. Originally conceived as an advanced bomber for the United States Air Force, the XB-70 was limited to production of two aircraft when it was decided to limit the aircraft's mission to flight research. The first flight of the XB-70 was made on Sept. 21, 1964. The number two XB-70 was destroyed in a mid-air collision on June 8, 1966. Program management of the NASA-USAF research effort was assigned to NASA in March 1967. The final flight was flown on Feb. 4, 1969. Designed by North American Aviation (later North American Rockwell and still later, a division of Boeing) the XB-70 had a long fuselage with a canard or horizontal stabilizer mounted just behind the crew compartment. It had a sharply swept 65

  2. Epitaxial semimetallic HfxZr1-xB2 templates for optoelectronic integration on silicon

    NASA Astrophysics Data System (ADS)

    Roucka, Radek; An, YuJin; Chizmeshya, Andrew V. G.; Tolle, John; Kouvetakis, John; D'Costa, Vijay R.; Menéndez, José; Crozier, Peter

    2006-12-01

    High-quality heteroepitaxial HfxZr1-xB2 (x = 0-1) buffers were grown directly on Si(111). The compositional dependence of the film structure and ab initio elastic constants were used to show that hexagonal HfxZr1-xB2 possesses a tensile in-plane strain (0.5%) as grown. High-quality HfB2 films were also grown on strain-compensating ZrB2-buffered Si(111). Initial reflectivity measurements of thick ZrB2 films agree with first-principles calculations, which predict that the reflectivity of HfB2 increases by 20% relative to ZrB2 in the 2-8 eV range. These tunable structural, thermoelastic, and optical properties suggest that HfxZr1-xB2 templates should be suitable for broad integration of III-nitrides with Si.

  3. One-dimensional full wave simulation on XB mode conversion in electron cyclotron heating

    SciTech Connect

    Kim, S. H.; Lee, H. Y.; Jo, J. G.; Hwang, Y. S.

    2014-06-15

    The XB mode conversion in electron cyclotron resonance frequency heating has been studied in detail through 1D full-wave simulation. The field pattern depends on the density scale length, and the wave absorption near the upper hybrid resonance is maximized beyond the R(X) mode cutoff density for an optimized density scale length. The simulated mode conversion efficiency has been compared with that of an analytic formula, showing good agreement except for the phase-dependent term of the X wave. The mode conversion efficiency is calculated for oblique injections as well, and it is found that the efficiency decreases as the injection angle increases. A short magnetic field scale length is confirmed to relax the short density scale length condition that maximizes the XB mode conversion efficiency. Finally, the simulation code is used to analyze the mode conversion and power absorption of a pre-ionization plasma in the Versatile Experiment Spherical Torus.

  4. Dosimetric evaluation of photon dose calculation under jaw and MLC shielding

    SciTech Connect

    Fogliata, A.; Clivio, A.; Vanetti, E.; Nicolini, G.; Belosi, M. F.; Cozzi, L.

    2013-10-15

    Purpose: The accuracy of photon dose calculation algorithms in out-of-field regions is often neglected, despite its importance for organs at risk and peripheral dose evaluation. The present work has assessed this for the anisotropic analytical algorithm (AAA) and the Acuros-XB algorithms implemented in the Eclipse treatment planning system. Specifically, the regions shielded by the jaw, or the MLC, or both MLC and jaw for flattened and unflattened beams have been studied.Methods: The accuracy in out-of-field dose under different conditions was studied for two different algorithms. Measured depth doses out of the field, for different field sizes and various distances from the beam edge were compared with the corresponding AAA and Acuros-XB calculations in water. Four volumetric modulated arc therapy plans (in the RapidArc form) were optimized in a water equivalent phantom, PTW Octavius, to obtain a region always shielded by the MLC (or MLC and jaw) during the delivery. Doses to different points located in the shielded region and in a target-like structure were measured with an ion chamber, and results were compared with the AAA and Acuros-XB calculations. Photon beams of 6 and 10 MV, flattened and unflattened were used for the tests.Results: Good agreement between calculated and measured depth doses was found using both algorithms for all points measured at depth greater than 3 cm. The mean dose differences (±1SD) were −8%± 16%, −3%± 15%, −16%± 18%, and −9%± 16% for measurements vs AAA calculations and −10%± 14%, −5%± 12%, −19%± 17%, and −13%± 14% for Acuros-XB, for 6X, 6 flattening-filter free (FFF), 10X, and 10FFF beams, respectively. The same figures for dose differences relative to the open beam central axis dose were: −0.1%± 0.3%, 0.0%± 0.4%, −0.3%± 0.3%, and −0.1%± 0.3% for AAA and −0.2%± 0.4%, −0.1%± 0.4%, −0.5%± 0.5%, and −0.3%± 0.4% for Acuros-XB. Buildup dose was overestimated with AAA, while Acuros-XB gave
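    The two sets of percentages quoted above differ only in their normalization: the same absolute discrepancies appear large when divided by the small local out-of-field dose and negligible when divided by the open-beam central-axis dose. A minimal sketch of that arithmetic, with made-up numbers rather than the published measurements:

```python
# Illustration of the two normalizations used above for out-of-field dose
# differences. Values are made-up examples, not the published data.
import numpy as np

measured = np.array([0.50, 0.30, 0.20, 0.12])    # out-of-field doses, % of CAX dose
calculated = np.array([0.46, 0.27, 0.17, 0.10])  # algorithm prediction, same units
cax_dose = 100.0                                 # open-beam central-axis dose (= 100%)

diff = calculated - measured

# Normalization 1: relative to the local measured dose (numbers look large)
rel_local = 100.0 * diff / measured
print("vs local dose: mean %.1f%%  SD %.1f%%" % (rel_local.mean(), rel_local.std(ddof=1)))

# Normalization 2: relative to the central-axis dose (numbers look tiny)
rel_cax = 100.0 * diff / cax_dose
print("vs CAX dose:   mean %.2f%%  SD %.2f%%" % (rel_cax.mean(), rel_cax.std(ddof=1)))
```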

  5. Performance of the 19XB 10-Stage Axial-Flow Compressor

    NASA Technical Reports Server (NTRS)

    Downing, Richard M.; Finger, Harold B.

    1947-01-01

    The 19XB compressor, which replaces the 19B compressor and has the same length and diameter as the 19B compressor, was designed with 10 stages to deliver 30 pounds of air per second for a pressure ratio of 4.17 at an equivalent speed of 17,000 rpm; the 19B was designed with six stages for a pressure ratio of 2.7 at the same weight flow and speed as the 19XB compressor. The performance characteristics of the new compressor were determined at the NACA Cleveland laboratory at the request of the Bureau of Aeronautics, Navy Department. Results are presented of the investigation made to evaluate the over-all performance of the compressor, the effects of possible leakage past the rotor rear air seal, the effects of inserting instruments in each row of stator blades and in the first row of outlet guide vanes, and the effects of changing the temperature and the pressure of the inlet air. The results of the interstage surveys are also presented.

  6. Rotation-vibration energy level clustering in the X˜ B1 ground electronic state of PH2

    NASA Astrophysics Data System (ADS)

    Yurchenko, S. N.; Thiel, W.; Jensen, Per; Bunker, P. R.

    2006-10-01

    We use previously determined potential energy surfaces for the Renner-coupled X˜ B1 and A˜ A1 electronic states of the phosphino (PH2) free radical in a calculation of the energies and wavefunctions of highly excited rotational and vibrational energy levels of the X˜ state. We show how spin-orbit coupling, the Renner effect, rotational excitation, and vibrational excitation affect the clustered energy level patterns that occur. We consider both 4-fold rotational energy level clustering caused by centrifugal distortion, and vibrational energy level pairing caused by local mode behaviour. We also calculate ab initio dipole moment surfaces for the X˜ and A˜ states, and the X˜-A˜ transition moment surface, in order to obtain spectral intensities.

  7. Valence fluctuations of europium in the boride Eu4Pd(29+x)B8.

    PubMed

    Gumeniuk, Roman; Schnelle, Walter; Ahmida, Mahmoud A; Abd-Elmeguid, Mohsen M; Kvashnina, Kristina O; Tsirlin, Alexander A; Leithe-Jasper, Andreas; Geibel, Christoph

    2016-03-23

    We synthesized a high-quality sample of the boride Eu4Pd(29+x)B8 (x  =  0.76) and studied its structural and physical properties. Its tetragonal structure was solved by direct methods and confirmed to belong to the Eu4Pd29B8 type. All studied physical properties indicate a valence fluctuating Eu state, with a valence decreasing continuously from about 2.9 at 5 K to 2.7 at 300 K. Maxima in the T dependence of the susceptibility and thermopower at around 135 K and 120 K, respectively, indicate a valence fluctuation energy scale on the order of 300 K. Analysis of the magnetic susceptibility evidences some inconsistencies when using the ionic interconfigurational fluctuation (ICF) model, thus suggesting a stronger relevance of hybridization between 4f and valence electrons compared to standard valence-fluctuating Eu systems. PMID:26895077

  8. Measured Sonic Boom Signatures Above and Below the XB-70 Airplane Flying at Mach 1.5 and 37,000 Feet

    NASA Technical Reports Server (NTRS)

    Maglieri, Domenic J.; Henderson, Herbert R.; Tinetti, Ana F.

    2011-01-01

    During the 1966-67 Edwards Air Force Base (EAFB) National Sonic Boom Evaluation Program, a series of in-flight flow-field measurements were made above and below the USAF XB-70 using an instrumented NASA F-104 aircraft with a specially designed nose probe. These were accomplished in the three XB-70 flights at about Mach 1.5 at about 37,000 ft. and gross weights of about 350,000 lbs. Six supersonic passes with the F-104 probe aircraft were made through the XB-70 shock flow-field; one above and five below the XB-70. Separation distances ranged from about 3000 ft. above and 7000 ft. to the side of the XB-70 and about 2000 ft. and 5000 ft. below the XB-70. Complex near-field "sawtooth-type" signatures were observed in all cases. At ground level, the XB-70 shock waves had not coalesced into the two-shock classical sonic boom N-wave signature, but contained three shocks. Included in this report is a description of the generating and probe airplanes, the in-flight and ground pressure measuring instrumentation, the flight test procedure and aircraft positioning, surface and upper air weather observations, and the six in-flight pressure signatures from the three flights.

  9. In vivo verification of radiation dose delivered to healthy tissue during radiotherapy for breast cancer

    NASA Astrophysics Data System (ADS)

    Lonski, P.; Taylor, M. L.; Hackworth, W.; Phipps, A.; Franich, R. D.; Kron, T.

    2014-03-01

    Different treatment planning system (TPS) algorithms calculate radiation dose in different ways. This work compares measurements made in vivo to the dose calculated at out-of-field locations using three different commercially available algorithms in the Eclipse treatment planning system. LiF: Mg, Cu, P thermoluminescent dosimeter (TLD) chips were placed with 1 cm build-up at six locations on the contralateral side of 5 patients undergoing radiotherapy for breast cancer. TLD readings were compared to calculations of Pencil Beam Convolution (PBC), Anisotropic Analytical Algorithm (AAA) and Acuros XB (XB). AAA predicted zero dose at points beyond 16 cm from the field edge. In the same region PBC returned an unrealistically constant result independent of distance and XB showed good agreement to measured data although consistently underestimated by ~0.1 % of the prescription dose. At points closer to the field edge XB was the superior algorithm, exhibiting agreement with TLD results to within 15 % of measured dose. Both AAA and PBC showed mixed agreement, with overall discrepancies considerably greater than XB. While XB is certainly the preferable algorithm, it should be noted that TPS algorithms in general are not designed to calculate dose at peripheral locations and calculation results in such regions should be treated with caution.

  10. The origin of the n-type behavior in rare earth borocarbide Y1-xB28.5C4.

    PubMed

    Mori, Takao; Nishimura, Toshiyuki; Schnelle, Walter; Burkhardt, Ulrich; Grin, Yuri

    2014-10-28

    Synthesis conditions, morphology, and thermoelectric properties of Y1-xB28.5C4 were investigated. Y1-xB28.5C4 is the compound with the lowest metal content in a series of homologous rare earth borocarbonitrides, which have been attracting interest as high temperature thermoelectric materials because they can embody the long-awaited counterpart to boron carbide, one of the few thermoelectric materials with a history of commercialization. It was revealed that the presence of boron carbide inclusions was the origin of the p-type behavior previously observed for Y1-xB28.5C4 in contrast to Y1-xB15.5CN and Y1-xB22C2N. In comparison with that of previous small flux-grown single crystals, a metal-poor composition of YB40C6 (Y0.71B28.5C4) in the synthesis successfully yielded sintered bulk Y1-xB28.5C4 samples apparently free of boron carbide inclusions. "Pure" Y1-xB28.5C4 was found to exhibit the same attractive n-type behavior as the other rare earth borocarbonitrides even though it is the most metal-poor compound among the series. Calculations of the electronic structure were carried out for Y1-xB28.5C4 as a representative of the series of homologous compounds and reveal a pseudo gap-like electronic density of states near the Fermi level mainly originating from the covalent borocarbonitride network. PMID:25091113

  11. Signature of the presence of a third body orbiting around XB 1916-053

    NASA Astrophysics Data System (ADS)

    Iaria, R.; Di Salvo, T.; Gambino, A. F.; Del Santo, M.; Romano, P.; Matranga, M.; Galiano, C. G.; Scarano, F.; Riggio, A.; Sanna, A.; Pintore, F.; Burderi, L.

    2015-10-01

    Context. The ultra-compact dipping source XB 1916-053 has an orbital period of close to 50 min and a companion star with a very low mass (less than 0.1 M⊙). The orbital period derivative of the source was estimated to be 1.5(3) × 10-11 s/s through analysing the delays associated with the dip arrival times obtained from observations spanning 25 years, from 1978 to 2002. Aims: The known orbital period derivative is extremely large and can be explained by invoking an extreme, non-conservative mass transfer rate that is not easily justifiable. We extended the analysed data from 1978 to 2014, by spanning 37 years, to verify whether a larger sample of data can be fitted with a quadratic term or a different scenario has to be considered. Methods: We obtained 27 delays associated with the dip arrival times from data covering 37 years and used different models to fit the time delays with respect to a constant period model. Results: We find that the quadratic form alone does not fit the data. The data are well fitted using a sinusoidal term plus a quadratic function or, alternatively, with a series of sinusoidal terms that can be associated with a modulation of the dip arrival times due to the presence of a third body that has an elliptical orbit. We infer that for a conservative mass transfer scenario the modulation of the delays can be explained by invoking the presence of a third body with mass between 0.10-0.14 M⊙, orbital period around the X-ray binary system of close to 51 yr and an eccentricity of 0.28 ± 0.15. In a non-conservative mass transfer scenario we estimate that the fraction of matter yielded by the degenerate companion star and accreted onto the neutron star is β = 0.08, the neutron star mass is ≥2.2 M⊙, and the companion star mass is 0.028 M⊙. In this case, we explain the sinusoidal modulation of the delays by invoking the presence of a third body with orbital period of 26 yr and mass of 0.055 M⊙. Conclusions: From the analysis of the delays
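    The model comparison described above (a quadratic ephemeris alone versus a quadratic term plus a sinusoidal modulation from a third body) amounts to least-squares fitting of the dip arrival-time delays. A minimal sketch with synthetic delays and illustrative parameter values, not the published data:

```python
# Sketch of fitting dip arrival-time delays with a quadratic term alone and
# with quadratic + sinusoid (third-body modulation), as described above.
# The data here are synthetic; all parameter values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def quadratic(t, a, b, c):
    # a + b*t + c*t^2; c encodes the orbital period derivative
    return a + b * t + c * t**2

def quad_plus_sine(t, a, b, c, A, P, phi):
    # quadratic ephemeris plus a sinusoidal modulation of period P (third body)
    return quadratic(t, a, b, c) + A * np.sin(2 * np.pi * t / P + phi)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 37.0, 27)                     # years since first observation
true = quad_plus_sine(t, 10.0, 5.0, 0.8, 80.0, 26.0, 0.3)
delays = true + rng.normal(0.0, 15.0, t.size)      # delays in seconds, with noise

p_quad, _ = curve_fit(quadratic, t, delays)
p_full, _ = curve_fit(quad_plus_sine, t, delays,
                      p0=[0.0, 0.0, 1.0, 50.0, 25.0, 0.0], maxfev=20000)

rss_quad = np.sum((delays - quadratic(t, *p_quad))**2)
rss_full = np.sum((delays - quad_plus_sine(t, *p_full))**2)
print("residual sum of squares: quadratic %.0f, quadratic+sine %.0f"
      % (rss_quad, rss_full))
```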

  12. Simulated Altitude Performance of Combustor of Westinghouse 19XB-1 Jet-Propulsion Engine

    NASA Technical Reports Server (NTRS)

    Childs, J. Howard; McCafferty, Richard J.

    1948-01-01

    A 19XB-1 combustor was operated under conditions simulating zero-ram operation of the 19XB-1 turbojet engine at various altitudes and engine speeds. The combustion efficiencies and the altitude operational limits were determined; data were also obtained on the character of the combustion, the pressure drop through the combustor, and the combustor-outlet temperature and velocity profiles. At altitudes about 10,000 feet below the operational limits, the flames were yellow and steady and the temperature rise through the combustor increased with fuel-air ratio throughout the range of fuel-air ratios investigated. At altitudes near the operational limits, the flames were blue and flickering and the combustor was sluggish in its response to changes in fuel flow. At these high altitudes, the temperature rise through the combustor increased very slowly as the fuel flow was increased and attained a maximum at a fuel-air ratio much leaner than the over-all stoichiometric; further increases in fuel flow resulted in decreased values of combustor temperature rise and increased resonance until a rich-limit blow-out occurred. The approximate operational ceiling of the engine as determined by the combustor, using AN-F-28, Amendment-3, fuel, was 30,400 feet at a simulated engine speed of 7500 rpm and increased as the engine speed was increased. At an engine speed of 16,000 rpm, the operational ceiling was approximately 48,000 feet. Throughout the range of simulated altitudes and engine speeds investigated, the combustion efficiency increased with increasing engine speed and with decreasing altitude. The combustion efficiency varied from over 99 percent at operating conditions simulating high engine speed and low altitude operation to less than 50 percent at conditions simulating operation at altitudes near the operational limits. The isothermal total pressure drop through the combustor was 1.82 times as great as the inlet dynamic pressure. As expected from theoretical

  13. Induction of truncated form of tenascin-X (XB-S) through dissociation of HDAC1 from SP-1/HDAC1 complex in response to hypoxic conditions

    SciTech Connect

    Kato, Akari; Endo, Toshiya; Abiko, Shun; Ariga, Hiroyoshi; Matsumoto, Ken-ichi

    2008-08-15

    XB-S is an amino-terminal truncated protein of tenascin-X (TNX) in humans. The levels of the XB-S transcript, but not those of TNX transcripts, were increased upon hypoxia. We identified a critical hypoxia-responsive element (HRE) localized to a GT-rich element positioned from -1410 to -1368 in the XB-S promoter. Using an electrophoretic mobility shift assay (EMSA), we found that the HRE forms a DNA-protein complex with Sp1 and that the GG positioned at -1379 and -1378 is essential for the binding of the nuclear complex. Transfection experiments in SL2 cells, an Sp1-deficient model system, with an Sp1 expression vector demonstrated that the region from -1380 to -1371, an HRE, is sufficient for efficient activation of the XB-S promoter upon hypoxia. The EMSA and a chromatin immunoprecipitation (ChIP) assay showed that Sp1, together with the transcriptional repressor histone deacetylase 1 (HDAC1), binds to the HRE of the XB-S promoter under normoxia and that hypoxia causes dissociation of HDAC1 from the Sp1/HDAC1 complex. The HRE promoter activity was induced in the presence of a histone deacetylase inhibitor, trichostatin A, even under normoxia. Our results indicate that the hypoxia-induced activation of the XB-S promoter is regulated through dissociation of HDAC1 from an Sp1-binding HRE site.

  14. Dip Spectroscopy of the Low Mass X-Ray Binary XB 1254-690

    NASA Technical Reports Server (NTRS)

    Smale, Alan P.; Church, M. J.; BalucinskaChurch, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We observed the low-mass X-ray binary XB 1254-690 with the Rossi X-ray Timing Explorer in 2001 May and December. During the first observation, strong dipping on the 3.9-hr orbital period and a high degree of variability were observed, along with "shoulders" approx. 15% deep during extended intervals on each side of the main dips. The first observation also included pronounced flaring activity. The non-dip spectrum obtained using the PCA instrument was well-described by a two-component model consisting of a blackbody with kT = 1.30 +/- 0.10 keV plus a cut-off power law representation of Comptonized emission with power law photon index 1.10 +/- 0.46 and a cut-off energy of 5.9 (+3.0/-1.4) keV. The intensity decrease in the shoulders of dipping is energy-independent, consistent with electron scattering in the outer ionized regions of the absorber. In deep dipping, the depth of dipping reached 100% in the energy band below 5 keV, indicating that all emitting regions were covered by the absorber. Intensity-selected dip spectra were well-fit by a model in which the point-like blackbody is rapidly covered, while the extended Comptonized emission is progressively overlapped by the absorber, with the covering fraction rising to 95% in the deepest portion of the dip. The intensity of this component in the dip spectra could be modeled by a combination of electron scattering and photoelectric absorption. Dipping did not occur during the 2001 December observation, but remarkably, both bursting and flaring were observed contemporaneously.

  15. Hyperactivation of the Human Plasma Membrane Ca2+ Pump PMCA h4xb by Mutation of Glu99 to Lys*

    PubMed Central

    Mazzitelli, Luciana R.; Adamo, Hugo P.

    2014-01-01

    The transport of calcium to the extracellular space carried out by plasma membrane Ca2+ pumps (PMCAs) is essential for maintaining low Ca2+ concentrations in the cytosol of eukaryotic cells. The activity of PMCAs is controlled by autoinhibition. Autoinhibition is relieved by the binding of Ca2+-calmodulin to the calmodulin-binding autoinhibitory sequence, which in the human PMCA is located in the C-terminal segment and results in a PMCA of high maximal velocity of transport and high affinity for Ca2+. Autoinhibition involves the intramolecular interaction between the autoinhibitory domain and a not well defined region of the molecule near the catalytic site. Here we show that the fusion of GFP to the C terminus of the h4xb PMCA causes partial loss of autoinhibition by specifically increasing the Vmax. Mutation of residue Glu99 to Lys in the cytosolic portion of the M1 transmembrane helix at the other end of the molecule brought the Vmax of the h4xb PMCA to near that of the calmodulin-activated enzyme without increasing the apparent affinity for Ca2+. Altogether, the results suggest that the autoinhibitory interaction of the extreme C-terminal segment of the h4 PMCA is disturbed by changes of negatively charged residues of the N-terminal region. This would be consistent with a recently proposed model of an autoinhibited form of the plant ACA8 pump, although some differences are noted. PMID:24584935

  16. Hyperactivation of the human plasma membrane Ca2+ pump PMCA h4xb by mutation of Glu99 to Lys.

    PubMed

    Mazzitelli, Luciana R; Adamo, Hugo P

    2014-04-11

    The transport of calcium to the extracellular space carried out by plasma membrane Ca(2+) pumps (PMCAs) is essential for maintaining low Ca(2+) concentrations in the cytosol of eukaryotic cells. The activity of PMCAs is controlled by autoinhibition. Autoinhibition is relieved by the binding of Ca(2+)-calmodulin to the calmodulin-binding autoinhibitory sequence, which in the human PMCA is located in the C-terminal segment and results in a PMCA of high maximal velocity of transport and high affinity for Ca(2+). Autoinhibition involves the intramolecular interaction between the autoinhibitory domain and a not well defined region of the molecule near the catalytic site. Here we show that the fusion of GFP to the C terminus of the h4xb PMCA causes partial loss of autoinhibition by specifically increasing the Vmax. Mutation of residue Glu(99) to Lys in the cytosolic portion of the M1 transmembrane helix at the other end of the molecule brought the Vmax of the h4xb PMCA to near that of the calmodulin-activated enzyme without increasing the apparent affinity for Ca(2+). Altogether, the results suggest that the autoinhibitory interaction of the extreme C-terminal segment of the h4 PMCA is disturbed by changes of negatively charged residues of the N-terminal region. This would be consistent with a recently proposed model of an autoinhibited form of the plant ACA8 pump, although some differences are noted. PMID:24584935

  17. Direct X-B mode conversion for high-β national spherical torus experiment in nonlinear regime

    SciTech Connect

    Ali Asgarian, M. E-mail: maa@msu.edu; Parvazian, A.; Abbasi, M.; Verboncoeur, J. P.

    2014-09-15

    The electron Bernstein wave (EBW) can be effective for heating and driving currents in spherical tokamak plasmas. Power can be coupled to the EBW via mode conversion of the extraordinary (X) mode wave. The most common and successful approach to studying the conditions for optimized mode conversion to the EBW has been evaluated analytically and numerically using a cold plasma model and an approximate kinetic model. The major drawback in using radio frequency waves was the lack of continuous wave sources at very high frequencies (above the electron plasma frequency), which has since been addressed. A future milestone is to approach the high-power regime, where nonlinear effects become significant, exceeding the limits of validity of the present linear theory. Therefore, one appropriate tool is particle-in-cell (PIC) simulation. The PIC method retains most of the nonlinear physics without approximations. In this work, we study the stages of the direct X-B mode conversion process using the PIC method for an incident wave frequency f_0 = 15 GHz and maximum amplitude E_0 = 10^5 V/m in the National Spherical Torus Experiment (NSTX). The modelling shows a considerable reduction in X-B mode conversion efficiency, C_modelling = 0.43, due to the presence of nonlinearities. Comparison of the system properties with the linear state reveals predominant nonlinear effects: the EBW wavelength and group velocity increase by about 36% and 17%, respectively, relative to the linear regime.

  18. On unusual temperature dependence of the upper critical field in YNi2-xFexB2C

    NASA Astrophysics Data System (ADS)

    Kumary, T. Geetha; Kalavathi, S.; Valsakumar, M. C.; Hariharan, Y.; Radhakrishnan, T. S.

    1997-02-01

    Measurement of the upper critical field in YNi2-xFexB2C is reported for x = 0, 0.05, 0.10, and 0.15. An anomalous positive curvature is observed over a range of temperatures close to Tc for all x. As x is increased, the temperature interval over which the curvature in Hc2(T) is positive is reduced, and the system shows a tendency to go to the usual behaviour exhibited by conventional low temperature superconductors. Most of the theories based on a Fermi liquid normal state seem to be inadequate to understand this anomalous behaviour. It is speculated that this anomalous behaviour of Hc2(T) signifies the presence of strong correlations in pristine YNi2B2C and that strong correlation effects become less and less important upon substitution of Ni with Fe.

  19. Phosphatidylinositol 3-Kinase-Associated Protein (PI3KAP)/XB130 Crosslinks Actin Filaments through Its Actin Binding and Multimerization Properties In Vitro and Enhances Endocytosis in HEK293 Cells.

    PubMed

    Yamanaka, Daisuke; Akama, Takeshi; Chida, Kazuhiro; Minami, Shiro; Ito, Koichi; Hakuno, Fumihiko; Takahashi, Shin-Ichiro

    2016-01-01

    Actin-crosslinking proteins control actin filament networks and bundles and contribute to various cellular functions including regulation of cell migration, cell morphology, and endocytosis. Phosphatidylinositol 3-kinase-associated protein (PI3KAP)/XB130 has been reported to be localized to actin filaments (F-actin) and required for cell migration in thyroid carcinoma cells. Here, we show a role for PI3KAP/XB130 as an actin-crosslinking protein. First, we found that the carboxyl terminal region of PI3KAP/XB130 containing amino acid residues 830-840 was required and sufficient for localization to F-actin in NIH3T3 cells, and this region is directly bound to F-actin in vitro. Moreover, actin-crosslinking assay revealed that recombinant PI3KAP/XB130 crosslinked F-actin. In general, actin-crosslinking proteins often multimerize to assemble multiple actin-binding sites. We then investigated whether PI3KAP/XB130 could form a multimer. Blue native-PAGE analysis showed that recombinant PI3KAP/XB130 was detected at 250-1200 kDa although the molecular mass was approximately 125 kDa, suggesting that PI3KAP/XB130 formed multimers. Furthermore, we found that the amino terminal 40 amino acids were required for this multimerization by co-immunoprecipitation assay in HEK293T cells. Deletion mutants of PI3KAP/XB130 lacking the actin-binding region or the multimerizing region did not crosslink actin filaments, indicating that actin binding and multimerization of PI3KAP/XB130 were necessary to crosslink F-actin. Finally, we examined roles of PI3KAP/XB130 on endocytosis, an actin-related biological process. Overexpression of PI3KAP/XB130 enhanced dextran uptake in HEK 293 cells. However, most of the cells transfected with the deletion mutant lacking the actin-binding region incorporated dextran to a similar extent as control cells. Taken together, these results demonstrate that PI3KAP/XB130 crosslinks F-actin through both its actin-binding region and multimerizing region and plays

  20. Phosphatidylinositol 3-Kinase-Associated Protein (PI3KAP)/XB130 Crosslinks Actin Filaments through Its Actin Binding and Multimerization Properties In Vitro and Enhances Endocytosis in HEK293 Cells

    PubMed Central

    Yamanaka, Daisuke; Akama, Takeshi; Chida, Kazuhiro; Minami, Shiro; Ito, Koichi; Hakuno, Fumihiko; Takahashi, Shin-Ichiro

    2016-01-01

    Actin-crosslinking proteins control actin filament networks and bundles and contribute to various cellular functions including regulation of cell migration, cell morphology, and endocytosis. Phosphatidylinositol 3-kinase-associated protein (PI3KAP)/XB130 has been reported to be localized to actin filaments (F-actin) and required for cell migration in thyroid carcinoma cells. Here, we show a role for PI3KAP/XB130 as an actin-crosslinking protein. First, we found that the carboxyl terminal region of PI3KAP/XB130 containing amino acid residues 830–840 was required and sufficient for localization to F-actin in NIH3T3 cells, and this region is directly bound to F-actin in vitro. Moreover, actin-crosslinking assay revealed that recombinant PI3KAP/XB130 crosslinked F-actin. In general, actin-crosslinking proteins often multimerize to assemble multiple actin-binding sites. We then investigated whether PI3KAP/XB130 could form a multimer. Blue native-PAGE analysis showed that recombinant PI3KAP/XB130 was detected at 250–1200 kDa although the molecular mass was approximately 125 kDa, suggesting that PI3KAP/XB130 formed multimers. Furthermore, we found that the amino terminal 40 amino acids were required for this multimerization by co-immunoprecipitation assay in HEK293T cells. Deletion mutants of PI3KAP/XB130 lacking the actin-binding region or the multimerizing region did not crosslink actin filaments, indicating that actin binding and multimerization of PI3KAP/XB130 were necessary to crosslink F-actin. Finally, we examined roles of PI3KAP/XB130 on endocytosis, an actin-related biological process. Overexpression of PI3KAP/XB130 enhanced dextran uptake in HEK 293 cells. However, most of the cells transfected with the deletion mutant lacking the actin-binding region incorporated dextran to a similar extent as control cells. Taken together, these results demonstrate that PI3KAP/XB130 crosslinks F-actin through both its actin-binding region and multimerizing region and

  1. XMM-Newton and Chandra Observations of the M31 Globular Cluster Black Hole Candidate XB135: A Heavyweight Contender Cut Down to Size

    NASA Astrophysics Data System (ADS)

    Barnard, R.; Primini, F.; Garcia, M. R.; Kolb, U. C.; Murray, S. S.

    2015-04-01

    CXOM31 J004252.030+413107.87 is one of the brightest X-ray sources within the D25 region of M31, and is associated with a globular cluster known as B135; we therefore call this X-ray source XB135. XB135 is a low-mass X-ray binary (LMXB) that apparently exhibited hard state characteristics at 0.3-10 keV luminosities of 4-6 × 10^38 erg s^-1, and the hard state is only observed below ~10% Eddington. If true, the accretor would be a high-mass black hole (BH) (≳50 M⊙); such a BH may be formed from direct collapse of a metal-poor, high-mass star, and the very low metallicity of B135 (0.015 Z⊙) makes such a scenario plausible. We have obtained new XMM-Newton and Chandra HRC observations to shed light on the nature of this object. We find from the HRC observation that XB135 is a single point source located close to the center of B135. The new XMM-Newton spectrum is consistent with a rapidly spinning ~10-20 M⊙ BH in the steep power law or thermal dominant state, but inconsistent with the hard state that we previously assumed. We cannot formally reject three-component emission models that have been associated with high-luminosity neutron star (NS) LMXBs (known as Z-sources); however, we prefer a BH accretor. We note that a deeper observation of XB135 could discriminate against an NS accretor.

  2. Optimization of permanent magnetic properties in melt spun Co82-xHf12+xB6 (x = 0-4) nanocomposites

    NASA Astrophysics Data System (ADS)

    Chang, H. W.; Liao, M. C.; Shih, C. W.; Chang, W. C.; Shaw, C. C.

    2015-05-01

    Magnetic properties of melt-spun Co82-xHf12+xB6 ribbons made with various wheel speeds have been studied. The ribbons with x = 0-1 are not easy to crystallize and thus display soft magnetic behavior even at a wheel speed of 10 m/s. In contrast, the ribbons with x = 1.5-4 at optimized wheel speeds exhibit good permanent magnetic properties of Br = 0.41-0.59 T, iHc = 120-400 kA/m, and (BH)max = 10.6-48.1 kJ/m3. The optimal magnetic properties of Br = 0.59 T, iHc = 384 kA/m, and (BH)max = 48.1 kJ/m3 are achieved for Co80Hf14B6 ribbons at a wheel speed of 30 m/s. X-ray diffraction, thermo-magnetic analysis, and transmission electron microscopy results show that the good hard magnetic properties of Co82-xHf12+xB6 ribbons (x = 2-4) originate from the Co11Hf2 phase well coupled with the Co phase. The change of magnetic properties for Co82-xHf12+xB6 ribbons spun at various wheel speeds is correlated with microstructure and phase constitution. The strong exchange-coupling effect between magnetic grains for the ribbons with x = 2-3 at a wheel speed of 30 m/s leads to remarkable permanent magnetic properties. The presented results suggest that the optimized Co82-xHf12+xB6 (x = 2-3) ribbons are more suitable than the others (x = 0-1.5 and 4) for making rare-earth- and Pt-free magnets.
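    The energy product (BH)max quoted above is, by definition, the maximum of -B·H along the second-quadrant demagnetization curve. A minimal sketch of extracting Br and (BH)max from tabulated B(H) data (synthetic toy values, not the published loops):

```python
# Sketch: extract remanence Br and (BH)max from a sampled second-quadrant
# demagnetization curve B(H). The curve below is a synthetic toy example.
import numpy as np

H = np.linspace(-400e3, 0.0, 401)        # applied field, A/m (second quadrant)
B = 0.59 + 4e-7 * np.pi * H * 1.8        # toy linear B(H) in tesla, Br ~ 0.59 T

Br = B[H == 0.0][0]                      # remanence: B at H = 0
energy_product = -B * H                  # J/m^3, positive where B > 0 and H < 0
BHmax = energy_product.max()
print("Br      = %.2f T" % Br)
print("(BH)max = %.1f kJ/m^3" % (BHmax / 1e3))
```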

  3. Strong electron-phonon coupling in Be1-xB2C2: ab initio studies

    NASA Astrophysics Data System (ADS)

    Moudden, A. H.

    2008-07-01

    Several structures for off-stoichiometric beryllium diboride dicarbide Be1-xB2C2 have been designed, and their properties studied using first-principles density functional methods. Among the most stable phases examined, the layered hexagonal structures are shown to exhibit various features in the electronic properties and in the lattice dynamics reminiscent of superconducting magnesium diboride and the alkaline-earth-intercalated graphites. For the substoichiometric composition x ≈ 1/3, the system is found to be metallic with moderately strong electron-phonon coupling, with a predominant contribution arising from high-frequency stretching modes modulating the σ-bonding of the B-C network, and a weaker contribution in the medium-frequency range of the phonon spectrum, arising from the intercalant motion coupled to the π-bonding states. Further, anharmonicity emerging from the proximity of the Fermi level to the σ-band edge helps reduce the phonon softening, hence stabilizing the structure. All these effects appear to combine favourably to produce high-temperature phonon-mediated superconductivity.

  4. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  5. Electronic and vibronic states of the acceptor-bound-exciton complex (A0,X) in CdS. II. Determination of the fine structure of the (A0,XB) electronic states by high-resolution excitation spectroscopy

    NASA Astrophysics Data System (ADS)

    Gutowski, J.

    1985-03-01

    In a previous paper [R. Baumert, I. Broser, J. Gutowski, and A. Hoffman, Phys. Rev. B 27, 6263 (1983)] it was shown that high-density, high-resolution excitation spectroscopy gives new information on the electronic and vibronic excited states of the acceptor-bound-exciton complex (A0,XA) with two holes from the A valence band in CdS. We now report on corresponding results for the (A0,XB) configuration, which includes one hole from the second (B) valence band. This complex is unstable against a very fast B→A hole conversion and therefore gives rise to a set of excitation resonances of the I1 luminescence arising from the (A0,XA) recombination. A detailed theoretical analysis of the energetic structure of the (A0,XB) complex, including its dependence on the excitation intensity and on an applied magnetic field, allows the correct assignment of the excitation resonances to the (A0,XB) fine-structure levels originating from the interparticle exchange interactions. It is shown that the magnetic field is a suitable means of distinguishing the different (A0,XB) ground-state levels. The magnetic field also creates allowed transitions which are dipole forbidden in the zero-field case. A self-contained model of the (A0,XB) complex can thus be developed, including all symmetry states and yielding adequate values for the exchange energies within the complex.

  6. Altitude-Wind-Tunnel Investigation of the 19B-2, 19B-8, and 19XB-1 Jet Propulsion Engines. 3; Performance and Windmilling Drag Characteristics

    NASA Technical Reports Server (NTRS)

    Fleming, WIlliam A.; Dietz, Robert O., Jr.

    1957-01-01

    The performance characteristics of the 19B-8 and 19XB-1 turbojet engines and the windmilling-drag characteristics of the 19B-6 engine were determined in the Cleveland altitude wind tunnel. The investigations were conducted on the 19B-8 engine at simulated altitudes from 5000 to 25,000 feet with various free-stream ram-pressure ratios, and on the 19XB-1 engine at simulated altitudes from 5000 to 30,000 feet with approximately static free-stream conditions. Data for these two engines are presented to show the effect of altitude, free-stream ram-pressure ratio, and tail-pipe-nozzle area on engine performance. A 21-percent reduction in tail-pipe-nozzle area of the 19B-8 engine increased the jet thrust 43 percent, the net thrust 72 percent, and the fuel consumption 64 percent. An increase in free-stream ram-pressure ratio raised the jet thrust and the air flow and lowered the net thrust throughout the entire range of engine speeds for the 19B-8 engine. At similar operating conditions, the corrected jet thrust and corrected air flow were approximately the same for both engines, and the corrected specific fuel consumption based on jet thrust was lower for the 19XB-1 engine than for the 19B-8 engine. The thrust and air-flow data obtained with both engines at various altitudes for a given free-stream ram-pressure ratio were generalized to standard sea-level atmospheric conditions. The performance parameters involving fuel consumption generalized only at high engine speeds at simulated altitudes as high as 15,000 feet. The windmilling drag of the 19B-8 engine increased rapidly as the airspeed was increased.

  7. Suppression of superconductivity in LuxZr1 -xB12: Evidence of static magnetic moments induced by nonmagnetic impurities

    NASA Astrophysics Data System (ADS)

    Sluchanko, N. E.; Azarevich, A. N.; Anisimov, M. A.; Bogach, A. V.; Gavrilkin, S. Yu.; Gilmanov, M. I.; Glushkov, V. V.; Demishev, S. V.; Khoroshilov, A. L.; Dukhnenko, A. V.; Mitsen, K. V.; Shitsevalova, N. Yu.; Filippov, V. B.; Voronov, V. V.; Flachbart, K.

    2016-02-01

    Based on low-temperature resistivity, heat capacity, and magnetization investigations, we show that the unusually strong suppression of superconductivity in LuxZr1-xB12 (x < 8%) BCS-type superconductors is caused by the emergence of static spin polarization in the vicinity of nonmagnetic lutetium impurities. The analysis of the obtained results points to a formation of static magnetic moments with μeff ≈ 6 μB per Lu3+ ion (1S0 ground state, 4f14 configuration) incorporated in the superconducting ZrB12 matrix. The size of these spin-polarized nanodomains was estimated to be about 5 Å.

  8. Excitation of ion Bernstein waves as the dominant parametric decay channel in direct X-B mode conversion for typical spherical torus

    NASA Astrophysics Data System (ADS)

    Abbasi, Mustafa; Sadeghi, Yahya; Sobhanian, Samad; Asgarian, Mohammad Ali

    2016-03-01

    The electron Bernstein wave (EBW) is typically the only wave in the electron cyclotron (EC) range that can be applied in spherical tokamaks for heating and current drive (H&CD). Spherical tokamaks (STs) generally operate in high-β regimes, in which the usual EC ordinary (O) and extraordinary (X) modes are cut off. Since the existence of EBWs in the nonlinear regime was recently investigated, the next step is to study the nonlinear phenomena that are predicted to occur at high levels of injected power. In this regard, parametric instabilities are considered the major loss channels in the X-B conversion, so their effects in the UHR region, where they can reduce the X-B conversion efficiency, must be taken into account. In the case of EBW heating (EBH) at high power density, nonlinear effects can arise. In particular, at the UHR position the group velocity is strongly reduced, which creates a high energy density and consequently a high-amplitude electric field. A part of the input wave can therefore decay into daughter waves via parametric instability (PI). In the present work, the excitation of ion Bernstein waves as the dominant decay channel is investigated, and an estimate of the threshold power in terms of experimental parameters related to the fundamental mode of the instability is proposed.

  9. Structure and magnetic properties of (Nd,Dy)16(Fe,Co)76-xTixB8 powders prepared by mechanical alloying

    NASA Astrophysics Data System (ADS)

    Jakubowicz, J.; Le Breton, J.-M.

    2006-06-01

    Nanocrystalline (Nd,Dy)16(Fe,Co)76-xTixB8 magnets were prepared by mechanical alloying and subsequent heat treatment at 973-1073 K for 30-60 min. An addition of 0.5 at.% Ti results in an increase of coercivity from 796 to 1115 kA m^-1. Partial substitution of Nd by Dy results in an additional increase of coercivity up to 1234 kA m^-1. Mössbauer investigations show that for x ⩽ 1 the (Nd,Dy)16(Fe,Co)76-xTixB8 powders are single phase. For higher Ti contents (x > 1) the mechanically alloyed powders heat treated at 973 K are no longer single phase, and the coercivity decreases due to the presence of an amorphous phase. A heat treatment at a higher temperature (1073 K) for a longer time (1 h) results in full recrystallisation of the powders. The mean hyperfine field of the Nd2Fe14B phase decreases for titanium contents of 0 ⩽ x ⩽ 1 and remains constant for x > 1. This indicates that the Ti content in the Nd2Fe14B phase reaches its maximum value.

  10. Search for the Xb and other hidden-beauty states in the π+π- ϒ (1 S) channel at ATLAS

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Almond, J.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baas, A. E.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Backus Mayes, J.; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Bansal, V.; Bansil, H. S.; Barak, L.; Baranov, S. P.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batley, J. R.; Battaglia, M.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernat, P.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, C.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Black, C. W.; Black, J. 
E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boddy, C. R.; Boehler, M.; Boek, T. T.; Bogaerts, J. A.; Bogdanchikov, A. G.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozic, I.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brennan, A. J.; Brenner, R.; Bressler, S.; Bristow, K.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, J.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.

    2015-01-01

    This Letter presents a search for a hidden-beauty counterpart of the X(3872) in the mass ranges of 10.05-10.31 GeV and 10.40-11.00 GeV, in the channel Xb → π+π- ϒ(1S) (→ μ+μ-), using 16.2 fb^-1 of √s = 8 TeV pp collision data collected by the ATLAS detector at the LHC. No evidence for new narrow states is found, and upper limits are set on the product of the Xb cross section and branching fraction, relative to those of the ϒ(2S), at the 95% confidence level using the CLs approach. These limits range from 0.8% to 4.0%, depending on mass. For masses above 10.1 GeV, the expected upper limits from this analysis are the most restrictive to date. Searches for production of the ϒ(1^3 D_J), ϒ(10860), and ϒ(11020) states also reveal no significant signals.
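    For reference, the CLs prescription mentioned above is the standard one: the tail probabilities of the test statistic under the signal-plus-background and background-only hypotheses are combined as a ratio, and a cross-section value is excluded at 95% confidence when that ratio falls below 0.05. This is generic background rather than a detail taken from this particular analysis:

```latex
% Generic CLs construction (standard definition, not quoted from this Letter):
% CL_{s+b} and CL_b are the tail probabilities of the test statistic under the
% signal-plus-background and background-only hypotheses, respectively.
\mathrm{CL}_s \;=\; \frac{\mathrm{CL}_{s+b}}{\mathrm{CL}_b},
\qquad
\text{exclusion at 95\% C.L. when } \mathrm{CL}_s \le 0.05 .
```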

  11. Charge transport in HoxLu1 -xB12 : Separating positive and negative magnetoresistance in metals with magnetic ions

    NASA Astrophysics Data System (ADS)

    Sluchanko, N. E.; Khoroshilov, A. L.; Anisimov, M. A.; Azarevich, A. N.; Bogach, A. V.; Glushkov, V. V.; Demishev, S. V.; Krasnorussky, V. N.; Samarin, N. A.; Shitsevalova, N. Yu.; Filippov, V. B.; Levchenko, A. V.; Pristas, G.; Gabani, S.; Flachbart, K.

    2015-06-01

    The magnetoresistance (MR) Δ ρ /ρ of the cage-glass compound HoxLu1 -xB12 with various concentrations of magnetic holmium ions (x ≤0.5 ) has been studied in detail concurrently with magnetization M (T ) and Hall effect investigations on high-quality single crystals at temperatures 1.9-120 K and in magnetic field up to 80 kOe. The undertaken analysis of Δ ρ /ρ allows us to conclude that the large negative magnetoresistance (nMR) observed in the vicinity of the Néel temperature is caused by scattering of charge carriers on magnetic clusters of Ho3 + ions, and that these nanosize regions with antiferromagnetic (AF) exchange inside may be considered as short-range-order AF domains. It was shown that the Yosida relation -Δ ρ /ρ ˜M2 provides an adequate description of the nMR effect for the case of Langevin-type behavior of magnetization. Moreover, a reduction of Ho-ion effective magnetic moments in the range 3-9 μB was found to develop both with temperature lowering and under the increase of holmium content. A phenomenological description of the large positive quadratic contribution Δ ρ /ρ ˜μD2H2 which dominates in HoxLu1 -xB12 in the intermediate temperature range 20-120 K allows us to estimate the drift mobility exponential changes μD˜T-α with α =1.3 -1.6 depending on Ho concentration. An even more comprehensive behavior of magnetoresistance has been found in the AF state of HoxLu1 -xB12 where an additional linear positive component was observed and attributed to charge-carrier scattering on the spin density wave (SDW). High-precision measurements of Δ ρ /ρ =f (H ,T ) have allowed us also to reconstruct the magnetic H-T phase diagram of Ho0.5Lu0.5B12 and to resolve its magnetic structure as a superposition of 4 f (based on localized moments) and 5 d (based on SDW) components.
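    Collecting the phenomenological relations quoted in this abstract in one place (the sub- and superscripts are flattened above; these are the same relations restated in standard notation):

```latex
% Relations quoted in the abstract above: negative (Yosida-type)
% magnetoresistance, positive quadratic contribution, and the temperature
% dependence of the drift mobility (alpha = 1.3-1.6 depending on Ho content).
-\left(\frac{\Delta\rho}{\rho}\right)_{\mathrm{neg}} \propto M^{2},
\qquad
\left(\frac{\Delta\rho}{\rho}\right)_{\mathrm{pos}} \propto \mu_{D}^{2} H^{2},
\qquad
\mu_{D} \propto T^{-\alpha},\quad \alpha \approx 1.3\text{--}1.6 .
```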

  12. Investigation of the Stability and Control Characteristics of a 1/20-Scale Model of the Consolidated Vultee XB-53 Airplane in the Langley Free-Flight Tunnel

    NASA Technical Reports Server (NTRS)

    Bennett, Charles V.

    1947-01-01

    An investigation of the low-speed, power-off stability and control characteristics of a 1/20-scale model of the Consolidated Vultee XB-53 airplane has been conducted in the Langley free-flight tunnel. In the investigation it was found that with flaps neutral satisfactory flight behavior at low speeds was obtainable with an increase in height of the vertical tail and with the inboard slats opened. In the flap-down slat-open condition the longitudinal stability was satisfactory, but it was impossible to obtain satisfactory lateral-flight characteristics even with the increase in height of the vertical tail because of the negative effective dihedral, low directional stability, and large-adverse yawing moments of the ailerons.

  13. Comparisons of Predictions of the XB-70-1 Longitudinal Stability and Control Derivatives with Flight Results for Six Flight Conditions

    NASA Technical Reports Server (NTRS)

    Wolowicz, C. H.; Yancey, R. B.

    1973-01-01

    Preliminary correlations of flight-determined and predicted stability and control characteristics of the XB-70-1 reported in NASA TN D-4578 were subject to uncertainties in several areas which necessitated a review of prediction techniques particularly for the longitudinal characteristics. Reevaluation and updating of the original predictions, including aeroelastic corrections, for six specific flight-test conditions resulted in improved correlations of static pitch stability with flight data. The original predictions for the pitch-damping derivative, on the other hand, showed better correlation with flight data than the updated predictions. It appears that additional study is required in the application of aeroelastic corrections to rigid model wind-tunnel data and the theoretical determination of dynamic derivatives for this class of aircraft.

  14. Preparation and properties of a new ternary phase Mg3+xNi7-xB2 (0.17≤x≤0.66) and its Cu-doping effect

    NASA Astrophysics Data System (ADS)

    Liao, Chang-Zhong; Dong, Cheng; Shih, Kaimin; Zeng, Lingmin; He, Bing; Cao, Wenhuan; Yang, Lihong

    2015-03-01

    In recent years, the materials in the B-Mg-Ni system have been intensively studied due to their excellent properties of hydrogen storage and superconductivity. Solving the crystal structure of phases in this system will facilitate an understanding of the mechanism of their physical properties. In this study, we report the preparation, crystal structure and physical properties of a new ternary phase Mg3+xNi7-xB2 in the B-Mg-Ni system. The Mg3+xNi7-xB2 phase was prepared by solid-state reactions at 1073 K and its crystal structure was determined and refined using X-ray powder diffraction data. The Mg3+xNi7-xB2 phase crystallizes in the Ca3Ni7B2 structure type (space group R-3m, no. 166) with a=4.9496(3)-5.0105(6) Å, c=20.480(1)-20.581(1) Å depending on the x value, where x varies from 0.17 to 0.66. Two samples with nominal compositions Mg10Ni20B6 and Mg12Ni18B6 were characterized by magnetization and electric resistivity measurements in the temperature range from 5 K to room temperature. Both samples exhibited metallic behavior and showed spin-glass-like behavior with a spin freezing temperature (Tf) around 33 K. A study of the Cu-doping effect showed that limited Cu content can be doped into the Mg3+xNi7-xB2 compound and Tf decreases as the Cu content increases.

  15. Equivalent Longitudinal Area Distributions of the B-58 and XB-70-1 Airplanes for Use in Wave Drag and Sonic Boom Calculations

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Maglieri, Domenic J.; Driver, Cornelius; Bobbitt, Percy J.

    2011-01-01

    A detailed geometric description, in wave drag format, has been developed for the Convair B-58 and North American XB-70-1 delta wing airplanes. These descriptions have been placed on electronic files, the contents of which are described in this paper. They are intended for use in wave drag and sonic boom calculations. Included in the electronic file and in the present paper are photographs and 3-view drawings of the two airplanes, tabulated geometric descriptions of each vehicle and its components, and comparisons of the electronic file outputs with existing data. The comparisons include a pictorial of the two airplanes based on the present geometric descriptions, and cross-sectional area distributions for both the normal Mach cuts and oblique Mach cuts above and below the vehicles. Good correlation exists between the area distributions generated in the late 1950s and 1960s and the present files. The availability of these electronic files facilitates further validation of sonic boom prediction codes through the use of two existing data bases on these airplanes, which were acquired in the 1960s and have not been fully exploited.

  16. Competing anisotropies on 3d sub-lattice of YNi{sub 4–x}Co{sub x}B compounds

    SciTech Connect

    Caraballo Vivas, R. J.; Rocco, D. L.; Reis, M. S.; Caldeira, L.; Coelho, A. A.

    2014-08-14

    The magnetic anisotropy of 3d sub-lattices plays an important role in the overall magnetic properties of hard magnets. Intermetallic alloys with boron (R-Co/Ni-B, for instance) belong to this family of hard magnets and are useful for understanding the magnetic behavior of the 3d sub-lattice, especially when the rare earth ions R are nonmagnetic, as in the YCo{sub 4}B ferromagnetic material. Interestingly, YNi{sub 4}B is a paramagnetic material and the Ni ions do not contribute to the magnetic anisotropy. We therefore focused our attention on the YNi{sub 4–x}Co{sub x}B series, with x = 0, 1, 2, 3, and 4. The magnetic anisotropy of these compounds is described in more detail using statistical and preferential models of Co occupation among the possible Wyckoff positions in the CeCo{sub 4}B-type hexagonal structure. We found that the preferential model is the most suitable to explain the experimental magnetization data.

  17. Microstructure and magnetic properties of isotropic bulk NdxFe94-xB6 (x=6,8,10) nanocomposite magnets prepared by spark plasma sintering

    NASA Astrophysics Data System (ADS)

    Yue, Ming; Zhang, Jiuxing; Tian, Meng; Liu, X. B.

    2006-04-01

    Nd2Fe14B/α-Fe isotropic bulk nanocomposite magnets were prepared by the spark plasma sintering (SPS) technique using melt-spun powders with a nominal composition of NdxFe94-xB6, with x=6, 8, and 10. It was found that a higher sintering temperature improved the densification of the magnets while simultaneously deteriorating their magnetic properties due to excess crystal grain growth. An increased compressive pressure led to better magnetic properties and higher density for the SPS magnets. An increase in the Nd amount resulted in a gradual increase in intrinsic coercivity and an obvious reduction of the remanence of the magnets. A magnet with the composition Nd8Fe86B6 possessed a Br of 0.99 T, a Hci of 386 kA/m, and a (BH)max of 101 kJ/m3 under the optimal sintering condition. In addition, microstructure observation using transmission electron microscopy showed that, compared with the starting powders, the full-density magnets nearly maintained the morphology, indicating that there was no sign of pronounced crystal grain growth during the densification process.

  18. A 2.15 hr ORBITAL PERIOD FOR THE LOW-MASS X-RAY BINARY XB 1832-330 IN THE GLOBULAR CLUSTER NGC 6652

    SciTech Connect

    Engel, M. C.; Heinke, C. O.; Sivakoff, G. R.; Elshamouty, K. G.; Edmonds, P. D. E-mail: heinke@ualberta.ca

    2012-03-10

    We present a candidate orbital period for the low-mass X-ray binary (LMXB) XB 1832-330 in the globular cluster NGC 6652 using a 6.5 hr Gemini South observation of the optical counterpart of the system. Light curves in g' and r' for two LMXBs in the cluster, sources A and B in previous literature, were extracted and analyzed for periodicity using the ISIS image subtraction package. A clear sinusoidal modulation is evident in both of A's curves, of amplitude {approx}0.11 mag in g' and {approx}0.065 mag in r', while B's curves exhibit rapid flickering, of amplitude {approx}1 mag in g' and {approx}0.5 mag in r'. A Lomb-Scargle test revealed a 2.15 hr periodic variation in the magnitude of A with a false alarm probability less than 10{sup -11}, and no significant periodicity in the light curve for B. Though it is possible that saturated stars in the vicinity of our sources partially contaminated our signal, the identification of A's binary period is nonetheless robust.

  19. Effect of Slot-Entry Skirt Extensions on Aerodynamic Characteristics of a Wing Section of the XB-36 Airplane Equipped with a Double Slotted Flap

    NASA Technical Reports Server (NTRS)

    Cahill, Jones F.

    1947-01-01

    An investigation was made in the Langley two-dimensional low-turbulence tunnel on a wing section for the XB-36 airplane equipped with a double slotted flap to determine the effect on lift and drag of various slot-entry skirt extensions. A skirt extension of 0.787 deg. was found to provide the best combination of high maximum lift with flap deflected and low drag with flap retracted. The data showed that the maximum lift at intermediate (20 deg. to 45 deg.) flap deflections was lowered considerably by the slot-entry extension, but at high flap deflections the effect was small. An increase in Reynolds number from 2.4 million to 6.0 million increased the maximum lift coefficient at a flap deflection of 55 deg. from 3.12 to 3.30 and from 1.18 to 1.40 for the flap retracted condition, but did not greatly affect the maximum lift coefficient for intermediate flap deflections. The flap and fore flap load data indicated that the maximum lift coefficients at high flap deflections are limited by a breakdown in the flow over the flaps.

  20. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
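
    The abstract above only names the basic GA concepts; as a rough illustration (not the tool described in the report), a minimal selection/crossover/mutation loop on a toy bit-string fitness problem might look like the following sketch, in which all function names and parameter values are invented:

```python
import random

def toy_fitness(bits):
    # Hypothetical fitness: count of 1-bits (the "OneMax" toy problem).
    return sum(bits)

def genetic_algorithm(n_bits=32, pop_size=40, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02):
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=toy_fitness, reverse=True)
        next_pop = scored[:2]                      # elitism: keep the two best
        while len(next_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = max(random.sample(pop, 3), key=toy_fitness)
            p2 = max(random.sample(pop, 3), key=toy_fitness)
            child = p1[:]
            if random.random() < crossover_rate:   # single-point crossover
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]               # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=toy_fitness)

print(toy_fitness(genetic_algorithm()))  # approaches n_bits as the population converges
```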

  1. Altitude-Wind-Tunnel investigation of Westinghouse 19B-2, 19B-8, and 19XB-1 jet-propulsion engines V : combustion chamber performance

    NASA Technical Reports Server (NTRS)

    Boyd, Bemrose

    1948-01-01

    Pressure losses through the combustion chamber and the combustion efficiency of the 19B-2 and 19B-8 jet-propulsion engines and the combustion efficiency of the 19XB-1 jet-propulsion engine are presented. Data were obtained from an investigation of the complete engine in the NACA Cleveland altitude wind tunnel over a range of simulated altitudes from 5000 to 30,000 feet and tunnel Mach numbers from less than 0.100 to 0.455. The combustion-chamber pressure loss due to friction was higher for the 19B-2 combustion chamber than for the 19B-8. The 19B-2 combustion chamber had a screen of 40-percent open area interposed between the compressor outlet and the combustion-chamber inlet. The screen for the 19B-8 combustion chamber had a 60-percent open area, which, except for a small difference in tail-pipe-nozzle outlet area, represents the only point of difference between the standard 19B-2 and 19B-8 combustion chambers. The pressure loss due to heat addition to the flowing gases in the combustion chamber was approximately the same for the 19B-2 and 19B-8 configurations. Altitude and tunnel Mach number had no significant effect on the over-all total-pressure loss through the combustion chamber. A decrease in tail-pipe-nozzle outlet area (tail cone out) resulted in a decrease in combustion-chamber total-pressure loss at high engine speeds.

  2. Swift Reveals a ~5.7 Day Super-orbital Period in the M31 Globular Cluster X-Ray Binary XB158

    NASA Astrophysics Data System (ADS)

    Barnard, R.; Garcia, M. R.; Murray, S. S.

    2015-03-01

    The M31 globular cluster X-ray binary XB158 (a.k.a. Bo 158) exhibits intensity dips on a 2.78 hr period in some observations, but not others. The short period suggests a low mass ratio, and an asymmetric, precessing disk due to additional tidal torques from the donor star since the disk crosses the 3:1 resonance. Previous theoretical three-dimensional smoothed particle hydrodynamical modeling suggested a super-orbital disk precession period 29 ± 1 times the orbital period, i.e., ~81 ± 3 hr. We conducted a Swift monitoring campaign of 30 observations over ~1 month in order to search for evidence of such a super-orbital period. Fitting the 0.3-10 keV Swift X-Ray Telescope luminosity light curve with a sinusoid yielded a period of 5.65 ± 0.05 days, and a >5σ improvement in χ2 over the best fit constant intensity model. A Lomb-Scargle periodogram revealed that periods of 5.4-5.8 days were detected at a >3σ level, with a peak at 5.6 days. We consider this strong evidence for a 5.65 day super-orbital period, ~70% longer than the predicted period. The 0.3-10 keV luminosity varied by a factor of ~5, consistent with variations seen in long-term monitoring from Chandra. We conclude that other X-ray binaries exhibiting similar long-term behavior are likely to also be X-ray binaries with low mass ratios and super-orbital periods.
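
    As a generic illustration of the periodogram step mentioned above (not the authors' pipeline; the cadence, amplitude, and noise level below are invented purely for the demo), an unevenly sampled sinusoid can be searched for a best-fit period with SciPy:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
true_period_days = 5.65                      # value quoted in the abstract, used here only to fake data
t = np.sort(rng.uniform(0.0, 30.0, 30))      # ~30 unevenly spaced observations over one month
y = 1.0 + 0.3 * np.sin(2 * np.pi * t / true_period_days) + 0.05 * rng.normal(size=t.size)

periods = np.linspace(2.0, 10.0, 2000)       # trial periods in days
ang_freqs = 2 * np.pi / periods              # lombscargle expects angular frequencies
power = lombscargle(t, y - y.mean(), ang_freqs)

print("best-fit period: %.2f days" % periods[np.argmax(power)])
```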

  3. The electronic structure, mechanical and thermodynamic properties of Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides

    SciTech Connect

    He, TianWei; Jiang, YeHua E-mail: jfeng@seas.harvard.edu; Zhou, Rong; Feng, Jing E-mail: jfeng@seas.harvard.edu

    2015-08-21

    The mechanical properties, electronic structure, and thermodynamic properties of the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides were calculated by first-principles methods. The elastic constants show that these ternary borides are mechanically stable. The formation enthalpies of the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides are in the range of −118.09 kJ/mol to −40.14 kJ/mol. The electronic structures and chemical bonding characteristics are analyzed using the density of states. Mo{sub 2}FeB{sub 2} has the largest shear and Young's moduli because of its strong chemical bonding; the values are 204.3 GPa and 500.3 GPa, respectively. MoCo{sub 2}B{sub 4} shows the lowest degree of anisotropy due to the lack of strongly directional bonding. The Debye temperature of MoFe{sub 2}B{sub 4} is the largest among the six phases, which means that MoFe{sub 2}B{sub 4} possesses the best thermal conductivity. The enthalpy is approximately a linear function of temperature above 300 K. The entropy of these compounds increases rapidly when the temperature is below 450 K. The Gibbs free energy decreases with increasing temperature. MoCo{sub 2}B{sub 4} has the lowest Gibbs free energy, which indicates the strongest formation ability among the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides.

  4. Wind-tunnel/flight correlation study of aerodynamic characteristics of a large flexible supersonic cruise airplane (XB-70-1). 3: A comparison between characteristics predicted from wind-tunnel measurements and those measured in flight

    NASA Technical Reports Server (NTRS)

    Arnaiz, H. H.; Peterson, J. B., Jr.; Daugherty, J. C.

    1980-01-01

    A program was undertaken by NASA to evaluate the accuracy of a method for predicting the aerodynamic characteristics of large supersonic cruise airplanes. This program compared predicted and flight-measured lift, drag, angle of attack, and control surface deflection for the XB-70-1 airplane for 14 flight conditions with a Mach number range from 0.76 to 2.56. The predictions were derived from the wind-tunnel test data of a 0.03-scale model of the XB-70-1 airplane fabricated to represent the aeroelastically deformed shape at a 2.5 Mach number cruise condition. Corrections for shape variations at the other Mach numbers were included in the prediction. For most cases, differences between predicted and measured values were within the accuracy of the comparison. However, there were significant differences at transonic Mach numbers. At a Mach number of 1.06 differences were as large as 27 percent in the drag coefficients and 20 deg in the elevator deflections. A brief analysis indicated that a significant part of the difference between drag coefficients was due to the incorrect prediction of the control surface deflection required to trim the airplane.

  5. Synthesis and characterizations of water-based ferrofluids of substituted ferrites [Fe1-xBxFe2O4, B=Mn, Co (x=0-1)] for biomedical applications

    NASA Astrophysics Data System (ADS)

    Giri, Jyotsnendu; Pradhan, Pallab; Somani, Vaibhav; Chelawat, Hitesh; Chhatre, Shreerang; Banerjee, Rinti; Bahadur, Dhirendra

    Nanomagnetic particles have great potential in biomedical applications such as MRI contrast enhancement, magnetic separation, targeted delivery, and hyperthermia. In this paper, we have explored the possibility of biomedical applications of [Fe1-xBxFe2O4, B=Mn, Co] ferrites. Superparamagnetic particles of substituted ferrites [Fe1-xBxFe2O4, B=Mn, Co (x=0-1)] and their fatty acid coated water-based ferrofluids have been successfully prepared by a co-precipitation technique using NH4OH/TMAH (tetramethylammonium hydroxide) as base. An in vitro cytocompatibility study of the different magnetic fluids was done using HeLa (human cervical carcinoma) cell lines. The Co2+-substituted ferrite systems (e.g. CoFe2O4) are more toxic than the Mn2+-substituted ferrite systems (e.g. MnFe2O4, Fe0.6Mn0.4Fe2O4). The latter are as cytocompatible as Fe3O4. Thus, Fe1-xMnxFe2O4 could be useful in biomedical applications such as MRI contrast agents and hyperthermia treatment of cancer.

  6. Sci—Thur AM: YIS - 05: 10X-FFF VMAT for Lung SABR: an Investigation of Peripheral Dose

    SciTech Connect

    Mader, J; Mestrovic, A

    2014-08-15

    Flattening Filter Free (FFF) beams exhibit high dose rates and reduced head scatter, leaf transmission, and leakage radiation. For VMAT lung SABR, treatment time can be significantly reduced using high dose rate FFF beams while maintaining plan quality and accuracy. Another possible advantage offered by FFF beams for VMAT lung SABR is a reduction in peripheral dose. The focus of this study was to investigate and quantify the reduction of peripheral dose offered by FFF beams for VMAT lung SABR. The peripheral doses delivered by VMAT lung SABR treatments using FFF and flattened beams were investigated for the Varian Truebeam linac. This study was conducted in three stages: (1) ion chamber measurement of peripheral dose for various plans; (2) validation of AAA, Acuros XB, and Monte Carlo for peripheral dose using measured data; and (3) use of the validated Monte Carlo model to evaluate peripheral doses for 6 VMAT lung SABR treatments. Three energies, 6X, 10X, and 10X-FFF, were used for all stages. Measured data indicate that 10X-FFF delivers the lowest peripheral dose of the three energies studied. The AAA and Acuros XB dose calculation algorithms were identified as inadequate, and Monte Carlo was validated for accurate peripheral dose prediction. The Monte Carlo-calculated VMAT lung SABR plans show a significant reduction in peripheral dose for 10X-FFF plans compared to the standard 6X plans, while no significant reduction was shown when compared to 10X. This reduction, combined with shorter treatment time, makes 10X-FFF beams the optimal choice for superior VMAT lung SABR treatments.

  7. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  8. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
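
    The simulated-annealing option mentioned above can be illustrated with a generic skeleton; the haplotyping cost function and move set from the paper are not reproduced here, so the toy usage below minimizes a simple quadratic instead:

```python
import math
import random

def simulated_annealing(initial_state, neighbor, cost,
                        t_start=1.0, t_end=1e-3, alpha=0.95, sweeps=100):
    """Generic simulated-annealing skeleton (illustrative; the genetics-specific
    objective and proposal moves used for haplotype reconstruction are not shown)."""
    state, state_cost = initial_state, cost(initial_state)
    best, best_cost = state, state_cost
    t = t_start
    while t > t_end:
        for _ in range(sweeps):
            cand = neighbor(state)
            delta = cost(cand) - state_cost
            # Always accept downhill moves; accept uphill moves with Boltzmann probability.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                state, state_cost = cand, state_cost + delta
                if state_cost < best_cost:
                    best, best_cost = state, state_cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize a 1-D quadratic by random local steps.
result = simulated_annealing(5.0,
                             neighbor=lambda x: x + random.uniform(-0.5, 0.5),
                             cost=lambda x: (x - 2.0) ** 2)
print(result)
```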

  9. Wind-tunnel/flight correlation study of aerodynamic characteristics of a large flexible supersonic cruise airplane (XB-70-1) 2: Extrapolation of wind-tunnel data to full-scale conditions

    NASA Technical Reports Server (NTRS)

    Peterson, J. B., Jr.; Mann, M. J.; Sorrells, R. B., III; Sawyer, W. C.; Fuller, D. E.

    1980-01-01

    The results of calculations necessary to extrapolate performance data on an XB-70-1 wind tunnel model to full scale at Mach numbers from 0.76 to 2.53 are presented. The extrapolation was part of a joint program to evaluate performance prediction techniques for large flexible supersonic airplanes similar to a supersonic transport. The extrapolation procedure included: interpolation of the wind tunnel data at the specific conditions of the flight test points; determination of the drag increments to be applied to the wind tunnel data, such as spillage drag, boundary layer trip drag, and skin friction increments; and estimates of the drag items not represented on the wind tunnel model, such as bypass doors, roughness, protuberances, and leakage drag. In addition, estimates of the effects of flexibility of the airplane were determined.

  10. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  11. A theoretical investigation of mixing thermodynamics, age-hardening potential, and electronic structure of ternary M11-xM2xB2 alloys with AlB2 type structure

    NASA Astrophysics Data System (ADS)

    Alling, B.; Högberg, H.; Armiento, R.; Rosen, J.; Hultman, L.

    2015-05-01

    Transition metal diborides are ceramic materials with potential applications as hard protective thin films and electrical contact materials. We investigate the possibility to obtain age hardening through isostructural clustering, including spinodal decomposition, or ordering-induced precipitation in ternary diboride alloys. By means of first-principles mixing thermodynamics calculations, 45 ternary M11-xM2xB2 alloys comprising MiB2 (Mi = Mg, Al, Sc, Y, Ti, Zr, Hf, V, Nb, Ta) with AlB2 type structure are studied. In particular Al1-xTixB2 is found to be of interest for coherent isostructural decomposition with a strong driving force for phase separation, while having almost concentration independent a and c lattice parameters. The results are explained by revealing the nature of the electronic structure in these alloys, and in particular, the origin of the pseudogap at EF in TiB2, ZrB2, and HfB2.

  12. Effects of the substitution of P2O5 by B2O3 on the structure and dielectric properties in (90-x) P2O5-xB2O3-10Fe2O3 glasses.

    PubMed

    Sdiri, N; Elhouichet, H; Dhaou, H; Mokhtar, F

    2014-01-01

    90%[xB2O3 (1-x)P2O5] 10%Fe2O3 glass systems, where x = 0, 5, 10, 15, and 20 mol%, were prepared via a melt quenching technique. The structure of the glass is investigated at room temperature by Raman and EPR spectroscopy. Raman studies have been performed on these glasses to examine the distribution of different borate and phosphate structural groups. We have noted an increase in the coordination number of the boron atoms from 3 to 4, i.e., the conversion of BO3 triangular structural units into BO4 tetrahedra. The samples have also been investigated by means of electron paramagnetic resonance (EPR). The results obtained from the geff = 4.28 EPR line are typical of iron(III) occupying substitutional sites. Moreover, the variation with frequency at room temperature of the dielectric quantities ε'(ω), ε″(ω), the imaginary part of the electrical modulus M*(ω), and the loss tanδ shows a decrease in relaxation intensity with an increase in the concentration of B2O3. In the present work, we have found a weak extinction index for our new glass. PMID:23995605

  13. A theoretical investigation of mixing thermodynamics, age-hardening potential, and electronic structure of ternary M11–xM2xB2 alloys with AlB2 type structure

    PubMed Central

    Alling, B.; Högberg, H.; Armiento, R.; Rosen, J.; Hultman, L.

    2015-01-01

    Transition metal diborides are ceramic materials with potential applications as hard protective thin films and electrical contact materials. We investigate the possibility to obtain age hardening through isostructural clustering, including spinodal decomposition, or ordering-induced precipitation in ternary diboride alloys. By means of first-principles mixing thermodynamics calculations, 45 ternary M11–xM2xB2 alloys comprising MiB2 (Mi = Mg, Al, Sc, Y, Ti, Zr, Hf, V, Nb, Ta) with AlB2 type structure are studied. In particular Al1–xTixB2 is found to be of interest for coherent isostructural decomposition with a strong driving force for phase separation, while having almost concentration independent a and c lattice parameters. The results are explained by revealing the nature of the electronic structure in these alloys, and in particular, the origin of the pseudogap at EF in TiB2, ZrB2, and HfB2. PMID:25970763

  14. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  15. Synthesis, crystal structure and properties of Mg3B36Si9C and related rare earth compounds RE3-xB36Si9C (RE=Y, Gd-Lu)

    NASA Astrophysics Data System (ADS)

    Ludwig, Thilo; Pediaditakis, Alexis; Sagawe, Vanessa; Hillebrecht, Harald

    2013-08-01

    We report on the synthesis and characterisation of Mg3B36Si9C. Black single crystals of hexagonal shape were obtained from the elements at 1600 °C in h-BN crucibles welded in Ta ampoules. The crystal structure (space group R-3m, a=10.0793(13) Å, c=16.372(3) Å, 660 refl., 51 param., R1(F)=0.019; wR2(F2)=0.051) is characterized by a Kagome net of B12 icosahedra, ethane-like Si8 units, and disordered SiC dumbbells. Vibrational spectra show typical features of boron-rich borides and Zintl phases. Mg3B36Si9C is stable against HF/HNO3 and conc. NaOH. The micro-hardness is 17.0 GPa (Vickers) and 14.5 GPa (Knoop), respectively. According to simple electron counting rules, Mg3B36Si9C is an electron-precise compound. Band structure calculations reveal a band gap of 1.0 eV, in agreement with the black colour. Interatomic distances obtained from the refinement of the X-ray data are biased by the disorder of the SiC dumbbell; the most reliable structural parameters were obtained by relaxation calculations. Composition and carbon content were confirmed by WDX measurements. The small but significant carbon content is necessary for structural reasons and is frequently caused by contamination. The rare earth compounds RE3-xB36Si9C (RE=Y, Dy-Lu) are isotypic. Single crystals were grown from a silicon melt and their structures refined. The partial occupation of the RE sites fits the requirements of an electron-precise composition. According to the displacement parameters, a relaxation should be applied to obtain correct structural parameters.

  16. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.
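
    For context on the problem class the stability analysis concerns (the sketch below is neither the Bareiss algorithm nor the paper's analysis, just a Levinson-type solve as exposed by SciPy), a symmetric positive definite Toeplitz system can be solved in O(n²) and checked against a dense solver:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# First column of a symmetric positive definite Toeplitz matrix (illustrative values).
c = np.array([4.0, 1.0, 0.5, 0.25])
b = np.array([1.0, 2.0, 3.0, 4.0])

x_levinson = solve_toeplitz(c, b)          # O(n^2) Levinson-type recursion
x_dense = np.linalg.solve(toeplitz(c), b)  # O(n^3) dense reference solution

print(np.allclose(x_levinson, x_dense))    # expected: True
```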

  17. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve. A minimal illustration of the continuation idea follows below.
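
    As a minimal illustration of what parameter continuation with a Newton corrector does (a toy scalar equation, not LOCA's C++ interface; all names and values below are invented):

```python
import numpy as np

def newton(f, dfdu, u0, lam, tol=1e-12, max_iter=50):
    # Newton corrector for f(u, lambda) = 0 at fixed lambda.
    u = u0
    for _ in range(max_iter):
        step = f(u, lam) / dfdu(u, lam)
        u -= step
        if abs(step) < tol:
            break
    return u

# Toy problem: f(u, lam) = u**3 - u - lam, tracking the upper solution branch.
f = lambda u, lam: u**3 - u - lam
dfdu = lambda u, lam: 3 * u**2 - 1

u = 1.2                                   # starting point on the branch
for lam in np.linspace(0.2, 2.0, 10):     # natural-parameter continuation in lambda
    u = newton(f, dfdu, u, lam)           # previous solution seeds the next Newton solve
    print(f"lambda = {lam:.2f}  u = {u:.6f}")
```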

  18. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  19. Ionic conductivity of mixed glass former 0.35Na(2)O + 0.65[xB(2)O(3) + (1 - x)P(2)O(5)] glasses.

    PubMed

    Christensen, Randilynn; Olson, Garrett; Martin, Steve W

    2013-12-27

    The mixed glass former effect (MGFE) is defined as a nonlinear and nonadditive change in the ionic conductivity with changing glass former fraction at constant modifier composition between two binary glass forming compositions. In this study, mixed glass former (MGF) sodium borophosphate glasses, 0.35Na2O + 0.65[xB2O3 + (1 - x)P2O5], 0 ≤ x ≤ 1, have been prepared, and their sodium ionic conductivity has been studied. The ionic conductivity exhibits a strong, positive MGFE that is caused by a corresponding strongly negative nonlinear, nonadditive change in the conductivity activation energy with changing glass former content, x. We describe a successful model of the MGFE in the conductivity activation energy in terms of the underlying short-range order (SRO) phosphate and borate glass former structures present in these glasses. To do this, we have developed a modified Anderson-Stuart (A-S) model to explain the decrease in the activation energy in terms of the atomic level composition dependence (x) of the borate and phosphate SRO structural groups, the Na(+) ion concentration, and the Na(+) mobility. In our revision of the A-S model, we carefully improve the treatment of the cation jump distance and incorporate an effective Madelung constant to account for many-body coulomb potential effects. Using our model, we are able to accurately reproduce the composition dependence of the activation energy with a single adjustable parameter, the effective Madelung constant, which changes systematically with composition, x, and varies by no more than 10% from values typical of oxide ceramics. Our model suggests that the decreasing coulombic binding energies that govern the concentration of the mobile cations are sufficiently strong in these glasses to overcome the increasing volumetric strain energies (mobility) caused by strongly increasing glass-transition temperatures combined with strongly decreasing molar volumes of these glasses. The dependence of the coulombic binding
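
    The activation-energy model referred to above builds on the Anderson-Stuart picture. As background only, a schematic of the commonly quoted classic decomposition is given below; the authors' modified expressions (refined jump distance, effective Madelung constant) are not reproduced in the abstract and are not shown here:

```latex
% Schematic of the classic Anderson-Stuart decomposition of the conductivity
% activation energy (background sketch, not the paper's modified model).
\begin{align*}
  E_{A} &= \Delta E_{b} + \Delta E_{s} \\
  \Delta E_{b} &\propto \frac{z\,z_{O}\,e^{2}}{\gamma\,(r + r_{O})}
      && \text{(coulombic binding of Na$^{+}$ to its charge-compensating site)} \\
  \Delta E_{s} &= 4\pi\,G\,r_{D}\,(r - r_{D})^{2}
      && \text{(strain of dilating a doorway of radius $r_{D}$ to pass an ion of radius $r$)}
\end{align*}
```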

  20. A Review on the Use of Grid-Based Boltzmann Equation Solvers for Dose Calculation in External Photon Beam Treatment Planning

    PubMed Central

    Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.

    2013-01-01

    Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the inability to fully account for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have shown that D-LBTE solvers are able to produce dose calculation accuracy comparable to that of Monte Carlo methods at a speed good enough for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. It summarizes and discusses dosimetric validations of D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances, and also the clinical impact on various diseases of converting dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294

  1. Dosimetric comparison of a 6-MV flattening-filter and a flattening-filter-free beam for lung stereotactic ablative radiotherapy treatment

    NASA Astrophysics Data System (ADS)

    Kim, Yon-Lae; Chung, Jin-Beom; Kim, Jae-Sung; Lee, Jeong-Woo; Kim, Jin-Young; Kang, Sang-Won; Suh, Tae-Suk

    2015-11-01

    The purpose of this study was to test the feasibility of clinical usage of a flattening-filter-free (FFF) beam for treatment with lung stereotactic ablative radiotherapy (SABR). Ten patients were treated with SABR and a 6-MV FFF beam for this study. All plans using volumetric modulated arc therapy (VMAT) were optimized in the Eclipse treatment planning system (TPS) by using the Acuros XB (AXB) dose calculation algorithm and were delivered by using a Varian TrueBeam™ linear accelerator equipped with a high-definition (HD) multi-leaf collimator. The prescription dose used was 48 Gy in 4 fractions. In order to compare with a plan using a conventional 6-MV flattening-filter (FF) beam, the SABR plan was recalculated under the condition of the same beam settings used in the plan employing the 6-MV FFF beam. All dose distributions were calculated by using Acuros XB (AXB, version 11) and a 2.5-mm isotropic dose grid. The cumulative dose-volume histograms (DVHs) for the planning target volume (PTV) and all organs at risk (OARs) were analyzed. Technical parameters, such as total monitor units (MUs) and the delivery time, were also recorded and assessed. All plans for target volumes met the planning objectives for the PTV (i.e., V95% > 95%) and the maximum dose (i.e., Dmax < 110%), revealing adequate target coverage for the 6-MV FF and FFF beams. Differences in the DVHs for target volumes (PTV and clinical target volume (CTV)) and OARs on the lung SABR plans from the interchange of the treatment beams were small, but there was a marked reduction (52.97%) in the treatment delivery time. The SABR plan with the FFF beam required a larger number of MUs than the plan with the FF beam, and the mean difference in MUs was 4.65%. This study demonstrated that the use of the FFF beam for the lung SABR plan provided better treatment efficiency relative to the 6-MV FF beam. This strategy should be particularly beneficial for high dose conformity to the lung and decreased intra-fraction movements because of

  2. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the recurrence-equation representation of an algorithm. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  3. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  4. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
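
    As a small, generic illustration of the order-of-accuracy idea (standard central-difference stencils on a sine wave, not the specific single-step schemes of the report):

```python
import numpy as np

def derivative_error(n_points, order):
    # Approximate d/dx sin(x) on a periodic grid and return the maximum error.
    x, h = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False, retstep=True)
    f = np.sin(x)
    if order == 2:
        df = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
    else:  # fourth-order central stencil
        df = (-np.roll(f, -2) + 8 * np.roll(f, -1)
              - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * h)
    return np.max(np.abs(df - np.cos(x)))

for n in (16, 32, 64):
    print(n, derivative_error(n, 2), derivative_error(n, 4))
# Halving h cuts the error by ~4x for the 2nd-order stencil and ~16x for the 4th-order one.
```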

  5. SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE

    SciTech Connect

    Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y

    2015-06-15

    Purpose: Stereotactic body radiotherapy (SBRT) combining transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, the anisotropic analytical algorithm (AAA), and the Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using AAA and the AXB algorithm, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc-MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom, which included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. Furthermore, compared with the MC calculations, the AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithm underestimated the maximum doses in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can yield an approximately 10% increase in tumor dose without an increase in dose to normal tissue. Conclusion: The MC method demonstrated a larger increase in the dose in the Lipiodol region than the AAA and AXB algorithm. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.

  6. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  7. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  8. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
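
    A minimal sketch of the basic veto algorithm being analyzed may help; the emission density f(t) and the constant overestimate below are invented toy choices, not the paper's benchmarks:

```python
import math
import random

def sample_emission(f, g_const, t_start, t_max):
    """Sample the next emission scale t distributed as f(t) * exp(-int_{t_start}^{t} f),
    using a constant overestimate g_const >= f(t) on [t_start, t_max]."""
    t = t_start
    while True:
        # Propose from the overestimate's Sudakov factor exp(-g_const * (t - t_prev)).
        t += -math.log(random.random()) / g_const
        if t > t_max:
            return None                       # no emission below the cutoff scale
        if random.random() < f(t) / g_const:  # accept with probability f/g, else veto and continue
            return t

# Toy density (hypothetical): f(t) = 1 / (1 + t), overestimated by g = 1.
samples = [sample_emission(lambda t: 1.0 / (1.0 + t), 1.0, 0.0, 10.0) for _ in range(5)]
print(samples)
```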

  9. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
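
    A toy version of the shift-and-mask search described above could look like the sketch below; the key values, parameter ranges, and function name are illustrative and not the NASA implementation:

```python
def synthesize_shift_mask(keys, max_shift=16, mask_bits=8):
    """Search for a (shift, mask) pair such that (key >> shift) & mask
    is unique for every key -- i.e., a collision-free direct index."""
    mask = (1 << mask_bits) - 1
    for shift in range(max_shift):
        codes = [(k >> shift) & mask for k in keys]
        if len(set(codes)) == len(keys):
            return shift, mask, dict(zip(codes, keys))
    return None

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3]
result = synthesize_shift_mask(keys)
print(result)  # membership testing then needs only one shift, one mask, and one table lookup
```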

  10. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  11. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  12. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
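
    A compact sketch of the mechanism described (a standard firefly update with a logistic chaotic map driving the absorption coefficient; all parameter values are illustrative, and the paper's 12 chaotic maps and benchmark suite are not reproduced):

```python
import numpy as np

def chaotic_firefly(obj, dim=2, n_fireflies=20, iters=200,
                    alpha=0.2, beta0=1.0, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_fireflies, dim))
    cost = np.apply_along_axis(obj, 1, x)
    gamma = 0.7                                  # chaotic variable in (0, 1)
    for _ in range(iters):
        gamma = 4.0 * gamma * (1.0 - gamma)      # logistic map replaces a fixed absorption gamma
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:            # move firefly i toward brighter firefly j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    cost[i] = obj(x[i])
    best = np.argmin(cost)
    return x[best], cost[best]

# Toy usage: minimize the sphere function.
print(chaotic_firefly(lambda v: float(np.sum(v ** 2))))
```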

  13. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  14. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  15. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  16. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
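
    One classical iteration of this kind (a sketch of a well-known approach, not necessarily the exact algorithm in the article) exploits the fixed point of x ← √(√(a·x)): if x = (a·x)^(1/4), then x³ = a, so repeated square-root presses converge to the cube root:

```python
from math import sqrt

def cube_root(a, iterations=25):
    """Approximate a**(1/3) for a > 0 using only square roots:
    iterate x <- sqrt(sqrt(a * x)); its fixed point satisfies x**3 = a."""
    x = 1.0
    for _ in range(iterations):
        x = sqrt(sqrt(a * x))   # the error exponent is quartered each iteration
    return x

print(cube_root(27.0))   # ~3.0
print(cube_root(10.0))   # ~2.154
```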

  17. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization. PMID:24967425

  18. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  19. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  20. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  1. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  2. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  3. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
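
    As an illustration of the proportional redistribution described above, the sketch below linearly stretches or compresses a per-period resource list onto a new schedule length while conserving the total; the data layout is a simplified stand-in, not the actual PACE card format.

    ```python
    # Minimal sketch of proportional schedule rescaling (hypothetical data layout,
    # not the PACE card format): each cost element holds a resource value per
    # baseline period; the new schedule linearly stretches or compresses it.

    def rescale_distribution(baseline, new_length):
        """Redistribute a per-period resource list onto new_length periods,
        conserving the total by linear expansion or contraction."""
        old_length = len(baseline)
        scale = old_length / new_length              # old periods spanned by one new period
        new = [0.0] * new_length
        for j in range(new_length):
            start, end = j * scale, (j + 1) * scale  # span in old-period units
            k = int(start)
            while k < end and k < old_length:
                overlap = min(end, k + 1) - max(start, k)
                new[j] += baseline[k] * overlap
                k += 1
        return new

    if __name__ == "__main__":
        baseline = [10.0, 20.0, 30.0, 40.0]          # cost per baseline period
        stretched = rescale_distribution(baseline, 6)
        print(stretched, sum(stretched))             # total stays 100.0
    ```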

  4. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was with 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
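
    The report's specific filter gains are not reproduced here; the sketch below only shows the general shape of a recursive differentiator, smoothing finite differences of the sensor samples with an assumed gain that trades variance reduction against rise time.

    ```python
    # Hedged sketch of a recursive differentiator of the kind described above:
    # a first-order recursive filter applied to finite differences of noisy
    # position samples. The gain 'alpha' is an assumed illustrative value; the
    # quoted VRF and rise-time figures depend on the report's actual gains.

    def recursive_rate(samples, dt, alpha=0.2):
        """Estimate rate from noisy position samples with a recursive filter."""
        rate = 0.0
        prev = samples[0]
        rates = []
        for x in samples[1:]:
            raw = (x - prev) / dt                        # finite-difference derivative
            rate = (1.0 - alpha) * rate + alpha * raw    # recursive smoothing
            rates.append(rate)
            prev = x
        return rates
    ```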

  5. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation to the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
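
    A minimal sketch of the mixing step described above, with placeholder emissivity constants rather than the calibrated Bootstrap coefficients: the effective 6 GHz emissivity is a concentration-weighted mix of ice and water emissivities, the surface temperature follows from the 6 GHz brightness temperature, and the 18/37 GHz brightness temperatures are then converted to emissivities.

    ```python
    # Illustrative sketch of the emissivity-mixing step (constants are placeholder
    # values, not the calibrated coefficients of the Bootstrap algorithm).

    E_ICE_6GHZ, E_WATER_6GHZ = 0.92, 0.60   # assumed 6 GHz emissivities

    def effective_emissivity(ice_conc):
        """Mix ice and open-water emissivities by ice concentration (0..1)."""
        return ice_conc * E_ICE_6GHZ + (1.0 - ice_conc) * E_WATER_6GHZ

    def surface_temperature(tb_6ghz, ice_conc):
        """Estimate surface temperature from the 6 GHz brightness temperature."""
        return tb_6ghz / effective_emissivity(ice_conc)

    def channel_emissivity(tb, ts):
        """Convert an 18 or 37 GHz brightness temperature to an emissivity."""
        return tb / ts
    ```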

  6. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
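
    The abstract does not name the particular Maximum Entropy formulation that was coded; the sketch below uses one common choice, Burg's method, which fits an autoregressive model and evaluates the corresponding maximum-entropy spectrum (in NumPy rather than the original FORTRAN 77).

    ```python
    # Hedged sketch of Maximum Entropy spectral estimation via Burg's method.
    import numpy as np

    def burg_ar(x, order):
        """Fit an AR(order) model with Burg's method; return (coeffs, error power)."""
        x = np.asarray(x, dtype=float)
        a = np.zeros(order)
        e = np.dot(x, x) / len(x)                  # prediction error power
        f, b = x[1:].copy(), x[:-1].copy()         # forward / backward errors
        for m in range(order):
            k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coeff
            a_prev = a[:m].copy()
            a[:m] = a_prev + k * a_prev[::-1]      # Levinson-style coefficient update
            a[m] = k
            e *= (1.0 - k * k)
            f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
        return a, e

    def mem_psd(a, e, nfreq=256):
        """Maximum-entropy PSD: P(f) = e / |1 + sum_k a_k exp(-2*pi*i*f*k)|^2."""
        freqs = np.linspace(0.0, 0.5, nfreq)       # cycles per sample
        psd = np.empty(nfreq)
        for i, fr in enumerate(freqs):
            z = np.exp(-2j * np.pi * fr * np.arange(1, len(a) + 1))
            psd[i] = e / abs(1.0 + np.dot(a, z)) ** 2
        return freqs, psd

    if __name__ == "__main__":
        t = np.arange(1024)
        sig = np.sin(2 * np.pi * 0.12 * t) + 0.5 * np.sin(2 * np.pi * 0.13 * t)
        sig += 0.1 * np.random.randn(len(t))
        coeffs, err = burg_ar(sig, order=20)
        freqs, psd = mem_psd(coeffs, err)
        print(freqs[np.argmax(psd)])               # peak near 0.12 cycles/sample
    ```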

  7. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  8. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1 2 -approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1 2 -approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
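
    For context, the classic sorting-based greedy 1/2-approximation reads as below; the paper's contribution is a different 1/2-approximation that avoids the global sort and scales better on multithreaded machines, so this sketch illustrates only the baseline idea.

    ```python
    # Classic greedy 1/2-approximation for weighted matching (baseline sketch,
    # not the authors' multithreaded algorithm): take edges heaviest-first while
    # both endpoints are still free.

    def greedy_matching(edges):
        """edges: list of (weight, u, v). Returns a matching as a list of (u, v)."""
        matched = set()
        matching = []
        for w, u, v in sorted(edges, key=lambda e: e[0], reverse=True):
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

    if __name__ == "__main__":
        edges = [(5.0, "a", "b"), (4.0, "b", "c"), (3.0, "c", "d")]
        print(greedy_matching(edges))   # [('a', 'b'), ('c', 'd')]
    ```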

  9. Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams

    NASA Astrophysics Data System (ADS)

    Vassiliev, Oleg N.; Wareing, Todd A.; McGhee, John; Failla, Gregory; Salehpour, Mohammad R.; Mourtada, Firas

    2010-02-01

    A new grid-based Boltzmann equation solver, Acuros™, was developed specifically for performing accurate and rapid radiotherapy dose calculations. In this study we benchmarked its performance against Monte Carlo for 6 and 18 MV photon beams in heterogeneous media. Acuros solves the coupled Boltzmann transport equations for neutral and charged particles on a locally adaptive Cartesian grid. The Acuros solver is an optimized rewrite of the general purpose Attila© software, and for comparable accuracy levels, it is roughly an order of magnitude faster than Attila. Comparisons were made between Monte Carlo (EGSnrc) and Acuros for 6 and 18 MV photon beams impinging on a slab phantom comprising tissue, bone and lung materials. To provide an accurate reference solution, Monte Carlo simulations were run to a tight statistical uncertainty (σ ≈ 0.1%) and fine resolution (1-2 mm). Acuros results were output on a 2 mm cubic voxel grid encompassing the entire phantom. Comparisons were also made for a breast treatment plan on an anthropomorphic phantom. For the slab phantom in regions where the dose exceeded 10% of the maximum dose, agreement between Acuros and Monte Carlo was within 2% of the local dose or 1 mm distance to agreement. For the breast case, agreement was within 2% of local dose or 2 mm distance to agreement in 99.9% of voxels where the dose exceeded 10% of the prescription dose. Elsewhere, in low dose regions, agreement for all cases was within 1% of the maximum dose. Since all Acuros calculations required less than 5 min on a dual-core two-processor workstation, it is efficient enough for routine clinical use. Additionally, since Acuros calculation times are only weakly dependent on the number of beams, Acuros may ideally be suited to arc therapies, where current clinical algorithms may incur long calculation times.

  10. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.

  11. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  12. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparison of results to the binary case is provided. PMID:10021767

  13. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  14. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  15. Synthesis, crystal structure investigation and magnetism of the complex metal-rich boride series Crx(Rh1-yRuy)7-xB3 (x=0.88-1; y=0-1) with Th7Fe3-type structure

    NASA Astrophysics Data System (ADS)

    Misse, Patrick R. N.; Mbarki, Mohammed; Fokwa, Boniface P. T.

    2012-08-01

    Powder samples and single crystals of the new complex boride series Crx(Rh1-yRuy)7-xB3 (x=0.88-1; y=0-1) have been synthesized by arc-melting the elements under purified argon atmosphere on a water-cooled copper crucible. The products, which have metallic luster, were structurally characterized by single-crystal and powder X-ray diffraction as well as EDX measurements. Within the whole solid solution range the hexagonal Th7Fe3 structure type (space group P63mc, no. 186, Z=2) was identified. Single-crystal structure refinement results indicate the presence of chromium at two sites (6c and 2b) of the available three metal Wyckoff sites, with a pronounced preference for the 6c site. An unexpected Rh/Ru site preference was found in the Ru-rich region only, leading to two different magnetic behaviors in the solid solution: The Rh-rich region shows a temperature-independent (Pauli) paramagnetism whereas an additional temperature-dependent paramagnetic component is found in the Ru-rich region.

  16. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
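
    A minimal sketch of the hybrid idea, combining a plain genetic algorithm with a hill-climbing local search applied to each offspring; the toy objective, encoding, and operators are illustrative assumptions, not the geometric model matching application from the presentation.

    ```python
    # Hedged sketch of a hybrid genetic algorithm: GA offspring are refined by a
    # simple single-bit-flip hill climb before re-entering the population.
    import random

    def fitness(bits):                     # toy objective: maximize the number of 1s
        return sum(bits)

    def local_search(bits):
        """One pass of single-bit-flip hill climbing."""
        best = bits[:]
        for i in range(len(best)):
            trial = best[:]
            trial[i] ^= 1
            if fitness(trial) > fitness(best):
                best = trial
        return best

    def hybrid_ga(n_bits=20, pop_size=30, generations=50, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            new_pop = []
            for _ in range(pop_size):
                p1, p2 = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]                             # one-point crossover
                child = [b ^ (random.random() < p_mut) for b in child]  # mutation
                new_pop.append(local_search(child))                     # hybrid step
            pop = new_pop
        return max(pop, key=fitness)

    if __name__ == "__main__":
        print(hybrid_ga())
    ```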

  17. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  18. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
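
    As a rough illustration of the I&Q approach, the sketch below treats the measured cavity field as a complex number and applies a proportional-integral correction to the drive; the gains and the one-line cavity model are placeholders, not the Jefferson Lab 12 GeV parameters.

    ```python
    # Illustrative I&Q control sketch: the field sample is complex (I + jQ), the
    # error against a complex setpoint gets a PI correction applied to the drive.
    # Gains and the toy cavity model are assumptions for demonstration only.

    def iq_pi_loop(measure, drive, setpoint, kp=0.5, ki=0.05, steps=200):
        """measure(drive) -> complex field sample; returns the final drive signal."""
        integral = 0.0 + 0.0j
        for _ in range(steps):
            error = setpoint - measure(drive)       # complex (I and Q) error
            integral += error
            drive += kp * error + ki * integral     # PI correction applied in I/Q
        return drive

    if __name__ == "__main__":
        cavity_gain = 0.8 * complex(0.9, 0.1)       # toy cavity: gain and phase rotation
        final_drive = iq_pi_loop(lambda d: cavity_gain * d, 0j, setpoint=1 + 0j)
        print(abs((1 + 0j) - cavity_gain * final_drive))   # residual field error, near 0
    ```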

  19. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
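
    One of the simplest barriers in the linear-depth family is a centralized sense-reversing barrier; the sketch below shows it with Python threads for clarity only (the paper's experiments ran on the Flex/32). Each thread calls wait() at the end of every phase and is released when the last arrival flips the sense.

    ```python
    # Minimal sense-reversing barrier sketch (illustrative, not the paper's code).
    import threading

    class SenseBarrier:
        def __init__(self, n):
            self.n = n
            self.count = 0
            self.sense = False
            self.cond = threading.Condition()

        def wait(self):
            with self.cond:
                my_sense = not self.sense
                self.count += 1
                if self.count == self.n:            # last arrival releases everyone
                    self.count = 0
                    self.sense = my_sense
                    self.cond.notify_all()
                else:
                    while self.sense != my_sense:   # wait for the sense to flip
                        self.cond.wait()

    if __name__ == "__main__":
        bar = SenseBarrier(4)

        def worker():
            for _ in range(3):                      # three synchronized phases
                bar.wait()

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("all phases completed")
    ```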

  20. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
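
    A small sketch of multiplicative weight updates as referenced above: each expert's weight is multiplied by (1 + eta * payoff) every round, and the normalized weights are the mixed strategy played. The payoff values here are placeholders.

    ```python
    # Minimal multiplicative weight updates (MWUA) sketch with illustrative payoffs.

    def mwua(payoffs, eta=0.1):
        """payoffs: list of rounds, each a list of per-expert payoffs in [-1, 1].
        Returns the sequence of mixed strategies played."""
        n = len(payoffs[0])
        weights = [1.0] * n
        history = []
        for round_payoffs in payoffs:
            total = sum(weights)
            history.append([w / total for w in weights])    # strategy this round
            weights = [w * (1.0 + eta * p) for w, p in zip(weights, round_payoffs)]
        return history

    if __name__ == "__main__":
        rounds = [[1.0, -0.5, 0.0]] * 20          # expert 0 keeps winning
        print(mwua(rounds)[-1])                   # weight shifts toward expert 0
    ```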

  1. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.

  2. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ0 + ɛℋ1 + ... in a small parameter ɛ, normalization constructs a map which converts the principal part ℋ0 into an integral of the transformed system — relegation does the same for an arbitrary function G. If the Lie derivative induced by G is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.

  3. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA performs best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  4. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  5. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  6. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.

  7. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  8. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  9. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.

  10. Algorithms for builder guidelines

    SciTech Connect

    Balcomb, J.D.; Lekov, A.B.

    1989-06-01

    The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar load ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.

  11. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  12. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  13. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  14. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id, which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine level phenomena such as the effect that global communication time may have on the computation are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, whose characteristics are computational parallelism and data dependences between the butterfly shuffles.

  15. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology based upon the human visual characteristics for appreciating the image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - such as iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and a non-uniform interpolation outperformed the others for an ideal situation, while restoration-based methods appear closer to the HR image in a real world case where any prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation CIEDE2000 was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.

  16. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  17. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  18. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  19. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.

  20. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  1. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots of algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
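
    The half-interval search mentioned above is the bisection method; a compact sketch (in Python rather than the article's original listing) is:

    ```python
    # Bisection (half-interval search): the root is bracketed by points where f
    # changes sign, and the bracket is halved until it is smaller than the tolerance.

    def half_interval_root(f, lo, hi, tol=1e-10):
        """Find a root of f in [lo, hi], assuming f(lo) and f(hi) have opposite signs."""
        if f(lo) * f(hi) > 0:
            raise ValueError("f(lo) and f(hi) must bracket a root")
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0:
                hi = mid          # root lies in the left half
            else:
                lo = mid          # root lies in the right half
        return (lo + hi) / 2.0

    if __name__ == "__main__":
        print(half_interval_root(lambda x: x * x - 2.0, 0.0, 2.0))  # ~1.41421356
    ```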

  2. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  3. Efficient multicomponent fuel algorithm

    NASA Astrophysics Data System (ADS)

    Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.

    2003-03-01

    We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.

  4. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f_i(x) - t <= 0 for all i is examined. An active set strategy is designed that classifies the functions as active, semi-active, and non-active. This technique helps prevent the zigzagging which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
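
    A hedged illustration of the epigraph reformulation above (not the article's active-set/trust-region method): minimize t subject to f_i(x) - t <= 0, here handed to SciPy's general-purpose SLSQP solver for a small three-function example.

    ```python
    # Minimax via the epigraph trick, solved with scipy's SLSQP for illustration.
    import numpy as np
    from scipy.optimize import minimize

    fs = [lambda x: (x[0] - 1.0) ** 2,
          lambda x: (x[0] + 1.0) ** 2,
          lambda x: x[0] ** 2 + 0.5]

    def objective(z):                 # z = (x, t); minimize t
        return z[-1]

    # inequality constraints t - f_i(x) >= 0, one per function
    constraints = [{"type": "ineq", "fun": (lambda z, f=f: z[-1] - f(z[:-1]))} for f in fs]

    res = minimize(objective, x0=np.array([0.3, 5.0]), method="SLSQP",
                   constraints=constraints)
    print(res.x)   # x near 0, t near 1.0 (the minimax value of the three functions)
    ```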

  5. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057

  6. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive Algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
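
    A hedged sketch of one of the stable perceptron variants named above, the pocket algorithm: ordinary perceptron updates are made, but the best weight vector seen so far (the "pocket") is retained and returned; the data layout and stopping rule here are illustrative assumptions.

    ```python
    # Pocket algorithm sketch: perceptron updates plus a "pocket" holding the best
    # weights observed so far on the training set.
    import random

    def pocket_perceptron(samples, labels, iterations=200, lr=1.0):
        """samples: list of feature lists; labels: +1/-1. Returns (weights, bias)."""
        d = len(samples[0])
        w, b = [0.0] * d, 0.0
        pocket_w, pocket_b = w[:], b
        best_correct = -1
        for _ in range(iterations):
            i = random.randrange(len(samples))
            x, y = samples[i], labels[i]
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:   # misclassified
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
            correct = sum(1 for xs, ys in zip(samples, labels)
                          if ys * (sum(wi * xi for wi, xi in zip(w, xs)) + b) > 0)
            if correct > best_correct:                 # keep the best weights seen
                best_correct = correct
                pocket_w, pocket_b = w[:], b
        return pocket_w, pocket_b
    ```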

  7. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.

  8. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  9. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2003-12-01

    Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  10. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2004-01-01

    Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  11. SU-E-J-58: Dosimetric Verification of Metal Artifact Effects: Comparison of Dose Distributions Affected by Patient Teeth and Implants

    SciTech Connect

    Lee, M; Kang, S; Lee, S; Suh, T; Lee, J; Park, J; Park, H; Lee, B

    2014-06-01

    Purpose: Implant-supported dentures seem particularly appropriate for the predicament of becoming edentulous, and cancer patients are no exception. As the number of people of different ages having dental implants has increased, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of the metal (streak and dark) artifacts, and to evaluate the dosimetric effect caused by dental implants in CT images, using a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. By applying various clinical cases, the phantom was made to closely resemble a human. The developed phantom supports verification in two configurations: (i) closed mouth and (ii) opened mouth. RapidArc plans of 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions is prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA), and the progressive resolution optimizer were used in dose optimization and calculation. Results: In both the closed- and opened-mouth phantoms, because dark artifacts formed extensively around the metal implants, dose variation was relatively higher than that of streak artifacts. When the PTV was delineated on the dark regions or large streak artifact regions, a maximum dose error of 7.8% and an average difference of 3.2% were observed. The averaged minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study showed that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity portion when compared against the measurements.

  12. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    SciTech Connect

    Park, J; Park, H; Lee, J; Kang, S; Lee, M; Suh, T; Lee, B

    2014-06-01

    Purpose: Dosimetric effects and discrepancies according to the rectum definition method, and dose perturbation by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: When an inflated ERB having an average diameter of 4.5 cm and an air volume of 100 cc is used for the patient, Rwall doses were predicted by pencil-beam convolution (PBC), the anisotropic analytic algorithm (AAA), and Acuros XB (AXB) with the material assignment function. The errors of dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a featured rectal phantom allowing insertion of rolled-up gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified with reconstructed predicted rectal wall dose maps. Dose errors and their extent at various dose levels were evaluated with estimated rectal toxicity. Results: While AXB showed an insignificant difference in target dose coverage, Rwall doses were underestimated by up to 20% in dose optimization for the Rwhole compared with Rwall at all dose ranges except for the maximum dose. When dose optimization for Rwall was applied, the Rwall doses presented dose errors of less than 3% between dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% in PBC. Dose optimization for Rwhole caused dose differences in Rwall, especially at intermediate doses. Conclusion: Dose optimization for Rwall could be suggested for more accurate prediction of rectal wall dose and of the dose perturbation effect by the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  13. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  14. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
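
    A compact ordinary-kriging sketch for scattered 2-D data, using a dense solve without the tapering, FMM, or SYMMLQ accelerations discussed above; the Gaussian covariance model and its parameters are illustrative assumptions.

    ```python
    # Ordinary kriging sketch (dense solve; covariance model and range are assumed).
    import numpy as np

    def gaussian_cov(h, sill=1.0, rng=1.0):
        """Isotropic Gaussian covariance as a function of separation distance h."""
        return sill * np.exp(-(h / rng) ** 2)

    def ordinary_kriging(xy, values, target):
        """Interpolate scattered (x, y) -> value data at a single target point."""
        n = len(xy)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gaussian_cov(d)
        A[n, n] = 0.0                                  # Lagrange-multiplier row/column
        rhs = np.ones(n + 1)
        rhs[:n] = gaussian_cov(np.linalg.norm(xy - target, axis=1))
        w = np.linalg.solve(A, rhs)[:n]                # kriging weights (sum to 1)
        return float(np.dot(w, values))

    if __name__ == "__main__":
        pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        vals = np.array([1.0, 2.0, 3.0, 4.0])
        print(ordinary_kriging(pts, vals, np.array([0.5, 0.5])))   # 2.5 by symmetry
    ```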

  15. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  16. Trial encoding algorithms ensemble.

    PubMed

    Cheng, Lipin Bill; Yeh, Ren Jye

    2013-01-01

    This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing up randomizations and scrambling with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher with the verification key matrices incorporating un-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization. One simplified version can be burst error correcting code. The lossless bit compressor is the Shannon-Fano coding that is less optimal than the later Huffman and Arithmetic coding, but can be conveniently implemented without the use of a tree structure and improvable with bytes concatenation. PMID:27057475
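
    A small sketch of Shannon-Fano coding as referenced above: symbols are sorted by frequency and recursively split into two groups of roughly equal total weight, with '0' appended to the codes on one side and '1' on the other.

    ```python
    # Shannon-Fano coding sketch: sort by frequency, recursively split at the most
    # balanced point, and extend the code words on each side.

    def shannon_fano(freqs):
        """freqs: dict symbol -> count. Returns dict symbol -> bit string."""
        codes = {}

        def split(symbols, prefix):
            if len(symbols) == 1:
                codes[symbols[0][0]] = prefix or "0"
                return
            total = sum(f for _, f in symbols)
            best_cut, best_diff, running = 1, float("inf"), 0
            for i in range(1, len(symbols)):          # pick the most balanced split
                running += symbols[i - 1][1]
                diff = abs(2 * running - total)
                if diff < best_diff:
                    best_cut, best_diff = i, diff
            split(symbols[:best_cut], prefix + "0")
            split(symbols[best_cut:], prefix + "1")

        ordered = sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)
        split(ordered, "")
        return codes

    if __name__ == "__main__":
        print(shannon_fano({"a": 15, "b": 7, "c": 6, "d": 6, "e": 5}))
        # e.g. {'a': '00', 'b': '01', 'c': '10', 'd': '110', 'e': '111'}
    ```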

  17. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
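
    As a rough illustration of wavelet-domain image fusion, the sketch below decomposes two registered images with PyWavelets, averages the approximation bands, and keeps the larger-magnitude detail coefficients. The library, the db2 wavelet, and the max-absolute-detail fusion rule are assumptions for illustration; they are not necessarily the rule used in the report.

```python
import numpy as np
import pywt  # PyWavelets; an assumed dependency for this sketch

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered, same-size images in the wavelet domain:
    average the approximation bands, keep the larger-magnitude detail coefficients."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation: average
    for da, db in zip(ca[1:], cb[1:]):                    # details: max-abs selection
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(64, 64)   # stand-ins for registered multisensor images
b = np.random.rand(64, 64)
print(wavelet_fuse(a, b).shape)
```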

  18. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty and overall imprecision studies with respect to a set of input parameters for a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, CLX, and NOx, as well as variations in the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  19. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Astrophysics Data System (ADS)

    Bahethi, O. P.

    An algorithm to carry out sensitivity, uncertainty and overall imprecision studies with respect to a set of input parameters for a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, CLX, and NOx, as well as variations in the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  20. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work to date includes initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  1. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA) -- an O(Λ^κ(ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  2. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  3. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722

  4. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
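
    The sketch below specializes the general MM recipe described in the abstract to an L1 (lasso) penalty on least squares: the penalty is perturbed to be differentiable and majorized by a quadratic, so each MM update reduces to a ridge-like linear solve. The penalty choice, parameter values, and data are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def mm_lasso(X, y, lam=0.1, eps=1e-6, iters=200):
    """MM iterations for an L1-penalized least-squares fit via a perturbed
    quadratic majorizer of the penalty (a lasso-specific illustration)."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]      # unpenalized start
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(iters):
        D = np.diag(lam / (np.abs(b) + eps))      # quadratic majorizer of the penalty
        b_new = np.linalg.solve(XtX + D, Xty)     # minorize-maximize (here: majorize-minimize) step
        if np.max(np.abs(b_new - b)) < 1e-8:
            b = b_new
            break
        b = b_new
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(np.round(mm_lasso(X, y, lam=5.0), 3))
```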

  5. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a random search and optimization method based on the natural selection and heredity mechanisms of living beings. In recent years, because of its potential for solving complicated problems and its successful applications in industrial projects, the genetic algorithm has received wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model of routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.

  6. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
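
    One standard way to obtain time-to-collision from optical flow is through the divergence of the flow field: for pure approach toward a fronto-parallel surface, div(v) = 2/tau. The sketch below assumes that geometry and a dense flow field; it is not the report's specific MAV landing law.

```python
import numpy as np

def time_to_collision(flow_u, flow_v, dx=1.0, dt=1.0):
    """Estimate time-to-collision from a dense optical-flow field (pixels/frame),
    assuming pure approach toward a fronto-parallel surface, where div(v) = 2/tau."""
    du_dx = np.gradient(flow_u, dx, axis=1)
    dv_dy = np.gradient(flow_v, dx, axis=0)
    div = np.mean(du_dx + dv_dy)          # average divergence over the image patch
    return np.inf if div <= 0 else 2.0 * dt / div

# Synthetic expanding flow centered on the image: tau = 20 frames by construction.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
tau_true = 20.0
u = (xx - w / 2) / tau_true
v = (yy - h / 2) / tau_true
print(time_to_collision(u, v))   # ~20.0
```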

  7. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers that was becoming dated in its time-to-market requirements and as such was in need of performance improvements. To compound problems with their existing system, the assurance of the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for Data Systems Research and Development (DSRD) to analyze the current Citibank credit card offering system and suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing, tightly coupled high performance parallel processing, higher order computer languages such as C, fuzzy matching algorithms applied to very large data files, relational database management systems, and advanced programming techniques.

  8. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  9. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  10. Cascade Error Projection Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning frame work. This frame work can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  11. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and more generally in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
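
    The sketch below is only a simplified illustration of the chop/thin idea, not the authors' published algorithm: heavy particles are "chopped" into equal-weight copies and light particles are "thinned" with probability proportional to their weight. The weight bookkeeping here is crude and does not reproduce the unbiasedness and effective-sample-size guarantees of the real method; the referenced implementations should be used in practice.

```python
import numpy as np

def chopthin_like(weights, ratio_bound=4.0, rng=None):
    """Illustrative chop/thin resampling: returns (surviving indices, new weights)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mean_w = 1.0 / len(w)
    idx, new_w = [], []
    for i, wi in enumerate(w):
        if wi > ratio_bound * mean_w:                  # chop: split heavy particle
            k = int(np.ceil(wi / (ratio_bound * mean_w)))
            idx.extend([i] * k)
            new_w.extend([wi / k] * k)
        elif rng.random() < wi / mean_w:               # thin: keep light particle
            idx.append(i)                              # with prob. proportional to weight
            new_w.append(mean_w)                       # crude weight assignment (illustration only)
        # else: particle dropped
    new_w = np.asarray(new_w)
    return np.asarray(idx), new_w / new_w.sum()

w = np.array([0.5, 0.3, 0.1, 0.05, 0.03, 0.02])
print(chopthin_like(w, ratio_bound=2.0, rng=np.random.default_rng(1)))
```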

  12. CORDIC algorithms in four dimensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc; Hsiao, Shen-Fu

    1990-11-01

    CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm: (x^2 + y^2)^(1/2) or (x^2 - y^2)^(1/2). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations quickly becomes small as the dimension of the space increases. Thus, in spaces of dimension 5 or more there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.

  13. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  14. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.

  15. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principle of general artificial immune algorithm. Experimental results on deceptive function of order 3 show that the proposed hybrid algorithm can get more building blocks (BBs) than the UMDA.

  16. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  17. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
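
    The sketch below is a minimal single-layer forward pass showing the two regimes the paper analyzes: Bernoulli gating of the inputs during training, and the deterministic ensemble-average approximation (scaling by the keep probability) at inference. The layer shape and data are illustrative.

```python
import numpy as np

def dropout_forward(x, W, p_keep=0.5, train=True, rng=None):
    """One dense logistic layer with dropout on its inputs.
    Training: multiply inputs by independent Bernoulli(p_keep) gating variables.
    Inference: scale the inputs by p_keep (the ensemble-average approximation)."""
    rng = np.random.default_rng() if rng is None else rng
    if train:
        mask = rng.random(x.shape) < p_keep      # Bernoulli gating variables
        x = x * mask
    else:
        x = x * p_keep                           # deterministic ensemble approximation
    z = x @ W
    return 1.0 / (1.0 + np.exp(-z))              # logistic units, as in the analysis

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 3))
print(dropout_forward(x, W, train=True, rng=rng))
print(dropout_forward(x, W, train=False))
```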

  18. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.

  19. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, and tooling and fixture (or, more generally, resource) requirements.

  20. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  1. Cluster algorithms and computational complexity

    NASA Astrophysics Data System (ADS)

    Li, Xuenan

    Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing network model have been studied by using the Computational Complexity Theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of correlation length. The dynamic exponent of the cluster algorithm is found to be zero and therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown to be not complex and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.

  2. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).

  3. Linearization algorithms for line transfer

    SciTech Connect

    Scott, H.A.

    1990-11-06

    Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.

  4. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)

  5. An onboard star identification algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Kong; Femiano, Michael

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  6. Scheduling Jobs with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ferrolho, António; Crisóstomo, Manuel

    Most scheduling problems are NP-hard: the time required to solve them optimally increases exponentially with the size of the problem. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been used successfully to solve scheduling problems, as shown by the growing number of papers. GAs are known as some of the most efficient algorithms for solving scheduling problems. However, when a GA is applied to scheduling problems, various crossover and mutation operators can be applied. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.
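
    For context, the sketch below shows two standard operators for permutation-encoded job sequences: order crossover (OX) and swap mutation. These are generic textbook operators shown for illustration; they are not the HybFlexGA operators examined in the paper.

```python
import random

def order_crossover(parent1, parent2, rng=random):
    """Order crossover (OX): copy a slice from parent1, then fill the remaining
    positions with the missing jobs in the order they appear in parent2."""
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]
    fill = [job for job in parent2 if job not in child]
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child

def swap_mutation(perm, rng=random):
    """Swap two randomly chosen positions (keeps the encoding a valid permutation)."""
    a, b = rng.sample(range(len(perm)), 2)
    perm = list(perm)
    perm[a], perm[b] = perm[b], perm[a]
    return perm

p1 = [0, 1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 0, 6, 1, 5, 2, 4]
random.seed(1)
child = order_crossover(p1, p2)
print(child, swap_mutation(child))
```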

  7. Recursive Algorithm For Linear Regression

    NASA Technical Reports Server (NTRS)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits a set of data satisfactorily.

  8. Algorithmic complexity of a protein

    NASA Astrophysics Data System (ADS)

    Dewey, T. Gregory

    1996-07-01

    The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.

  9. An onboard star identification algorithm

    NASA Technical Reports Server (NTRS)

    Ha, Kong; Femiano, Michael

    1993-01-01

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  10. Cascade Error Projection: A New Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  11. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  12. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  13. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  14. Fully relativistic lattice Boltzmann algorithm

    SciTech Connect

    Romatschke, P.; Mendoza, M.; Succi, S.

    2011-09-15

    Starting from the Maxwell-Juettner equilibrium distribution, we develop a relativistic lattice Boltzmann (LB) algorithm capable of handling ultrarelativistic systems with flat, but expanding, spacetimes. The algorithm is validated through simulations of a quark-gluon plasma, yielding excellent agreement with hydrodynamic simulations. The present scheme opens the possibility of transferring the recognized computational advantages of lattice kinetic theory to the context of both weakly and ultrarelativistic systems.

  15. High-speed CORDIC algorithm

    NASA Astrophysics Data System (ADS)

    El-Guibaly, Fayez; Sabaa, A.

    1996-10-01

    In this paper, we introduce modifications on the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
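
    For reference, the sketch below is the classical, unmodified circular CORDIC in rotation mode: rotate a vector through the target angle using only shifts and adds, then undo the accumulated gain. It is the baseline that the paper's modifications improve upon; the fixed iteration count here is an illustrative choice.

```python
import math

def cordic_rotate(x, y, angle, n_iter=24):
    """Classical circular CORDIC in rotation mode: rotate (x, y) by `angle` (radians)."""
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # gain compensation factor
    z = angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0                   # drive the residual angle to zero
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return x * K, y * K

print(cordic_rotate(1.0, 0.0, math.pi / 3))   # ~ (cos 60 deg, sin 60 deg)
```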

  16. Localization algorithm for acoustic emission

    NASA Astrophysics Data System (ADS)

    Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.

    2010-01-01

    In this paper, an iterative algorithm for localization of acoustic emission (AE) sources is presented. The main advantage of the system is that it is independent of the researcher's 'ability' to determine the signal level used to trigger the signal. The system was tested in cylindrical samples with an AE source located at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classic localization algorithms (~1 cm).

  17. CORDIC Algorithms: Theory And Extensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc

    1989-11-01

    Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.

  18. Multithreaded Algorithms for Graph Coloring

    SciTech Connect

    Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex

    2012-10-21

    Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access pattern, and high data access to computation ratio are among the chief reasons for the challenge. The performance implication of these features is exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. We present in particular two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
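
    The serial baseline both parallel variants build on is first-fit greedy coloring, sketched below: visit vertices in some order and give each the smallest color not used by an already-colored neighbor. The adjacency-list representation and vertex order are illustrative; the speculative and dataflow parallelizations are not shown.

```python
from collections import defaultdict

def greedy_coloring(adjacency, order=None):
    """First-fit greedy coloring over an adjacency-list graph (vertex -> neighbors)."""
    order = list(adjacency) if order is None else order
    color = {}
    for v in order:
        taken = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in taken:        # smallest color not used by a colored neighbor
            c += 1
        color[v] = c
    return color

g = defaultdict(list)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
for u, v in edges:
    g[u].append(v)
    g[v].append(u)
print(greedy_coloring(g))   # {0: 0, 1: 1, 2: 2, 3: 1} with this insertion order
```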

  19. The value of care algorithms.

    PubMed

    Myers, Timothy

    2006-09-01

    The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065

  20. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
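
    The sketch below is the conventional single-chain simulated annealing procedure summarized in the abstract, not the recursive-branching parallel variant: propose a random neighbor, always accept improvements, accept worse moves with probability exp(-delta/T), and lower T each iteration. The step size, cooling schedule, and test objective are illustrative choices.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000,
                        rng=random):
    """Conventional simulated annealing over a continuous parameter vector."""
    x, fx, t = list(x0), objective(x0), t0
    best_x, best_f = list(x), fx
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]   # random neighbor
        fc = objective(cand)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc                                  # accept move
            if fc < best_f:
                best_x, best_f = list(cand), fc
        t *= cooling                                          # annealing schedule
    return best_x, best_f

random.seed(0)
rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(simulated_annealing(rosenbrock, [-1.0, 1.0]))
```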

  1. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  2. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
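
    As background for the decoding problem being mapped onto systolic arrays, the sketch below is a minimal software hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code. The (7, 5) octal generator choice is an assumption for illustration; the systolic-array mapping, strongly connected trellis decoding, and the stack algorithm are not shown.

```python
def conv_encode(bits, g=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators 7, 5 octal)."""
    s = [0, 0]
    out = []
    for b in bits:
        reg = [b] + s
        for taps in g:
            out.append(sum(t & r for t, r in zip(taps, reg)) % 2)
        s = [b, s[0]]
    return out

def viterbi_decode(rx, g=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding: keep, for every trellis state, the surviving
    path with the smallest accumulated Hamming distance, then pick the best path."""
    n_states = 4
    metric = {s: (0 if s == 0 else float("inf")) for s in range(n_states)}
    paths = {s: [] for s in range(n_states)}
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_metric = {s: float("inf") for s in range(n_states)}
        new_paths = {}
        for s in range(n_states):
            if metric[s] == float("inf"):
                continue
            s1, s2 = (s >> 1) & 1, s & 1
            for b in (0, 1):
                reg = [b, s1, s2]
                exp = [sum(t & x for t, x in zip(taps, reg)) % 2 for taps in g]
                dist = metric[s] + sum(e != v for e, v in zip(exp, r))
                nxt = (b << 1) | s1
                if dist < new_metric[nxt]:
                    new_metric[nxt] = dist
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode(msg)
coded[3] ^= 1                      # inject a single channel bit error
print(viterbi_decode(coded) == msg)
```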

  3. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contain a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. A group of mathematically rigorous algorithms for approximate reasoning are focused on that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away

  4. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into the star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it able to reduce the star identification time. The logarithmic values of the plane distances between the navigation and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to make it able to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition rate and robustness by the proposed algorithm are better than those by the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant. No CXY1350(4)).

  5. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event detection algorithms that can scale with the size of the data, (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, and (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop the parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
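
    The sketch below is a simplified reading of steps (a) and (b): slide a window over a multivariate sequence, record the change in the leading singular value between successive windows as a univariate series, and screen that series with a robust z-score. The window length, change measure, synthetic data, and threshold are illustrative assumptions; GAEDA's incremental-SVD, perturbation-theory, and tensor machinery are not reproduced.

```python
import numpy as np

def window_change_series(data, win=20):
    """Convert a multivariate sequence (time x sensors) into a univariate series:
    the change in the leading singular value between successive sliding windows."""
    sv = []
    for start in range(0, data.shape[0] - win + 1):
        block = data[start:start + win]
        sv.append(np.linalg.svd(block - block.mean(axis=0), compute_uv=False)[0])
    return np.abs(np.diff(np.asarray(sv)))

def flag_anomalies(series, z=4.0):
    """Simple univariate screen: flag points far from the median in robust z-score."""
    med = np.median(series)
    mad = np.median(np.abs(series - med)) + 1e-12
    return np.where(np.abs(series - med) / (1.4826 * mad) > z)[0]

rng = np.random.default_rng(0)
data = rng.normal(size=(300, 6)) * 0.1
data[200:210, :3] += 3.0                  # injected spatially correlated disturbance
print(flag_anomalies(window_change_series(data)))
```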

  6. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
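
    To make the "linear straight line" style of estimator concrete, the sketch below fits input power as a linear function of the digital AGC reading and temperature by ordinary least squares. The synthetic characterization data, coefficient values, and variable names are illustrative stand-ins, not the actual SCAN Testbed characterization; the adaptive-filter and neural-network estimators are not shown.

```python
import numpy as np

def fit_linear_estimator(agc, temp, power_dbm):
    """Least-squares fit: input power ~ a*AGC + b*temperature + c."""
    A = np.column_stack([agc, temp, np.ones_like(agc)])
    coef, *_ = np.linalg.lstsq(A, power_dbm, rcond=None)
    return coef                     # (a, b, c)

def estimate_power(coef, agc, temp):
    a, b, c = coef
    return a * agc + b * temp + c

# Synthetic characterization data standing in for pre-launch measurements.
rng = np.random.default_rng(2)
agc = rng.uniform(0, 100, 200)
temp = rng.uniform(10, 40, 200)
power = -120 + 0.4 * agc - 0.05 * temp + 0.2 * rng.normal(size=200)
coef = fit_linear_estimator(agc, temp, power)
print(np.round(coef, 3), estimate_power(coef, 50.0, 25.0))
```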

  7. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
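
    The sketch below shows two of the combiners named in the abstract, majority voting and Boltzmann multiplication, applied to toy per-algorithm Q-tables. The underlying RL algorithms, the maze tasks, and the Q-values are not implemented here; the combiners operate on whatever action values each algorithm produces.

```python
import numpy as np

def majority_vote(q_tables, state):
    """Each algorithm votes for its greedy action; the most-voted action wins
    (ties broken by the lowest action index)."""
    votes = [int(np.argmax(q[state])) for q in q_tables]
    return int(np.argmax(np.bincount(votes)))

def boltzmann_multiplication(q_tables, state, tau=1.0):
    """Combine policies by multiplying per-algorithm Boltzmann action probabilities."""
    combined = np.ones_like(q_tables[0][state], dtype=float)
    for q in q_tables:
        p = np.exp(q[state] / tau)
        combined *= p / p.sum()
    return combined / combined.sum()

# Toy example: three 'algorithms' with Q-values for one state and 4 actions.
q1 = {0: np.array([0.1, 0.9, 0.2, 0.0])}
q2 = {0: np.array([0.3, 0.8, 0.1, 0.2])}
q3 = {0: np.array([0.7, 0.2, 0.1, 0.0])}
print(majority_vote([q1, q2, q3], 0))               # action 1 wins 2 of 3 votes
print(boltzmann_multiplication([q1, q2, q3], 0))
```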

  8. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. These contrast with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.

  9. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053
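
    The key computational point above, that a filter-bank SSD collapses to a diagonal weighting in the Fourier domain, can be sketched as follows. This illustrates only that single step (evaluating the weighted error via Parseval's theorem, assuming circular convolution and equal-sized inputs), not the full FLK alignment loop; all names are placeholders.

    ```python
    import numpy as np

    def filter_bank_ssd_fourier(template, image_patch, filter_bank):
        """Filter-bank SSD between two equal-sized images, evaluated in the Fourier domain.

        Rather than convolving the error with every filter g_k and summing squared
        responses, accumulate S(w) = sum_k |G_k(w)|^2 once and evaluate
            sum_k ||g_k * (T - I)||^2  =  (1/N) * sum_w S(w) |F{T - I}(w)|^2,
        which is the diagonal weighting exploited by the FLK formulation.
        """
        shape = template.shape
        S = np.zeros(shape)
        for g in filter_bank:                   # e.g. oriented edge or Gabor kernels
            S += np.abs(np.fft.fft2(g, s=shape)) ** 2
        E = np.fft.fft2(template - image_patch)
        return float(np.sum(S * np.abs(E) ** 2) / E.size)
    ```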

  10. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses a neural network to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  11. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  12. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regards to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be base lined to whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information theory based image metrics is presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.

  13. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  14. Algorithms, complexity, and the sciences.

    PubMed

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equlibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
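
    Since the abstract centers on the multiplicative weights update, a minimal, generic MWU sketch follows. The payoff matrix and learning rate eta are placeholders; in the allele-frequency reading described above, the weights play the role of frequencies and the payoffs the role of fitness contributions.

    ```python
    import numpy as np

    def multiplicative_weights(payoffs, eta=0.1):
        """Multiplicative weights update over a sequence of payoff vectors.

        payoffs: array of shape (T, n); payoffs[t, i] is the gain of option i
        (an expert, an action, or an allele) at step t, assumed bounded so that
        1 + eta * gain stays positive.  Weights are boosted multiplicatively and
        renormalized, so the trajectory can be read as evolving frequencies.
        """
        weights = np.ones(payoffs.shape[1])
        trajectory = []
        for gains in payoffs:
            weights = weights * (1.0 + eta * gains)   # multiplicative reward
            weights = weights / weights.sum()         # renormalize to a distribution
            trajectory.append(weights.copy())
        return np.array(trajectory)
    ```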

  15. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not require complete projection data. This makes them applicable to the inspection of BGA solder joints, although their convergence speed is low, and the x-ray laminography with which they are usually combined produces poorer reconstructed images than full tomography. This paper explores a projection-classification-based method that separates the object into three parts, i.e. solute, solution, and air, and assumes that the reconstructed value decreases linearly from the solution toward the other two parts on either side. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater the advantage.
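
    For context, a bare-bones SART update (one of the two algorithms the paper modifies) looks like the sketch below. The projection matrix A, relaxation factor, and non-negativity clip are generic assumptions, and the paper's projection-classification weighting is not included.

    ```python
    import numpy as np

    def sart(A, b, n_iter=50, relax=0.5):
        """Basic SART iterations for A x ≈ b (A: projection matrix, b: measured rays).

        The residual of each ray is normalized by the row sum and the backprojection
        by the column sum, which is what distinguishes SART from plain ART sweeps.
        """
        A = np.asarray(A, dtype=float)
        x = np.zeros(A.shape[1])
        row_sum = np.where(A.sum(axis=1) == 0, 1.0, A.sum(axis=1))
        col_sum = np.where(A.sum(axis=0) == 0, 1.0, A.sum(axis=0))
        for _ in range(n_iter):
            residual = (b - A @ x) / row_sum            # ray-wise normalized residual
            x = x + relax * (A.T @ residual) / col_sum  # normalized backprojection
            x = np.clip(x, 0.0, None)                   # keep attenuation non-negative
        return x
    ```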

  16. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in the software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open source python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694
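
    As a generic illustration of the search strategy (not the PyChemia implementation, which operates on atomic structures rather than a simple box-constrained objective), a standard firefly minimizer might look like the following; all parameter values are illustrative.

    ```python
    import numpy as np

    def firefly_minimize(f, bounds, n_fireflies=20, n_iter=100,
                         alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        """Minimize f over a box using the standard firefly update rule.

        bounds: sequence of (low, high) pairs, one per dimension.  Brighter
        (lower-f) fireflies attract dimmer ones with an attractiveness that
        decays as exp(-gamma * r^2); alpha scales a random walk term.
        """
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        dim = lo.size
        x = rng.uniform(lo, hi, size=(n_fireflies, dim))
        intensity = np.array([f(xi) for xi in x])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if intensity[j] < intensity[i]:            # j is brighter
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        step = alpha * (rng.random(dim) - 0.5) * (hi - lo)
                        x[i] = np.clip(x[i] + beta * (x[j] - x[i]) + step, lo, hi)
                        intensity[i] = f(x[i])
            alpha *= 0.97                                      # slowly damp randomness
        best = np.argmin(intensity)
        return x[best], intensity[best]
    ```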

  17. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  18. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  19. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  20. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position permutation image encryption algorithms, which are commonly used for image encryption and are investigated in this work, are unfortunately weak under known-plaintext attack. In view of this weakness, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of the pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, by using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; and then, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown. PMID:15495308
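
    A toy sketch of why pure position permutations are weak: encryption only moves pixels, so a single known plaintext/ciphertext pair essentially reveals the permutation (up to ties among equal pixel values). The ergodic/fuzzy-matrix machinery of the paper is omitted; the function names and key handling below are illustrative assumptions.

    ```python
    import numpy as np

    def permute_encrypt(img, key_seed):
        """Pure position permutation: rearrange pixel positions, leave values alone."""
        rng = np.random.default_rng(key_seed)
        perm = rng.permutation(img.size)                # cipher position -> plain position
        return img.ravel()[perm].reshape(img.shape), perm

    def recover_permutation(plain, cipher):
        """Known-plaintext attack: match pixel values between plain and cipher images.

        With (nearly) distinct pixel values, sorting both images gives the position
        mapping directly; repeated values only leave ambiguity within groups of
        equal pixels, which is the weakness exploited by such attacks.
        """
        order_p = np.argsort(plain.ravel(), kind="stable")
        order_c = np.argsort(cipher.ravel(), kind="stable")
        perm = np.empty(plain.size, dtype=int)
        perm[order_c] = order_p                         # cipher position -> plain position
        return perm

    def permute_decrypt(cipher, perm_c_to_p):
        """Undo the permutation once the cipher->plain position map is known."""
        out = np.empty(cipher.size, dtype=cipher.dtype)
        out[perm_c_to_p] = cipher.ravel()
        return out.reshape(cipher.shape)
    ```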

  1. Old And New Algorithms For Toeplitz Systems

    NASA Astrophysics Data System (ADS)

    Brent, Richard P.

    1988-02-01

    Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n(log n)^2) algorithms and compare them with O(n^2) algorithms.
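
    As one concrete example of the "well known" O(n^2) family discussed here, a symmetric Levinson solver is sketched below, assuming a positive-definite symmetric Toeplitz matrix. Note that every step needs inner products, the property the survey contrasts with inner-product-free, more parallel-friendly algorithms.

    ```python
    import numpy as np

    def levinson_solve(r, b):
        """Solve T x = b, T symmetric Toeplitz with first column r, in O(n^2) time.

        The recursion keeps the solution of the leading k x k subsystem together
        with the Yule-Walker (predictor) vector and extends both by one row per
        step; each extension costs one or two inner products of length k.
        """
        r = np.asarray(r, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        rho = r[1:] / r[0]                      # normalized off-diagonal entries
        bb = b / r[0]
        x = np.array([bb[0]])
        if n == 1:
            return x
        y = np.array([-rho[0]])                 # order-1 Yule-Walker solution
        beta, alpha = 1.0, -rho[0]
        for k in range(1, n):
            beta *= (1.0 - alpha * alpha)
            mu = (bb[k] - rho[:k] @ x[::-1]) / beta
            x = np.concatenate([x + mu * y[::-1], [mu]])
            if k < n - 1:
                alpha = -(rho[k] + rho[:k] @ y[::-1]) / beta
                y = np.concatenate([y + alpha * y[::-1], [alpha]])
        return x
    ```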

  2. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
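
    The sketch below is not the specific algorithm of the report, just a simulation of the two checks it describes: complementary checkerboard patterns exercise setting and clearing every bit, and an address-in-address pass catches writes that corrupt other words. The pattern set and word size are assumptions.

    ```python
    def memory_test(mem, word_bits=16):
        """Simple pattern-based RAM test on a list of integer words (a simulation).

        Returns a list of (address, expected, actual) mismatches; an empty list
        means the simulated memory passed every check.
        """
        mask = (1 << word_bits) - 1
        patterns = [0x5555 & mask, 0xAAAA & mask, 0x0000, mask]
        errors = []
        for pat in patterns:                    # one write pass + one read pass each
            for addr in range(len(mem)):
                mem[addr] = pat
            for addr in range(len(mem)):
                if mem[addr] != pat:
                    errors.append((addr, pat, mem[addr]))
        for addr in range(len(mem)):            # address-in-address pass
            mem[addr] = addr & mask
        for addr in range(len(mem)):
            if mem[addr] != (addr & mask):
                errors.append((addr, addr & mask, mem[addr]))
        return errors

    # ram = [0] * 1024
    # assert memory_test(ram) == []   # a healthy simulated memory passes
    ```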

  3. Squint mode SAR processing algorithms

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Jin, M.; Curlander, J. C.

    1989-01-01

    The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.

  4. Fast algorithms for transport models

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzman equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratories (LANL).

  5. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  6. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms have been designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.

  7. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
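
    To make the dominance reading concrete, here is the textbook greedy activity-selection algorithm with the dominance argument stated in a comment. This is a standard illustration, not the synthesis framework of the paper.

    ```python
    def select_activities(activities):
        """Greedy activity selection: repeatedly pick the earliest-finishing compatible activity.

        Dominance view: among the remaining activities, the one that finishes first
        dominates any alternative first choice, because a schedule starting with a
        later-finishing activity can be rewritten to start with this one without
        losing any subsequent options.  That dominance relation licenses the greedy step.
        """
        chosen, last_end = [], float("-inf")
        for start, end in sorted(activities, key=lambda a: a[1]):
            if start >= last_end:               # compatible with what is already chosen
                chosen.append((start, end))
                last_end = end
        return chosen

    # select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10)])
    # -> [(1, 4), (5, 7)]
    ```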

  8. Two Algorithms for Processing Electronic Nose Data

    NASA Technical Reports Server (NTRS)

    Young, Rebecca; Linnell, Bruce

    2007-01-01

    Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.

  9. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem is applied to various engineering fields, and many researchers study this problem. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys. Thus, the proposed algorithm is able to find the shortest route even if the map data contain blind alleys. Experiments using map data prove its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.
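
    A compact, illustrative version of the idea (not the authors' implementation): ants walk a weighted graph guided by pheromone, structural dead ends encountered along the way are added to a tabu set so later ants avoid them, and pheromone is evaporated and reinforced on completed routes. The graph format and parameters are assumptions.

    ```python
    import random

    def aco_route(graph, start, goal, n_ants=50, n_iters=30,
                  evaporation=0.5, deposit=1.0, seed=0):
        """Ant Colony Optimization routing with a tabu set for blind alleys.

        graph: dict node -> {neighbour: edge_length}.  Ants pick the next node with
        probability proportional to pheromone / length; a node whose only neighbour
        is the one the ant came from is treated as a blind alley and becomes tabu.
        """
        random.seed(seed)
        pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
        tabu, best_path, best_len = set(), None, float("inf")
        for _ in range(n_iters):
            completed = []
            for _ in range(n_ants):
                node, path, visited = start, [start], {start}
                while node != goal:
                    choices = [v for v in graph[node]
                               if v not in visited and v not in tabu]
                    if not choices:                      # stuck: give up on this ant
                        if len(graph[node]) == 1 and node not in (start, goal):
                            tabu.add(node)               # structural dead end -> tabu
                        path = None
                        break
                    weights = [pheromone[(node, v)] / graph[node][v] for v in choices]
                    node = random.choices(choices, weights=weights)[0]
                    path.append(node)
                    visited.add(node)
                if path is not None:
                    completed.append(path)
            for key in pheromone:                        # evaporation
                pheromone[key] *= (1.0 - evaporation)
            for path in completed:                       # reinforcement on finished routes
                length = sum(graph[u][v] for u, v in zip(path, path[1:]))
                if length < best_len:
                    best_path, best_len = path, length
                for u, v in zip(path, path[1:]):
                    pheromone[(u, v)] += deposit / length
        return best_path, best_len
    ```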

  10. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  11. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.

  12. Algorithms, complexity, and the sciences

    PubMed Central

    Papadimitriou, Christos

    2014-01-01

    Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equlibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382

  13. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchange over insecure networks needs to provide authentication and confidentiality to the database, a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
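
    For reference, the core of plain ID3 (which the paper extends with threshold secret sharing across parties) is the information-gain split criterion; a minimal single-party sketch follows, with rows represented as assumed attribute dictionaries.

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def information_gain(rows, labels, attribute):
        """ID3 split criterion: entropy reduction from splitting on `attribute`.

        rows: list of dicts mapping attribute name -> value.  ID3 grows the tree
        by repeatedly splitting on the attribute with the highest gain.
        """
        base = entropy(labels)
        by_value = {}
        for row, label in zip(rows, labels):
            by_value.setdefault(row[attribute], []).append(label)
        remainder = sum(len(subset) / len(labels) * entropy(subset)
                        for subset in by_value.values())
        return base - remainder
    ```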

  14. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture. A CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and the complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC appears to be a good compromise.

  15. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for protecting intellectual property rights, digital watermarking techniques have been widely studied and used. But due to the problems of data amount and color shift, watermarking techniques for color images have not been as widely studied, although color images are the principal part of multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images, the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-level wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted so as to form the actual embedded data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.

  16. Simultaneous stabilization using genetic algorithms

    SciTech Connect

    Benson, R.W.; Schmitendorf, W.E. . Dept. of Mechanical Engineering)

    1991-01-01

    This paper considers the problem of simultaneously stabilizing a set of plants using full state feedback. The problem is converted to a simple optimization problem which is solved by a genetic algorithm. Several examples demonstrate the utility of this method. 14 refs., 8 figs.

  17. Detection Algorithms: FFT vs. KLT

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    Given the vast distances between the stars, we can anticipate that any received SETI signal will be exceedingly weak. How can we hope to extract (or even recognize) such signals buried well beneath the natural background noise with which they must compete? This chapter analyzes, compares, and contrasts the two dominant signal detection algorithms used by SETI scientists to recognize extremely weak candidate signals.

  18. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  19. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  20. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
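
    The "standard form" referred to above is the discrete Smoluchowski equation on integer multiples of the monomer size. A minimal explicit-Euler step of that form is sketched below (the kernel, time step, and grid length are placeholders); the geometric-binning algorithms of the paper replace this integer grid when the size range spans many orders of magnitude.

    ```python
    import numpy as np

    def smoluchowski_step(n, K, dt):
        """One explicit Euler step of the discrete Smoluchowski coagulation equation.

        n[k] is the concentration of particles made of (k+1) monomers and K[i, j]
        is the coagulation kernel:
            dn_k/dt = (1/2) * sum_{i+j=k} K_ij n_i n_j  -  n_k * sum_j K_kj n_j
        """
        m = len(n)
        gain = np.zeros(m)
        for i in range(m):
            for j in range(m):
                k = i + j + 1            # (i+1) + (j+1) monomers -> index i+j+1
                if k < m:
                    gain[k] += 0.5 * K[i, j] * n[i] * n[j]
        loss = n * (K @ n)
        return n + dt * (gain - loss)

    # Constant-kernel example, starting from monomers only:
    # K = np.ones((50, 50)); n = np.zeros(50); n[0] = 1.0
    # for _ in range(100): n = smoluchowski_step(n, K, dt=0.01)
    ```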

  1. Nuclear models and exact algorithms

    NASA Astrophysics Data System (ADS)

    Bes, D. R.; Dobaczewski, J.; Draayer, J. P.; Szymański, Z.

    1992-07-01

    Discussion Group E on Nuclear Models and Exact Algorithms received contributions from the following individuals: L. Egido, S. Frauendorf, F. Iachello, P. Ring, H. Sagawa, W. Satula, N. C. Schmeing, M. Vincent, A. J. Zucker. The report that follows is an attempt by the leaders of the discussion to summarize the presentations and to give an impression of the subject matter.

  2. SMAP's Radar OBP Algorithm Development

    NASA Technical Reports Server (NTRS)

    Le, Charles; Spencer, Michael W.; Veilleux, Louise; Chan, Samuel; He, Yutao; Zheng, Jason; Nguyen, Kayla

    2009-01-01

    An approach for algorithm specifications and development is described for SMAP's radar onboard processor with multi-stage demodulation and decimation bandpass digital filter. Point target simulation is used to verify and validate the filter design with the usual radar performance parameters. Preliminary FPGA implementation is also discussed.

  3. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  4. Quartic Rotation Criteria and Algorithms.

    ERIC Educational Resources Information Center

    Clarkson, Douglas B.; Jennrich, Robert I.

    1988-01-01

    Most of the current analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria. A unified development of algorithms for orthogonal and direct oblique rotation using arbitrary criteria from this family is presented. (Author/TJH)

  5. Key Concepts in Informatics: Algorithm

    ERIC Educational Resources Information Center

    Szlávi, Péter; Zsakó, László

    2014-01-01

    "The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…

  6. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing are performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by a ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real time delay of less than one second between illumination and display.

  7. SU-F-BRD-15: The Impact of Dose Calculation Algorithm and Hounsfield Units Conversion Tables On Plan Dosimetry for Lung SBRT

    SciTech Connect

    Kuo, L; Yorke, E; Lim, S; Mechalakos, J; Rimner, A

    2014-06-15

    Purpose: To assess dosimetric differences in IMRT lung stereotactic body radiotherapy (SBRT) plans calculated with Varian AAA and Acuros (AXB) and with vendor-supplied (V) versus in-house (IH) measured Hounsfield units (HU) to mass and HU to electron density conversion tables. Methods: In-house conversion tables were measured using Gammex 472 density-plug phantom. IMRT plans (6 MV, Varian TrueBeam, 6–9 coplanar fields) meeting departmental coverage and normal tissue constraints were retrospectively generated for 10 lung SBRT cases using Eclipse Vn 10.0.28 AAA with in-house tables (AAA/IH). Using these monitor units and MLC sequences, plans were recalculated with AAA and vendor tables (AAA/V) and with AXB with both tables (AXB/IH and AXB/V). Ratios to corresponding AAA/IH values were calculated for PTV D95, D01, D99, mean-dose, total and ipsilateral lung V20 and chestwall V30. Statistical significance of differences was judged by Wilcoxon Signed Rank Test (p<0.05). Results: For HU<−400 the vendor HU-mass density table was notably below the IH table. PTV D95 ratios to AAA/IH, averaged over all patients, are 0.963±0.073 (p=0.508), 0.914±0.126 (p=0.011), and 0.998±0.001 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. Total lung V20 ratios are 1.006±0.046 (p=0.386), 0.975±0.080 (p=0.514) and 0.998±0.002 (p=0.007); ipsilateral lung V20 ratios are 1.008±0.041(p=0.284), 0.977±0.076 (p=0.443), and 0.998±0.018 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. In 7 cases, ratios to AAA/IH were within ± 5% for all indices studied. For 3 cases characterized by very low lung density and small PTV (19.99±8.09 c.c.), PTV D95 ratio for AXB/V ranged from 67.4% to 85.9%, AXB/IH D95 ratio ranged from 81.6% to 93.4%; there were large differences in other studied indices. Conclusion: For AXB users, careful attention to HU conversion tables is important, as they can significantly impact AXB (but not AAA) lung SBRT plans. Algorithm selection is also important for

  8. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm only involves vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU-implemented Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. The results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
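
    As a CPU reference for the iteration being ported to the GPU, a linearized Bregman sketch for sparse recovery is given below. This is not the authors' CUDA code; the parameters mu, delta, and the iteration count are illustrative and need tuning (in particular, delta must be small enough relative to the spectral norm of A for the iteration to converge).

    ```python
    import numpy as np

    def shrink(v, mu):
        """Soft-thresholding operator."""
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def linearized_bregman(A, b, mu=5.0, delta=1.0, n_iter=2000):
        """Linearized Bregman iteration for sparse recovery from b = A x.

        Each iteration needs only two matrix-vector products and a thresholding,
        the simple structure that makes the method attractive for GPU parallelism
        (here it runs on the CPU with NumPy).
        """
        u = np.zeros(A.shape[1])
        v = np.zeros(A.shape[1])
        for _ in range(n_iter):
            v += A.T @ (b - A @ u)       # gradient-like accumulation step
            u = delta * shrink(v, mu)    # sparsifying soft threshold
        return u
    ```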

  9. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation which drives the Birkhoffian equations reduce to the Hamiltonian equations. When there exists such a transformation, applying the corresponding inverse map to symplectic discretization of the Hamiltonian equations, then resulting difference schemes are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation process of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  10. Why is Boris Algorithm So Good?

    SciTech Connect

    et al, Hong Qin

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
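
    For reference, the Boris update analyzed here is the standard half-kick, rotation, half-kick sequence. A minimal sketch (nonrelativistic, with E and B assumed constant over the step) is:

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q, m, dt):
        """One step of the Boris algorithm for a charged particle in E and B fields.

        Half electric kick, magnetic rotation, half electric kick; the rotation is
        what preserves phase-space volume and keeps the long-term energy error bounded.
        """
        qmdt2 = q * dt / (2.0 * m)
        v_minus = v + qmdt2 * E                  # first half acceleration
        t = qmdt2 * B                            # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
        v_new = v_plus + qmdt2 * E               # second half acceleration
        x_new = x + dt * v_new                   # leapfrog position update
        return x_new, v_new
    ```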

  11. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 ; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  12. Systolic algorithms and their implementation

    SciTech Connect

    Kung, H.T.

    1984-01-01

    Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of Cholesky decomposition of covariance matrices-a crucial computation in adaptive signal processing. Following this are discussions on issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation. 33 references.

  13. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
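
    A minimal sketch of the Maximum Temporal Pixel idea mentioned above, as commonly described for meteor video work: collapse a block of frames to the per-pixel maximum plus the frame index at which it occurred. The array shapes and the uint8 index type are assumptions of this example.

    ```python
    import numpy as np

    def mtp_compress(frames):
        """Maximum Temporal Pixel compression of a block of video frames.

        frames: array of shape (n_frames, H, W).  Keeps, for every pixel, the
        maximum value over the block and the frame index at which it occurred,
        so a moving meteor streak survives as a bright track with its timing
        encoded, while only two images per block are retained.
        """
        frames = np.asarray(frames)
        max_pixel = frames.max(axis=0)
        max_frame = frames.argmax(axis=0).astype(np.uint8)  # uint8 assumes <= 256 frames/block
        return max_pixel, max_frame
    ```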

  14. An NOy* Algorithm for SOLVE

    NASA Technical Reports Server (NTRS)

    Loewenstein, M.; Greenblatt. B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; Condon, Estelle (Technical Monitor)

    2000-01-01

    De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analyses of these events require a knowledge of the initial or pre-vortex state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we will attempt to establish the current unperturbed NOy:N2O relationship (the NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from the observations in the ASHOE/MAESA campaign.

  15. A spectral canonical electrostatic algorithm

    NASA Astrophysics Data System (ADS)

    Webb, Stephen D.

    2016-03-01

    Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton’s principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results of this using a two-body problem as an example of the algorithm’s energy- and momentum-conserving properties.

  16. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591

  17. Constrained multiobjective biogeography optimization algorithm.

    PubMed

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591

  18. Innovations in Lattice QCD Algorithms

    SciTech Connect

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.

  19. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  20. Optimisation algorithms for microarray biclustering.

    PubMed

    Perrin, Dimitri; Duhamel, Christophe

    2013-01-01

    In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been largely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight on biological processes. A common approach is to consider genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted e.g. based on the expression levels. In this paper, using a previously-evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified. PMID:24109756

  1. A possible hypercomputational quantum algorithm

    NASA Astrophysics Data System (ADS)

    Sicard, Andres; Velez, Mario; Ospina, Juan

    2005-05-01

    The term 'hypermachine' denotes any data processing device (theoretical or physically implementable) capable of carrying out tasks that cannot be performed by a Turing machine. We present a possible quantum algorithm for a classically non-computable decision problem, Hilbert's tenth problem; more specifically, we present a possible hypercomputation model based on quantum computation. Our algorithm is inspired by the one proposed by Tien D. Kieu, but we have selected the infinite square well instead of the (one-dimensional) simple harmonic oscillator as the underlying physical system. Our model exploits the quantum adiabatic process and the characteristics of the representation of the dynamical Lie algebra su(1,1) associated to the infinite square well.

  2. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
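
    As background, the basic MUSIC pseudospectrum computation (written here for a generic uniform linear array, not the paper's two-stage rebar configuration) looks like the following; the element spacing and angle grid are assumptions of this example.

    ```python
    import numpy as np

    def music_spectrum(snapshots, n_sources, scan_angles_deg, d_over_lambda=0.5):
        """MUSIC pseudospectrum for a uniform linear array.

        snapshots: complex array of shape (n_elements, n_snapshots).  The noise
        subspace is spanned by the covariance eigenvectors beyond the n_sources
        largest; steering vectors nearly orthogonal to it produce sharp peaks.
        """
        n_elem = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
        eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
        En = eigvecs[:, : n_elem - n_sources]                     # noise subspace
        k = np.arange(n_elem)
        spectrum = []
        for theta in np.deg2rad(scan_angles_deg):
            a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
            spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spectrum)
    ```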

  3. Systolic systems: algorithms and complexity

    SciTech Connect

    Chang, J.H.

    1986-01-01

    This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.

  4. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms constitute a complete statistical procedure for quantifying cell abnormalities from digitized images. The procedure could be the basis for automated detection and diagnosis of cancer. The objective of the procedure is to assign each cell an atypia status index (ASI), which quantifies its level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  5. Algorithms of NCG geometrical module

    NASA Astrophysics Data System (ADS)

    Gurevich, M. I.; Pryanichnikov, A. V.

    2012-12-01

    The methods and algorithms of the versatile NCG geometrical module used in the MCU code system are described. The NCG geometrical module is based on the Monte Carlo method and intended for solving equations of particle transport. The versatile combinatorial body method, the grid method, and methods of equalized cross sections and grain structures are used for description of the system geometry and calculation of trajectories.

  6. Algorithms of NCG geometrical module

    SciTech Connect

    Gurevich, M. I.; Pryanichnikov, A. V.

    2012-12-15

    The methods and algorithms of the versatile NCG geometrical module used in the MCU code system are described. The NCG geometrical module is based on the Monte Carlo method and intended for solving equations of particle transport. The versatile combinatorial body method, the grid method, and methods of equalized cross sections and grain structures are used for description of the system geometry and calculation of trajectories.

  7. Computed laminography and reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Que, Jie-Min; Cao, Da-Quan; Zhao, Wei; Tang, Xiao; Sun, Cui-Li; Wang, Yan-Fang; Wei, Cun-Feng; Shi, Rong-Jian; Wei, Long; Yu, Zhong-Qiang; Yan, Yong-Lian

    2012-08-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL, and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system.
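
    For readers unfamiliar with ART, the minimal Python sketch below shows the basic Kaczmarz-style row update for a linear system A x = b; it is a generic textbook version, not the CL-specific implementation or the weighting-function variants compared in the paper.

    ```python
    import numpy as np

    def art_reconstruct(A, b, n_iters=50, relax=0.5, x0=None):
        """Algebraic reconstruction technique (Kaczmarz iteration) for A x = b.

        A     : (n_rays, n_pixels) system matrix of ray/pixel intersection weights
        b     : (n_rays,) measured projection data
        relax : relaxation (weighting) factor, usually chosen in (0, 2)
        """
        x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
        row_norms = np.einsum('ij,ij->i', A, A)        # squared norm of each row
        for _ in range(n_iters):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                residual = b[i] - A[i] @ x
                # Project the current estimate onto the hyperplane of ray i.
                x += relax * residual / row_norms[i] * A[i]
        return x
    ```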

  8. Efficient algorithms for proximity problems

    SciTech Connect

    Wee, Y.C.

    1989-01-01

    Computational geometry is currently a very active area of research in computer science because of its applications to VLSI design, database retrieval, robotics, pattern recognition, etc. The author studies a number of proximity problems which are fundamental in computational geometry. Optimal or improved sequential and parallel algorithms for these problems are presented. Along the way, some relations among the proximity problems are also established. Chapter 2 presents an O(N log{sup 2} N) time divide-and-conquer algorithm for solving the all pairs geographic nearest neighbors problem (GNN) for a set of N sites in the plane under any L{sub p} metric. Chapter 3 presents an O(N log N) divide-and-conquer algorithm for computing the angle restricted Voronoi diagram for a set of N sites in the plane. Chapter 4 introduces a new data structure for the dynamic version of GNN. Chapter 5 defines a new formalism called the quasi-valid range aggregation. This formalism leads to a new and simple method for reducing non-range query-like problems to range queries and often to orthogonal range queries, with immediate applications to the attracted neighbor and the planar all-pairs nearest neighbors problem. Chapter 6 introduces a new approach for the construction of the Voronoi diagram. Using this approach, we design an O(log N) time, O(N) processor algorithm for constructing the Voronoi diagram with L{sub 1} and L{sub ∞} metrics on a CREW PRAM machine. Even though the GNN and the Delaunay triangulation (DT) do not have an inclusion relation, we show, using some range type queries, how to efficiently construct DT from the GNN relations over a constant number of angular ranges.

  9. Algorithm Helps Monitor Engine Operation

    NASA Technical Reports Server (NTRS)

    Eckerling, Sherry J.; Panossian, Hagop V.; Kemp, Victoria R.; Taniguchi, Mike H.; Nelson, Richard L.

    1995-01-01

    Real-Time Failure Control (RTFC) algorithm part of automated monitoring-and-shutdown system being developed to ensure safety and prevent major damage to equipment during ground tests of main engine of space shuttle. Includes redundant sensors, controller voting logic circuits, automatic safe-limit logic circuits, and conditional-decision logic circuits, all monitored by human technicians. Basic principles of system also applicable to stationary powerplants and other complex machinery systems.

  10. Algorithmic Strategies in Combinatorial Chemistry

    SciTech Connect

    GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN

    2000-08-01

    Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

  11. Algorithm validation using multicolor phantoms.

    PubMed

    Samarov, Daniel V; Clarke, Matthew L; Lee, Ji Youn; Allen, David W; Litorja, Maritoni; Hwang, Jeeseong

    2012-06-01

    We present a framework for hyperspectral image (HSI) analysis validation, specifically abundance fraction estimation based on HSI measurements of water soluble dye mixtures printed on microarray chips. In our work we focus on the performance of two algorithms, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Spatial LASSO (SPLASSO). The LASSO is a well known statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundance fractions in a HSI scene, the "sparse" representations provided by the LASSO are appropriate as not every pixel will be expected to contain every endmember. The SPLASSO is a novel approach we introduce here for HSI analysis which takes the framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. In our work here we introduce the dye mixture platform as a new benchmark data set for hyperspectral biomedical image processing and show our algorithm's improvement over the standard LASSO. PMID:22741077
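
    A minimal sketch of the per-pixel estimation step, using the standard LASSO (not the spatial SPLASSO extension introduced in the paper) as implemented in scikit-learn; the function name, the post-hoc sum-to-one normalization and the endmember-library input are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def abundance_lasso(pixel_spectrum, endmembers, alpha=1e-3):
        """Estimate sparse, non-negative abundance fractions for one HSI pixel.

        pixel_spectrum : (n_bands,) measured reflectance for a single pixel
        endmembers     : (n_bands, n_endmembers) library of candidate endmember spectra
        """
        model = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10000)
        model.fit(endmembers, pixel_spectrum)
        coeffs = model.coef_
        # Normalize so the estimated abundances sum to one (applied post hoc here).
        total = coeffs.sum()
        return coeffs / total if total > 0 else coeffs
    ```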

  12. A novel stochastic optimization algorithm.

    PubMed

    Li, B; Jiang, W

    2000-01-01

    This paper presents a new stochastic approach SAGACIA based on proper integration of simulated annealing algorithm (SAA), genetic algorithm (GA), and chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA. It has the following features: (1) it is not the simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can be easily used to solve optimization problems either with continuous variables or with discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotical convergence. SAGACIA has been applied to solve such problems as scheduling, the training of artificial neural networks, and the optimizing of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742

  13. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with maximum coherence, built on an existing SAR imaging algorithm. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, the coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  14. An algorithm for generating abstract syntax trees

    NASA Technical Reports Server (NTRS)

    Noonan, R. E.

    1985-01-01

    The notion of an abstract syntax is discussed. An algorithm is presented for automatically deriving an abstract syntax directly from a BNF grammar. The implementation of this algorithm and its application to the grammar for Modula are discussed.

  15. Teaching Multiplication Algorithms from Other Cultures

    ERIC Educational Resources Information Center

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.
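
    As one concrete example of the kind of algorithm discussed, the Russian peasant method multiplies two integers by repeated halving and doubling; a short Python version is sketched below.

    ```python
    def russian_peasant_multiply(a, b):
        """Multiply two non-negative integers by repeated halving and doubling."""
        product = 0
        while a > 0:
            if a % 2 == 1:        # keep the doubled value when the halved side is odd
                product += b
            a //= 2               # halve one factor, discarding the remainder
            b *= 2                # double the other factor
        return product

    print(russian_peasant_multiply(18, 23))   # 414
    ```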

  16. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  17. Algorithms and Requirements for Measuring Network Bandwidth

    SciTech Connect

    Jin, Guojun

    2002-12-08

    This report presents new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thereby estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and addresses the conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The report also discusses a number of important terms and issues in network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.

  18. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  19. Algorithmic formulation of control problems in manipulation

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1975-01-01

    The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in realtime. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front or operator input end of the control algorithms.

  20. Time Variant Floating Mean Counting Algorithm

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  1. Efficient Algorithm for Rectangular Spiral Search

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Breckenridge, William

    2008-01-01

    An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
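
    The specific indexing scheme of the NASA algorithm is not reproduced here, but the following Python sketch illustrates the general idea of an expanding rectangular spiral that keeps only the current position and run length in memory and can continue indefinitely until a target is found.

    ```python
    import itertools

    def spiral_coordinates():
        """Yield (x, y) grid coordinates of an outward rectangular spiral from (0, 0).

        Only the current position and the current run length are retained; no list
        of previously visited points is needed, and the spiral grows without bound.
        """
        x, y = 0, 0
        yield x, y
        step = 1
        while True:
            for dx, dy in ((1, 0), (0, 1)):       # go right, then up, `step` cells each
                for _ in range(step):
                    x, y = x + dx, y + dy
                    yield x, y
            step += 1
            for dx, dy in ((-1, 0), (0, -1)):     # go left, then down, `step` cells each
                for _ in range(step):
                    x, y = x + dx, y + dy
                    yield x, y
            step += 1

    print(list(itertools.islice(spiral_coordinates(), 10)))
    ```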

  2. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver a human sensation as close as possible to that of a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides an optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned using a worst-case scenario method. This is based on trial and error and is affected by driving and programming experience, making it the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, production of false cues, and a simulator that fails to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason the production of motion cues and the impact of different parameters of classical washout filters on those cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  3. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Ackmece, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  4. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  5. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  6. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas, and mining association rules in distributed databases is a critical task. Algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on optimizing the data partition so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however; by the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm is an enhancement of the CD algorithm; however, CD and FDM algorithms are both based on a net structure and execute on non-shareable resources. In practical applications, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extension to parallel computation.

  7. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
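
    For reference, the original HITS iteration (without the trust-score or linkfarm-detection extensions proposed in the paper) can be sketched in a few lines of Python; the dense matrix formulation below is an illustrative simplification of what would normally be a sparse Web-graph computation.

    ```python
    import numpy as np

    def hits(adjacency, n_iters=100):
        """Basic HITS iteration: hub and authority scores from a link matrix.

        adjacency[i, j] = 1 if page i links to page j, else 0.
        """
        n = adjacency.shape[0]
        hubs = np.ones(n)
        auths = np.ones(n)
        for _ in range(n_iters):
            auths = adjacency.T @ hubs          # pages pointed to by good hubs
            auths /= np.linalg.norm(auths)
            hubs = adjacency @ auths            # pages pointing to good authorities
            hubs /= np.linalg.norm(hubs)
        return hubs, auths
    ```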

  8. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  9. Improvements to the stand and hit algorithm

    SciTech Connect

    Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.

    1994-12-31

    The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving towards the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point. As well, we present the undetected first rule for determining the hit constraints.

  10. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all the three benchmark problems used in this paper the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of the convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm, and as easy to use as the BP algorithm. It, therefore, can be applied, with better performance, to any situations where the standard online BP algorithm is applicable. PMID:17131658

  11. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Generic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  12. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  13. Color sorting algorithm based on K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, BaoFeng; Huang, Qian

    2009-11-01

    In raisin production a variety of color impurities occur, which need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for image pre-processing, and the gray-scale distribution characteristic of the raisin image was found. In order to obtain the chromatic aberration image and reduce disturbance, frame subtraction was performed, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors, mildew, spots and other external features, image characteristics were computed so as to fully reflect the quality differences between the raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; in accordance with these, the image data were divided into different categories, so that the categories of abnormal colors became distinct. Using this algorithm, raisins of abnormal color and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted grains was less than one eighth.
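
    A minimal sketch of the clustering step, assuming a generic three-channel image and scikit-learn's KMeans; the cluster count and the function name are illustrative assumptions, not the paper's tuned configuration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_pixel_colors(image, n_clusters=3):
        """Group image pixels by color with K-means, e.g. to separate normal raisins,
        abnormally colored raisins, and background.

        image : (H, W, 3) array of RGB (or other 3-channel) pixel values
        """
        pixels = image.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        labels = km.labels_.reshape(image.shape[:2])   # per-pixel cluster label map
        return labels, km.cluster_centers_
    ```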

  14. Parallelized Dilate Algorithm for Remote Sensing Image

    PubMed Central

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    The dilate algorithm is an important operation that gives a more connected view of a remote sensing image containing broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and data volumes have become very large. This leads to slower algorithm execution, or to failure to obtain a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
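
    A sketch of the per-tile operation that such a parallelization distributes, here using scipy.ndimage on a single tile; in a real domain decomposition each tile would need a halo of overlapping rows and columns so that dilation across tile borders is preserved. The function name and tile layout are illustrative assumptions, not the paper's MPI implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def dilate_tile(tile, size=3):
        """Binary dilation of one image tile.

        tile : 2D boolean array (a chunk of the thresholded remote sensing image)
        size : edge length of the square structuring element; a halo of width
               size // 2 around the tile is needed when tiles are stitched back.
        """
        structure = np.ones((size, size), dtype=bool)
        return ndimage.binary_dilation(tile, structure=structure)
    ```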

  15. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.

  16. Efficient demultiplexing algorithm for noncontiguous carriers

    NASA Technical Reports Server (NTRS)

    Thanawala, A. A.; Kwatra, S. C.; Jamali, M. M.; Budinger, J.

    1992-01-01

    A channel separation algorithm for the frequency division multiple access/time division multiplexing (FDMA/TDM) scheme is presented. It is shown that implementation using this algorithm can be more effective than the fast Fourier transform (FFT) algorithm when only a small number of carriers need to be selected from many, such as satellite Earth terminals. The algorithm is based on polyphase filtering followed by application of a generalized Walsh-Hadamard transform (GWHT). Comparison of the transform technique used in this algorithm with discrete Fourier transform (DFT) and FFT is given. Estimates of the computational rates and power requirements to implement this system are also given.
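
    For orientation, the standard fast Walsh-Hadamard transform (not the generalized GWHT used in the paper after polyphase filtering) can be computed with the butterfly structure sketched below in Python.

    ```python
    import numpy as np

    def fast_wht(x):
        """Fast Walsh-Hadamard transform of a length-2**k signal (natural ordering)."""
        x = np.asarray(x, dtype=float).copy()
        n = x.size
        h = 1
        while h < n:
            for start in range(0, n, 2 * h):
                for j in range(start, start + h):
                    a, b = x[j], x[j + h]
                    x[j], x[j + h] = a + b, a - b    # butterfly: sum and difference
            h *= 2
        return x

    print(fast_wht([1, 0, 0, 0]))   # [1. 1. 1. 1.]
    ```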

  17. Improved piecewise orthogonal signal correction algorithm.

    PubMed

    Feudale, Robert N; Tan, Huwei; Brown, Steven D

    2003-10-01

    Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It will be demonstrated how the mathematics of this updated OSC algorithm was derived from the previous version and why some OSC versions may not be as appropriate to use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746

  18. Is there a best hyperspectral detection algorithm?

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.

    2009-05-01

    A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions, does not necessarily translate to superiority in real-world applications.

  19. Filtering algorithm for dotted interferences

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.

    2011-09-01

    An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a particular challenge with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding, which makes such images rather useless for further direct evaluation. One approach to this problem of random effects would be to collect a vast number of single images, combine them appropriately and process them with common image filtering procedures. However, it has been shown that median filtering, for example, depending on the kernel size in the plane and/or the number of single shots combined, is either insufficient or tends to blur sharply lined structures, which makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies it would be far too tedious to treat each single projection in this way. Alternatively, it would be not only more convenient but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative, parameter-free filtering algorithm, suitable for batch processing, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.

  20. Wavelet Algorithms for Illumination Computations

    NASA Astrophysics Data System (ADS)

    Schroder, Peter

    One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
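
    As a small illustration of the basis underlying these hierarchical methods, one level of an (unnormalized) 1D Haar decomposition splits a signal into coarse averages and detail coefficients; the Python sketch below is generic and not tied to the radiosity solver described in the dissertation.

    ```python
    import numpy as np

    def haar_decompose(signal):
        """One level of an (unnormalized) Haar wavelet decomposition.

        Returns coarse averages and detail coefficients; applying the function
        recursively to the averages yields the full multiresolution hierarchy.
        """
        signal = np.asarray(signal, dtype=float)
        evens, odds = signal[0::2], signal[1::2]
        averages = (evens + odds) / 2.0
        details = (evens - odds) / 2.0
        return averages, details

    print(haar_decompose([4, 2, 5, 9]))   # (array([3., 7.]), array([ 1., -2.]))
    ```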

  1. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  2. Newman-Janis Algorithm Revisited

    NASA Astrophysics Data System (ADS)

    Brauer, O.; Camargo, H. A.; Socolovsky, M.

    2015-01-01

    The purpose of the present article is to show that the Newman-Janis and Newman et al. algorithms, used to derive the Kerr and Kerr-Newman metrics respectively, automatically lead to the extension of the initially non-negative polar radial coordinate r to a Cartesian coordinate running from -∞ to +∞, thus introducing in a natural way the region r < 0 in the above spacetimes. Using Boyer-Lindquist and ellipsoidal coordinates, we discuss some geometrical aspects of the positive and negative regions of r, such as horizons, ergosurfaces, and foliation structures.

  3. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to be run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light emitting diode as light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis was developed based on the intensity changes of the fundus reflex.

  4. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcrafts. Since they are very thin, their detection early enough so that the pilot has enough time to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both, at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme namely, Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning based approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However it is desirable to have a large number of training examples especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning

  5. Ordered subsets algorithms for transmission tomography.

    PubMed

    Erdogan, H; Fessler, J A

    1999-11-01

    The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested. PMID:10588288

  6. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
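
    For context, the quantity these parallel algorithms compute, the LRU stack distance, can be obtained serially with the simple (worst-case quadratic) Python sketch below; the SIMD/MIMD formulations of the paper are not reproduced here.

    ```python
    def lru_stack_distances(trace):
        """Serial computation of LRU stack distances for a reference trace.

        The stack distance of a reference is its depth in the LRU stack (number of
        distinct addresses touched since its previous use, inclusive); it is infinite
        on first use, and a reference hits in a fully associative LRU cache of size C
        iff its stack distance is <= C.
        """
        stack = []                      # most recently used address kept at the end
        distances = []
        for addr in trace:
            if addr in stack:
                pos = stack.index(addr)
                distances.append(len(stack) - pos)
                stack.pop(pos)
            else:
                distances.append(float('inf'))
            stack.append(addr)
        return distances

    print(lru_stack_distances(['a', 'b', 'a', 'c', 'b', 'a']))  # [inf, inf, 2, inf, 3, 3]
    ```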

  7. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithm on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four index integral transformation; and (3) the calculation of diatomic two electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are looked at on the Cray XMP and on the MIT static data flow machine proposed by Dennis.

  8. A compilation of jet finding algorithms

    SciTech Connect

    Flaugher, B.; Meier, K.

    1992-12-31

    Technical descriptions of jet finding algorithms currently in use in p{anti p} collider experiments (CDF, UA1, UA2), e{sup +}e{sup {minus}} experiments and Monte-Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analysis. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E{sub T} and P{sub T} of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five various incarnations of this approach have been described.

  9. A Synthesized Heuristic Task Scheduling Algorithm

    PubMed Central

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally the algorithm chooses tasks with fewer predecessors to schedule. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance. PMID:25254244

  10. Search properties of some sequential decoding algorithms.

    NASA Technical Reports Server (NTRS)

    Geist, J. M.

    1973-01-01

    Sequential decoding procedures are studied in the context of selecting a path through a tree. Several algorithms are considered, and their properties are compared. It is shown that the stack algorithm introduced by Zigangirov (1966) and by Jelinek (1969) is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different. A modified Fano algorithm is introduced, in which the quantizing parameter is eliminated. It can be inferred from limited simulation results that, at least in some applications, the new algorithm is computationally inferior to the old. However, it is of some theoretical interest since the conventional Fano algorithm may be considered to be a quantized version of it.

  11. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  12. The Aquarius Salinity Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David

    2012-01-01

    The first part of this presentation gives an overview over the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing only v-pol TB are used for this last step.

  13. Region processing algorithm for HSTAMIDS

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, Dominic K. C.

    2006-05-01

    The AN/PSS-14 (a.k.a. HSTAMIDS) has been tested for its performance in South East Asia, Thailand), South Africa (Namibia) and in November of 2005 in South West Asia (Afghanistan). The system has been proven effective in manual demining particularly in discriminating indigenous, metallic artifacts in the minefields. The Humanitarian Demining Research and Development (HD R&D) Program has sought to further improve the system to address specific needs in several areas. One particular area of these improvement efforts is the development of a mine detection/discrimination improvement software algorithm called Region Processing (RP). RP is an innovative technique in processing and is designed to work on a set of data acquired in a unique sweep pattern over a region-of-interest (ROI). The RP team is a joint effort consisting of three universities (University of Florida, University of Missouri, and Duke University), but is currently being led by the University of Florida. This paper describes the state-of-the-art Region Processing algorithm, its implementation into the current HSTAMIDS system, and its most recent test results.

  14. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to current techniques in both areas. We studied different ways of using importance sampling in the context of stochastic programming, by varying the choice of approximation functions used in this method, and concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.

  15. Digital Shaping Algorithms for GODDESS

    NASA Astrophysics Data System (ADS)

    Lonsdale, Sarah-Jane; Cizewski, Jolie; Ratkiewicz, Andrew; Pain, Steven

    2014-09-01

    Gammasphere-ORRUBA: Dual Detectors for Experimental Structure Studies (GODDESS) combines the highly segmented position-sensitive silicon strip detectors of ORRUBA with up to 110 Compton-suppressed HPGe detectors from Gammasphere, for high resolution for particle-gamma coincidence measurements. The signals from the silicon strip detectors have position-dependent rise times, and require different forms of pulse shaping for optimal position and energy resolutions. Traditionally, a compromise was achieved with a single shaping of the signals performed by conventional analog electronics. However, there are benefits to using digital acquisition of the detector signals, including the ability to apply multiple custom shaping algorithms to the same signal, each optimized for position and energy, in addition to providing a flexible triggering system, and a reduction in rate-limitation due to pile-up. Recent developments toward creating digital signal processing algorithms for GODDESS will be discussed. This work is supported in part by the U.S. D.O.E. and N.S.F.

  16. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  17. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. By comparing with other well known scoring functions, the results show that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. Artificial Neural Network (ANN) was also used in order to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101

  18. HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING

    EPA Science Inventory

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...

  19. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
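
    A hedged sketch of score-level fusion with PLSR using scikit-learn is shown below; the similarity scores and labels are synthetic stand-ins rather than the Face Recognition Grand Challenge data, and the number of latent components is an arbitrary choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Hypothetical data: similarity scores from 7 algorithms plus 1 human rating
# for 200 face pairs, and a 0/1 label for whether the pair shows the same person.
scores = rng.normal(size=(200, 8))
same = (scores @ rng.normal(size=8) + 0.5 * rng.normal(size=200) > 0).astype(float)

# PLSR learns a weighting of the individual scores; the fused score is its prediction.
pls = PLSRegression(n_components=3)
pls.fit(scores[:150], same[:150])
fused = pls.predict(scores[150:]).ravel()
accuracy = np.mean((fused > 0.5) == same[150:])
print(accuracy)
```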

  20. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  1. A Probabilistic Cell Tracking Algorithm

    NASA Astrophysics Data System (ADS)

    Steinacker, Reinhold; Mayer, Dieter; Leiding, Tina; Lexer, Annemarie; Umdasch, Sarah

    2013-04-01

    The research described below was carried out during the EU-Project Lolight - development of a low cost, novel and accurate lightning mapping and thunderstorm (supercell) tracking system. The Project aims to develop a small-scale tracking method to determine and nowcast characteristic trajectories and velocities of convective cells and cell complexes. The results of the algorithm will provide a higher accuracy than current locating systems distributed on a coarse scale. Input data for the developed algorithm are two temporally separated lightning density fields. Additionally, a Monte Carlo method minimizing a cost function is utilized, which leads to a probabilistic forecast for the movement of thunderstorm cells. In the first step the correlation coefficients between the first and the second density field are computed. Hence, the first field is shifted by all shifting vectors which are physically allowed. The maximum length of each vector is determined by the maximum possible speed of thunderstorm cells and the difference in time for both density fields. To eliminate ambiguities in determination of directions and velocities, the so called Random Walker of the Monte Carlo process is used. Using this method a grid point is selected at random. Moreover, one vector out of all predefined shifting vectors is suggested - also at random but with a probability that is related to the correlation coefficient. If this exchange of shifting vectors reduces the cost function, the new direction and velocity are accepted. Otherwise it is discarded. This process is repeated until the change of cost functions falls below a defined threshold. The Monte Carlo run gives information about the percentage of accepted shifting vectors for all grid points. In the course of the forecast, amplifications of cell density are permitted. For this purpose, intensity changes between the investigated areas of both density fields are taken into account. Knowing the direction and speed of thunderstorm
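
    A minimal sketch of the correlation-plus-random-walker idea is given below for a single, global shift; the paper estimates a field of shift vectors with a more elaborate cost function. The synthetic fields, the admissible shift range, and the squared-misfit cost are illustrative assumptions, not the project's data or cost definition.

```python
import numpy as np

def shift_field(field, dy, dx):
    """Shift a 2-D field by (dy, dx) grid points, padding with zeros."""
    out = np.zeros_like(field)
    h, w = field.shape
    out[max(dy, 0):min(h, h + dy), max(dx, 0):min(w, w + dx)] = \
        field[max(-dy, 0):min(h, h - dy), max(-dx, 0):min(w, w - dx)]
    return out

def correlation(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b) / denom

rng = np.random.default_rng(0)
# Two synthetic lightning-density fields: the second is the first moved by (2, 3) cells.
field1 = rng.random((40, 40)) * (rng.random((40, 40)) > 0.95)
field2 = shift_field(field1, 2, 3)

# Admissible shifts, limited by the maximum plausible cell speed.
shifts = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
corr = np.array([correlation(shift_field(field1, dy, dx), field2) for dy, dx in shifts])
prob = np.maximum(corr, 0)
prob = prob / prob.sum()                 # proposal probabilities derived from correlations

# Random-walker acceptance: propose shifts with correlation-weighted probability and
# keep a proposal only if it lowers the squared misfit (a stand-in cost function).
best = (0, 0)
cost = np.sum((field1 - field2) ** 2)
for _ in range(500):
    cand = shifts[rng.choice(len(shifts), p=prob)]
    c = np.sum((shift_field(field1, *cand) - field2) ** 2)
    if c < cost:
        best, cost = cand, c
print(best)                              # should recover the imposed shift (2, 3)
```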

  2. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718

  3. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080

  4. [Algorithm for treating preoperative anemia].

    PubMed

    Bisbe Vives, E; Basora Macaya, M

    2015-06-01

    Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduces the rate of transfusions and improves hemoglobin levels at discharge and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or if we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameter and glomerular filtration rate, we can decide whether to start the treatment with intravenous iron alone or erythropoietin with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin objective will depend on the type of surgery and the patient's characteristics. PMID:26320341

  5. [A simple algorithm for anemia].

    PubMed

    Egyed, Miklós

    2014-03-01

    The author presents a novel algorithm for anaemia based on the erythrocyte haemoglobin content. The scheme is based on the aberrations of erythropoiesis and not on the pathophysiology of anaemia. The hemoglobin content of one erythrocyte is between 28 and 35 picograms. Any disturbance in hemoglobin synthesis can lead to a lower than 28 picogram hemoglobin content of the erythrocyte, which will lead to hypochromic anaemia. In contrast, disturbances of nucleic acid metabolism will result in a hemoglobin content greater than 36 picograms, and this will result in hyperchromic anaemia. Normochromic anaemia, characterised by a hemoglobin content of erythrocytes between 28 and 35 picograms, is the result of alteration in the proliferation of erythropoiesis. Based on these three categories of anaemia, a unique system can be constructed, which can be used as a model for basic laboratory investigations and work-up of anaemic patients. PMID:24583558
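
    The three categories translate directly into a simple rule, sketched below as an illustrative helper. The handling of the boundary between 35 and 36 picograms is a choice made here, since the abstract leaves it open, and the function is not a substitute for the full work-up the author describes.

```python
def classify_anaemia(mch_pg):
    """Classify anaemia from the mean haemoglobin content of one erythrocyte (picograms),
    following the three categories described above (illustrative boundary handling)."""
    if mch_pg < 28:
        return "hypochromic (disturbed haemoglobin synthesis)"
    if mch_pg <= 35:
        return "normochromic (altered proliferation of erythropoiesis)"
    return "hyperchromic (disturbed nucleic acid metabolism)"

print(classify_anaemia(26.5))   # -> hypochromic
```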

  6. Measuring anomaly with algorithmic entropy

    NASA Astrophysics Data System (ADS)

    Solano, Wanda M.

    Anomaly detection refers to the identification of observations that are considered outside of normal. Since they are unknown to the system prior to training and rare, the anomaly detection problem is particularly challenging. Model based techniques require large quantities of existing data to build the model. Statistically based techniques result in the use of statistical metrics or thresholds for determining whether a particular observation is anomalous. I propose a novel approach to anomaly detection using wavelet based algorithmic entropy that does not require modeling or large amounts of data. My method embodies the concept of information distance that rests on the fact that data encodes information. This distance is large when little information is shared, and small when there is greater information sharing. I compare my approach with several techniques in the literature using data obtained from testing of NASA's Space Shuttle Main Engines (SSME).

  7. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python to VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible, and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.

  8. Improved Heat-Stress Algorithm

    NASA Technical Reports Server (NTRS)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

    NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
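
    For reference, the wet-bulb globe temperature itself is a fixed weighted sum of the three thermometer readings; a minimal sketch of the standard outdoor formula is below. The site-specific estimation of these inputs from routine weather measurements, which is the algorithm's actual contribution, is not shown.

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
    """Standard outdoor wet-bulb globe temperature (same units as the inputs).

    The improved algorithm described above estimates these three temperatures from
    routine atmospheric measurements; here they are assumed to be given directly.
    """
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

print(wbgt_outdoor(24.0, 45.0, 33.0))   # e.g. a hot, sunny afternoon
```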

  9. Virtual Crystals and Kleber's Algorithm

    NASA Astrophysics Data System (ADS)

    Okado, Masato; Schilling, Anne; Shimozono, Mark

    Kirillov and Reshetikhin conjectured what is now known as the fermionic formula for the decomposition of tensor products of certain finite dimensional modules over quantum affine algebras. This formula can also be extended to the case of q-deformations of tensor product multiplicities as recently conjectured by Hatayama et al. In its original formulation it is difficult to compute the fermionic formula efficiently. Kleber found an algorithm for the simply-laced algebras which overcomes this problem. We present a method which reduces all other cases to the simply-laced case using embeddings of affine algebras. This is the fermionic analogue of the virtual crystal construction by the authors, which is the realization of crystal graphs for arbitrary quantum affine algebras in terms of those of simply-laced type.

  10. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n) deterministically as well as randomly on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
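
    For orientation, the decoding task itself is easy to state sequentially; the sketch below decodes a prefix-coded message with a greedy left-to-right scan. The paper's contribution, the parallel reduction to finite-state automata simulation and prefix sums, is not reproduced here, and the example code table is hypothetical.

```python
def decode_prefix_coded(bits, code):
    """Decode a prefix-coded bit string sequentially (the task the parallel algorithm solves).

    `code` maps codewords to symbols; because the code is prefix-free, a greedy
    scan never needs to backtrack.
    """
    symbols, current = [], ""
    for b in bits:
        current += b
        if current in code:
            symbols.append(code[current])
            current = ""
    if current:
        raise ValueError("trailing bits do not form a complete codeword")
    return symbols

# Illustrative prefix code (not from the paper).
code = {"0": "a", "10": "b", "110": "c", "111": "d"}
print(decode_prefix_coded("01011101110", code))   # -> ['a', 'b', 'd', 'a', 'd', 'a']
```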

  11. Anaphora Resolution Algorithm for Sanskrit

    NASA Astrophysics Data System (ADS)

    Pralayankar, Pravin; Devi, Sobha Lalitha

    This paper presents an algorithm, which identifies different types of pronominal and its antecedents in Sanskrit, an Indo-European language. The computational grammar implemented here uses very familiar concepts such as clause, subject, object etc., which are identified with the help of morphological information and concepts such as precede and follow. It is well known that natural languages contain anaphoric expressions, gaps and elliptical constructions of various kinds and that understanding of natural languages involves assignment of interpretations to these elements. Therefore, it is only to be expected that natural language understanding systems must have the necessary mechanism to resolve the same. The method we adopt here for resolving the anaphors is by exploiting the morphological richness of the language. The system is giving encouraging results when tested with a small corpus.

  12. Novel MRC algorithms using GPGPU

    NASA Astrophysics Data System (ADS)

    Kato, Kokoro; Taniguchi, Yoshiyuki; Inoue, Tadao; Kadota, Kazuya

    2012-06-01

    GPGPU (General Purpose Graphic Processor Unit) has been attracting many engineers and scientists who develop their own software for massive numerical computation. With hundreds of core-processors and tens of thousands of threads operating concurrently, GPGPU programs can run significantly fast if their software architecture is well optimized. The basic program model used in GPGPU is SIMD (Single Instruction Multiple Data stream), and one must adapt one's programming model to SIMD. However, conditional branching is fundamentally not allowed in SIMD, and this limitation makes it quite challenging to apply GPGPU to photomask-related software such as MDP or MRC. In this paper unique methods are proposed to utilize GPUs for MRC operation. We explain novel algorithms of mask layout verification by GPGPU.

  13. Advanced spectral signature discrimination algorithm

    NASA Astrophysics Data System (ADS)

    Chakravarty, Sumit; Cao, Wenjie; Samat, Alim

    2013-05-01

    This paper presents a novel approach to the task of hyperspectral signature analysis. Hyperspectral signature analysis has been studied extensively in the literature, and many different algorithms have been developed which endeavor to discriminate between hyperspectral signatures. There are many approaches for performing the task of hyperspectral signature analysis. Binary coding approaches like SPAM and SFBC use basic statistical thresholding operations to binarize a signature, which is then compared using Hamming distance. This framework has been extended to techniques like SDFC wherein a set of primitive structures is used to characterize local variations in a signature together with overall statistical measures like the mean. As we see, such structures harness only local variations and do not exploit any covariation of spectrally distinct parts of the signature. The approach of this research is to harvest such information by the use of a technique similar to circular convolution. In the approach we consider the signature as cyclic by appending its two ends. We then create two copies of the spectral signature. These three signatures can be placed next to each other like the rotating discs of a combination lock. We then find local structures at different circular shifts between the three cyclic spectral signatures. Texture features like in SDFC can be used to study the local structural variation for each circular shift. We can then create different measures by building histograms from the shifts and thereafter using different techniques for information extraction from the histograms. Depending on the technique used, different variants of the proposed algorithm are obtained. Experiments using the proposed technique show the viability of the proposed methods and their performances as compared to current binary signature coding techniques.

  14. SLAP lesions: a treatment algorithm.

    PubMed

    Brockmeyer, Matthias; Tompkins, Marc; Kohn, Dieter M; Lorbach, Olaf

    2016-02-01

    Tears of the superior labrum involving the biceps anchor are a common entity, especially in athletes, and may highly impair shoulder function. If conservative treatment fails, successful arthroscopic repair of symptomatic SLAP lesions has been described in the literature particularly for young athletes. However, the results in throwing athletes are less successful with a significant amount of patients who will not regain their pre-injury level of performance. The clinical results of SLAP repairs in middle-aged and older patients are mixed, with worse results and higher revision rates as compared to younger patients. In this population, tenotomy or tenodesis of the biceps tendon is a viable alternative to SLAP repairs in order to improve clinical outcomes. The present article introduces a treatment algorithm for SLAP lesions based upon the recent literature as well as the authors' clinical experience. The type of lesion, age of patient, concomitant lesions, and functional requirements, as well as sport activity level of the patient, need to be considered. Moreover, normal variations and degenerative changes in the SLAP complex have to be distinguished from "true" SLAP lesions in order to improve results and avoid overtreatment. The suggestion for a treatment algorithm includes: type I: conservative treatment or arthroscopic debridement, type II: SLAP repair or biceps tenotomy/tenodesis, type III: resection of the unstable bucket-handle tear, type IV: SLAP repair (biceps tenotomy/tenodesis if >50 % of biceps tendon is affected), type V: Bankart repair and SLAP repair, type VI: resection of the flap and SLAP repair, and type VII: refixation of the anterosuperior labrum and SLAP repair. PMID:26818554

  15. Consensus algorithms in decentralized networks

    NASA Astrophysics Data System (ADS)

    Coduti, Leonardo Phillip

    We consider a decentralized network with the following goal: the state at each node of the network iteratively converges to the same value. Ensuring that this goal is achieved requires certain properties of the topology of the network and the function describing the evolution of the network. We will present these properties for deterministic systems, extending current results in the literature. As an additional contribution, we will show how the convergence results for stochastic systems are direct consequences of the corresponding deterministic systems, drastically simplifying many other current results. In general, these consensus systems can be both time invariant and time varying, and we will extend all our deterministic and stochastic results to include time varying systems as well. We will then consider a more complex consensus problem, the resource allocation problem. In this situation each node of the network has both a state and a capacity. The capacity is a monotone increasing function of the state, and the goal is for the nodes to exchange capacity in a decentralized manner in order to drive all of the states to the same value. Conditions ensuring consensus in the deterministic setting will be presented, and we will show how convergence in this system also comes from the fundamental deterministic result for consensus algorithms. The main results will again be extended to stochastic and time varying systems. The linear consensus system requires the construction of a matrix of weighting parameters with specific properties. We present an iterative algorithm for determining the weighting parameters in a decentralized fashion; the weighting parameters are specified by the nodes and each node only specifies the weighting parameters associated with that node. The results assume that the communication graph of the network is directed, and we consider both synchronous communication, and stochastic asynchronous networks.
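
    The basic linear consensus iteration underlying these results can be sketched in a few lines. The ring topology and the uniform weights below are illustrative choices, not the decentralized weight-selection algorithm described in the abstract.

```python
import numpy as np

# A small undirected ring of 5 nodes; W is row-stochastic with self-loops so that
# repeated averaging drives every state toward the same value (illustrative weights).
n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0

x = np.array([3.0, -1.0, 4.0, 0.5, 2.0])   # initial node states
for _ in range(100):
    x = W @ x                               # each node averages itself with its neighbours
print(x)                                    # all entries approach the average of the initial states
```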

  16. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.

  17. On mapping systolic algorithms onto the hypercube

    SciTech Connect

    Ibarra, O.H.; Sohn, S.M. )

    1990-01-01

    Much effort has been devoted toward developing efficient algorithms for systolic arrays. Here the authors consider the problem of mapping these algorithms into efficient algorithms for a fixed-size hypercube architecture. They describe in detail several optimal implementations of algorithms given for one-way one and two-dimensional systolic arrays. Since interprocessor communication is many times slower than local computation in parallel computers built to date, the problem of efficient communication is specifically addressed for these mappings. In order to experimentally validate the technique, five systolic algorithms were mapped in various ways onto a 64-node NCUBE/7 MIMD hypercube machine. The algorithms are for the following problems: the shuffle scheduling problem, finite impulse response filtering, linear context-free language recognition, matrix multiplication, and computing the Boolean transitive closure. Experimental evidence indicates that good performance is obtained for the mappings.

  18. An improved Camshift algorithm for target recognition

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Mao, Yusu

    2015-12-01

    The Camshift algorithm and the three frame difference algorithm are popular target recognition and tracking methods. The Camshift algorithm requires manual initialization of the search window, which introduces subjective error and inconsistency, and the color histogram is calculated only at initialization, so the color probability model cannot be updated continuously. On the other hand, the three frame difference method does not require manual initialization of a search window and can make full use of the motion information of the target, but only to determine the range of motion. It is unable to determine the contours of the object and cannot make use of the color information of the target object. Therefore, an improved Camshift algorithm is proposed to overcome the disadvantages of the original algorithm: the three frame difference operation is combined with the object's motion information and color information to identify the target object. The improved Camshift algorithm is realized and shows better performance in the recognition and tracking of the target.

  19. ENAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Yang, Zhen-wen; Shen, Tian-shuang; Chen, Bo

    2012-11-01

    Images of objects acquired by space-based systems working in the atmospheric turbulence environment, such as those used in astronomy, remote sensing and so on, are inevitably degraded. The observed images are seriously blurred, and restoration is required to reconstruct the turbulence-degraded images. In order to enhance the performance of image restoration, a novel enhanced nonnegativity and support constraints recursive inverse filtering (ENAS-RIF) algorithm was presented, which was based on a reliable support region and an enhanced cost function. Firstly, the Curvelet denoising algorithm was used to weaken image noise. Secondly, reliable object support region estimation was used to accelerate the algorithm convergence. Then, the average gray level was set as the gray level of the image background pixels. Finally, an object construction limit and the logarithm function were added to enhance algorithm stability. The experimental results prove that the convergence speed of the novel ENAS-RIF algorithm is faster than that of the NAS-RIF algorithm and that it is better in image restoration.

  20. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases with more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  1. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.

  2. Spatial search algorithms on Hanoi networks

    NASA Astrophysics Data System (ADS)

    Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan

    2013-01-01

    We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role to speed up the searching algorithm. We can improve the algorithm's efficiency by choosing a non-Groverian coin if we do not implement Tulsi's method. Our conclusions are based on numerical implementations.

  3. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases with more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  4. Orbital objects detection algorithm using faint streaks

    NASA Astrophysics Data System (ADS)

    Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya

    2016-02-01

    This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities from optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise-ratio (e.g., 3 or more) is required, whereas in our proposed algorithm, the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
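
    A minimal sketch of the central trick is shown below: summing pixel values along a candidate streak direction so that a streak whose per-pixel signal is below the noise level becomes detectable. The skewing here is a simple per-row roll for one assumed slope, whereas the actual algorithm scans directions and links detections across a time series of images.

```python
import numpy as np

def skew_and_compress(image, shift_per_row):
    """Skew each row by a candidate streak slope, then sum along the vertical axis.

    A streak matching the assumed slope lines up into one column, so its summed
    signal grows linearly with streak length while the background noise grows
    only with the square root of the number of rows.
    """
    skewed = np.zeros_like(image)
    for r in range(image.shape[0]):
        skewed[r] = np.roll(image[r], -int(round(r * shift_per_row)))
    return skewed.sum(axis=0)

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, size=(200, 200))           # background noise
rows = np.arange(60, 160)
cols = (50 + 0.5 * (rows - 60)).astype(int)              # a faint streak, slope 0.5 px/row
image[rows, cols] += 0.8                                  # per-pixel signal below the noise

profile = skew_and_compress(image, 0.5)
print(int(np.argmax(profile)), float(profile.max()))      # peak near the streak's skewed column
```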

  5. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
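
    A hedged sketch of a Lorentzian matching pursuit follows: in each iteration the best-fitting Lorentzian atom (center, width, and least-squares amplitude) is subtracted from the residual. The dictionary grid, the synthetic two-peak signal, and the stopping rule (a fixed number of peaks) are simplifications made here, not the published LPMP implementation.

```python
import numpy as np

def lorentzian(x, center, width):
    return (width / np.pi) / ((x - center) ** 2 + width ** 2)

def lorentzian_matching_pursuit(x, signal, n_peaks, centers, widths):
    """Greedy pursuit over a dictionary of Lorentzian shapes (a sketch of the idea)."""
    residual = signal.copy()
    peaks = []
    for _ in range(n_peaks):
        best = None
        for c in centers:
            for w in widths:
                atom = lorentzian(x, c, w)
                amp = (residual @ atom) / (atom @ atom)    # least-squares amplitude
                err = np.sum((residual - amp * atom) ** 2)
                if best is None or err < best[0]:
                    best = (err, c, w, amp)
        _, c, w, amp = best
        residual -= amp * lorentzian(x, c, w)
        peaks.append((c, w, amp))
    return peaks

# Synthetic spectrum: two Lorentzian peaks plus noise (illustrative, not NMR data).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 1000)
signal = 3.0 * lorentzian(x, 2.5, 0.1) + 1.5 * lorentzian(x, 7.0, 0.2)
signal += 0.05 * rng.normal(size=x.size)

found = lorentzian_matching_pursuit(x, signal, 2,
                                    centers=np.linspace(0, 10, 201),
                                    widths=[0.05, 0.1, 0.2, 0.4])
print(found)   # recovered (center, width, amplitude) triples
```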

  6. Voronoi particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Luu, Phuc T.; Tückmantel, T.; Pukhov, A.

    2016-05-01

    We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, which consist of only particles that are in close proximity in the phase space to each other. We show the performance of our algorithm in the case of the two-stream instability and the magnetic shower.

  7. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  8. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
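
    A minimal sketch of block-wise Chebyshev compression with NumPy is given below: each fitting interval of 64 samples is reduced to 8 Chebyshev coefficients (a factor-of-eight compression) and reconstructed by evaluating the series. The block length and polynomial degree are illustrative choices, not the flight parameters.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree):
    """Fit a Chebyshev series of the given degree to one fitting interval."""
    t = np.linspace(-1.0, 1.0, len(block))       # map the fitting interval onto [-1, 1]
    return C.chebfit(t, block, degree)

def decompress_block(coeffs, n_samples):
    t = np.linspace(-1.0, 1.0, n_samples)
    return C.chebval(t, coeffs)

# Illustrative telemetry-like time series, compressed block by block.
rng = np.random.default_rng(0)
n, block_len, degree = 1024, 64, 7               # 64 samples -> 8 coefficients per block
series = np.sin(np.linspace(0, 20, n)) + 0.01 * rng.normal(size=n)

coeff_blocks = [compress_block(series[i:i + block_len], degree)
                for i in range(0, n, block_len)]
reconstructed = np.concatenate([decompress_block(c, block_len) for c in coeff_blocks])
print(np.max(np.abs(series - reconstructed)))    # small, nearly uniform error per block
```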

  9. Evolutionary Algorithm for Optimal Vaccination Scheme

    NASA Astrophysics Data System (ADS)

    Parousis-Orthodoxou, K. J.; Vlachos, D. S.

    2014-03-01

    The following work uses the dynamic capabilities of an evolutionary algorithm in order to obtain an optimal immunization strategy in a user specified network. The produced algorithm uses a basic genetic algorithm with crossover and mutation techniques, in order to locate certain nodes in the inputted network. These nodes will be immunized in an SIR epidemic spreading process, and the performance of each immunization scheme, will be evaluated by the level of containment that provides for the spreading of the disease.

  10. Sequential and Parallel Algorithms for Spherical Interpolation

    NASA Astrophysics Data System (ADS)

    De Rossi, Alessandra

    2007-09-01

    Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and the Inverse Distance Weighted method. Several numerical results show accuracy and efficiency of the algorithms.
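
    Of the two methods, Inverse Distance Weighting is the simpler to sketch. The version below weights all data points by inverse powers of great-circle distance (a global rather than local variant, for brevity) and uses an illustrative test function on synthetic scattered points.

```python
import numpy as np

def great_circle_distance(lon1, lat1, lon2, lat2):
    """Angular distance between points on the unit sphere (inputs in radians)."""
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2),
        -1.0, 1.0))

def idw_on_sphere(lon, lat, data_lon, data_lat, data_val, power=2.0):
    """Inverse Distance Weighted interpolation on the sphere (global variant, a sketch)."""
    d = great_circle_distance(lon, lat, data_lon, data_lat)
    if np.any(d < 1e-12):
        return data_val[np.argmin(d)]            # query coincides with a data point
    w = 1.0 / d ** power
    return np.sum(w * data_val) / np.sum(w)

# Scattered data on the sphere (illustrative): values of a smooth test function.
rng = np.random.default_rng(0)
data_lon = rng.uniform(-np.pi, np.pi, 500)
data_lat = np.arcsin(rng.uniform(-1, 1, 500))    # uniform sampling on the sphere
data_val = np.cos(data_lat) * np.sin(2 * data_lon)

print(idw_on_sphere(0.3, 0.1, data_lon, data_lat, data_val),
      np.cos(0.1) * np.sin(0.6))                 # interpolated vs exact value
```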

  11. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates the tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of the tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI equipped with a SONY a5100 was used to acquire images over an area from central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those obtained with most parameter sets of the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.

  12. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  13. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  14. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  15. Algorithm to search for genomic rearrangements

    NASA Astrophysics Data System (ADS)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2013-10-01

    The aim of this article is to discuss the issue of comparing nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements have been described: the Smith-Waterman algorithm, as well as a new method of searching for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program in client-server architecture was developed. The algorithms' properties were examined on the Escherichia coli and Arabidopsis thaliana genomes, and they are prepared for comparing the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.
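
    The Knuth-Morris-Pratt building block is standard and easy to show; the sketch below finds all occurrences of a marker-like pattern in a sequence. How markers are chosen and how matches are combined into rearrangement calls, which is the paper's actual method, is not shown, and the example sequences are made up.

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt search returning all start positions of `pattern` in `text`."""
    # Failure function: length of the longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, never moving backwards in it.
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("ACGTACGATTACGAT", "ACGAT"))   # marker occurrences at positions 4 and 10
```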

  16. Java implementation of Class Association Rule algorithms

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms, NETCAR, CARapriori, and clustering based rule mining. NETCAR algorithm is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper: UCRL-JRNL-232466-DRAFT, and would be published in a peer review scientific journal. The software is used to extract combinations of genes relevant with a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profiles is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software will be in genome analysis, however, it could be applied more generally.

  17. A Unifying Multibody Dynamics Algorithm Development Workbench

    NASA Technical Reports Server (NTRS)

    Ziegler, John L.

    2005-01-01

    The development of new and efficient algorithms for multibody dynamics has been an important research area. These algorithms are used for modeling, simulation, and control of systems such as spacecraft, robotic systems, automotive applications, the human body, manufacturing operations, and micro-electromechanical systems (MEMS). At JPL's Dynamics and Real Time Simulation (DARTS) Laboratory we have developed software that serves as a computational workbench for these algorithms. This software utilizes the mathematical perspective of the spatial operator algebra, which allows the development of dynamics algorithms and new insights into multibody dynamics.

  18. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834

  19. Unifying parametrized VLSI Jacobi algorithms and architectures

    NASA Astrophysics Data System (ADS)

    Deprettere, Ed F. A.; Moonen, Marc

    1993-11-01

    Implementing Jacobi algorithms in parallel VLSI processor arrays is a non-trivial task, in particular when the algorithms are parametrized with respect to size and the architectures are parametrized with respect to space-time trade-offs. The paper is concerned with an approach to implement several time-adaptive Jacobi-type algorithms on a parallel processor array, using only Cordic arithmetic and asynchronous communications, such that any degree of parallelism, ranging from single-processor up to full-size array implementation, is supported by a `universal' processing unit. This result is attributed to a gracious interplay between algorithmic and architectural engineering.

  20. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  1. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  2. Java implementation of Class Association Rule algorithms

    SciTech Connect

    Tamura, Makio

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms, NETCAR, CARapriori, and clustering based rule mining. NETCAR algorithm is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper: UCRL-JRNL-232466-DRAFT, and would be published in a peer review scientific journal. The software is used to extract combinations of genes relevant with a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profiles is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software will be in genome analysis, however, it could be applied more generally.

  3. Practical algorithmic probability: an image inpainting example

    NASA Astrophysics Data System (ADS)

    Potapov, Alexey; Scherbakov, Oleg; Zhdanov, Innokentii

    2013-12-01

    The possibility of practical application of algorithmic probability is analyzed on an example of the image inpainting problem, which precisely corresponds to the prediction problem. Such consideration is fruitful both for the theory of universal prediction and for practical image inpainting methods. Efficient application of algorithmic probability implies that its computation is essentially optimized for some specific data representation. In this paper, we considered one image representation, namely the spectral representation, for which an image inpainting algorithm is proposed based on the spectrum entropy criterion. This algorithm showed promising results in spite of very simple representation. The same approach can be used for introducing ALP-based criteria for more powerful image representations.
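
    A hedged sketch of a spectrum-entropy criterion follows: the Shannon entropy of the normalized power spectrum of a patch, which is lower for smooth, predictable content than for noise. How candidate fills are generated and compared in the paper is not reproduced here.

```python
import numpy as np

def spectral_entropy(patch):
    """Shannon entropy of the normalized power spectrum of an image patch.

    An inpainting criterion along these lines would prefer the candidate fill whose
    spectrum is most concentrated (lowest entropy); details differ from the paper.
    """
    spectrum = np.abs(np.fft.fft2(patch)) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A smooth patch has a more concentrated spectrum (lower entropy) than a noisy one.
x = np.linspace(0, 2 * np.pi, 32)
smooth = np.outer(np.sin(x), np.cos(x))
noisy = np.random.default_rng(0).normal(size=(32, 32))
print(spectral_entropy(smooth), spectral_entropy(noisy))
```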

  4. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
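
    For context, the baseline GS iteration that both proposed variants modify alternates between the object and Fourier domains, imposing the measured amplitude in each. A minimal sketch on synthetic amplitudes is shown below; the spatial phase perturbation and HIO steps of the paper are not included.

```python
import numpy as np

def gerchberg_saxton(object_amp, fourier_amp, n_iter=200, rng=None):
    """Classical GS iteration recovering a phase consistent with two measured amplitudes."""
    rng = rng or np.random.default_rng(0)
    field = object_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, object_amp.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = fourier_amp * np.exp(1j * np.angle(F))           # impose the Fourier-domain amplitude
        field = np.fft.ifft2(F)
        field = object_amp * np.exp(1j * np.angle(field))    # impose the object-domain amplitude
    return np.angle(field)

# Synthetic test: build a complex field, keep only its two amplitudes, retrieve a phase.
rng = np.random.default_rng(1)
true_field = rng.random((64, 64)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
phase = gerchberg_saxton(np.abs(true_field), np.abs(np.fft.fft2(true_field)))
print(phase.shape)
```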

  5. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320

  6. The annealing robust backpropagation (ARBP) learning algorithm.

    PubMed

    Chuang, C C; Su, S F; Hsiao, C C

    2000-01-01

    Multilayer feedforward neural networks are often referred to as universal approximators. Nevertheless, if the training data are corrupted by large noise, such as outliers, traditional backpropagation learning schemes may not always achieve acceptable performance. Even though various robust learning algorithms have been proposed in the literature, those approaches still suffer from an initialization problem. In those robust learning algorithms, the so-called M-estimator is employed, whose loss function plays the role of discriminating outliers from the majority of the data by degrading their effects in learning. However, the loss function used in those algorithms may not correctly discriminate against outliers. In this paper, the annealing robust backpropagation (ARBP) learning algorithm, which adopts the annealing concept into robust learning, is proposed to deal with modeling in the presence of outliers. The proposed algorithm has been employed in various examples, and the results demonstrate its superiority over other robust learning algorithms regardless of the outliers. In addition to adopting the annealing concept into robust learning, the annealing schedule k/t was found experimentally to achieve the best performance among the schedules tested, where k is a constant and t is the epoch number. PMID:18249835
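
    As an illustration of the annealing idea only (not the paper's network or exact loss function), the sketch below anneals the scale parameter of a Cauchy-type M-estimator with the schedule k/t while fitting a linear model by gradient descent; the function and constant names are hypothetical.

      import numpy as np

      def annealed_robust_fit(X, y, k=10.0, epochs=200, lr=0.01):
          """Gradient descent on a Cauchy-type robust loss whose scale parameter
          beta is annealed as beta = k / t (t = epoch number), so large residuals
          (outliers) are progressively down-weighted as training proceeds."""
          w = np.zeros(X.shape[1])
          for t in range(1, epochs + 1):
              beta = k / t                      # annealing schedule k/t
              e = X @ w - y                     # residuals
              # derivative of beta*log(1 + e^2/beta) w.r.t. e is 2e / (1 + e^2/beta)
              grad = X.T @ (2.0 * e / (1.0 + e**2 / beta)) / len(y)
              w -= lr * grad
          return w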

  7. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  8. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different from software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is largely similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect the common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.

  9. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

    The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.

  10. Generation of attributes for learning algorithms

    SciTech Connect

    Hu, Yuh-Jyh; Kibler, D.

    1996-12-31

    Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results that demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.

  11. A Support Vector Machine Blind Equalization Algorithm Based on Immune Clone Algorithm

    NASA Astrophysics Data System (ADS)

    Yecai, Guo; Rui, Ding

    To address the effect of the parameter selection method of the support vector machine (SVM) on its application to blind equalization, an SVM constant modulus blind equalization algorithm based on the immune clone selection algorithm (CSA-SVM-CMA) is proposed. In the proposed algorithm, the immune clone algorithm is used to optimize the parameters of the SVM, exploiting its advantages of avoiding premature convergence, escaping local optima, and converging quickly. The proposed algorithm improves the parameter selection efficiency of the SVM constant modulus blind equalization algorithm (SVM-CMA) and overcomes the drawback of manually set parameters. Accordingly, the CSA-SVM-CMA has a faster convergence rate and smaller mean square error than the SVM-CMA. Computer simulations in underwater acoustic channels have proved the validity of the algorithm.

  12. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user and classify the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  13. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user and classify the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  14. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  15. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre and post processing to the reconstructed data sets.
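
    For context, a minimal sketch of the multiplicative MART update used as the comparison baseline is shown below, written for a generic dense system matrix A and projection data b; the MLOS-specific construction and the optimized parameters from the abstract are not reproduced.

      import numpy as np

      def mart(A, b, n_iter=20, relaxation=1.0, eps=1e-12):
          """Multiplicative ART: each voxel is corrected by the ratio of measured
          to re-projected intensity, raised to a power weighted by the system matrix."""
          x = np.ones(A.shape[1])  # MART needs a strictly positive initial guess
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  proj = A[i] @ x
                  if proj > eps and b[i] > eps:
                      x *= (b[i] / proj) ** (relaxation * A[i])
          return x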

  16. Formation Algorithms and Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward

    2004-01-01

    Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance the technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing, and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.

  17. The algorithmic origins of life

    PubMed Central

    Walker, Sara Imari; Davies, Paul C. W.

    2013-01-01

    Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and, in particular, that top-down (or downward) causation—where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies—may be a major contributor to the hierarchical structure of living systems. Here, we propose that the emergence of life may correspond to a physical transition associated with a shift in the causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems. PMID:23235265

  18. Multivariate Spline Algorithms for CAGD

    NASA Technical Reports Server (NTRS)

    Boehm, W.

    1985-01-01

    Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, with s = 1, whereas splines over a regular triangular grid are box splines, with s = 2. Two simple facts underpin the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact leads to a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline can be expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points when an additional knot is inserted.

  19. Macroparticle merging algorithm for PIC

    NASA Astrophysics Data System (ADS)

    Vranic, Marija; Grismayer, Thomas; Martins, Joana L.; Fonseca, Ricardo A.; Silva, Luis O.

    2014-10-01

    With the development of large supercomputers (>1,000,000 cores), the complexity of the problems we are able to simulate with particle-in-cell (PIC) codes has increased substantially. However, localized density spikes can introduce load imbalance where a small fraction of cores is occupied while the others remain idle. An additional challenge lies in self-consistent modeling of QED effects at ultra-high laser intensities (I > 10^23 W/cm^2), where the number of pairs produced sometimes grows exponentially and may exceed the maximum number of particles each processor can handle. We can overcome this by resampling the 6D phase space: the macroparticles can be merged into fewer particles with higher particle weights. The existing merging scheme preserves the total charge, but not the particle distribution. Here we present a novel particle-merging algorithm that preserves the energy, momentum and charge locally and thereby minimizes the potential influence on the relevant physics. Through examples from classical plasma physics and more extreme scenarios, we show that the physics is not altered while we obtain an immense increase in performance.
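
    As a greatly simplified illustration of the resampling idea only (the paper's algorithm also conserves energy locally, by merging into pairs of particles rather than one), the sketch below merges a group of macroparticles into a single particle that conserves total weight (charge) and momentum; the function and variable names are illustrative.

      import numpy as np

      def merge_to_one(weights, positions, momenta):
          """Collapse a group of macroparticles into one particle conserving total
          weight (charge) and total momentum; kinetic energy is generally NOT
          conserved, which is why the paper merges into at least two particles."""
          w = np.sum(weights)
          x = np.average(positions, axis=0, weights=weights)  # weight-averaged position
          p = np.average(momenta, axis=0, weights=weights)    # conserves sum(w_i * p_i)
          return w, x, p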

  20. Reliability measure for segmenting algorithms

    NASA Astrophysics Data System (ADS)

    Alvarez, Robert E.

    2004-05-01

    Segmenting is a key initial step in many computer-aided detection (CAD) systems. Our purpose is to develop a method to estimate the reliability of segmenting algorithm results. We use a statistical shape model computed using principal component analysis. The model retains a small number of eigenvectors, or modes, that represent a large fraction of the variance. The residuals between the segmenting result and its projection into the space of retained modes are computed. The sum of the squares of the residuals is transformed to a zero-mean, unit-standard-deviation Gaussian random variable. We also use the standardized scale parameter. The reliability measure is the probability that the transformed residuals and scale parameter are greater than the absolute value of the observed values. We tested the reliability measure with thirty chest x-ray images using leave-one-out testing. The Gaussian assumption was verified using normal probability plots. For each image, a statistical shape model was computed from the hand-digitized data of the rest of the images in the training set. The residuals and scale parameter from the automated segmentation results for the image were used to compute the reliability measure in each case. The reliability measure was significantly lower for two images in the training set with unusual lung fields or processing errors. The data and Matlab scripts for reproducing the figures are at http://www.aprendtech.com/papers/relmsr.zip. Errors detected by the new reliability measure can be used to adjust processing or warn the user.
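
    A minimal sketch of the residual computation described above is shown below, assuming a training matrix with one flattened, hand-digitized contour per row; the scale-parameter term and the exact probability transformation used in the paper are omitted, and the names are illustrative.

      import numpy as np

      def pca_residual(train, candidate, n_modes=5):
          """Project a candidate segmentation onto the retained PCA modes of the
          training shapes and return the sum of squared residuals; a large value
          suggests an unreliable (atypical) segmentation."""
          mean = train.mean(axis=0)
          # Eigenvectors of the training covariance via SVD of the centered data.
          _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
          modes = vt[:n_modes]                      # retained shape modes
          d = candidate - mean
          recon = modes.T @ (modes @ d)             # projection onto the mode subspace
          return float(np.sum((d - recon) ** 2))    # residual sum of squares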

  1. Advanced algorithms for information science

    SciTech Connect

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

  2. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm which minimizes an energy function by calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, so it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two-pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
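
    A minimal sketch of the Barzilai-Borwein step-size rule at the heart of GPBB is shown below for a generic smooth objective with a nonnegativity projection; the CBCT system model, the sparsity term, and the stopping criteria of the actual GPBB algorithm are not included, and the function names are illustrative.

      import numpy as np

      def projected_bb_descent(grad, x0, n_iter=100, alpha0=1e-3):
          """Gradient projection with Barzilai-Borwein step sizes:
          alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
          x_prev, g_prev = x0, grad(x0)
          x = np.maximum(x_prev - alpha0 * g_prev, 0.0)   # project onto x >= 0
          for _ in range(n_iter):
              g = grad(x)
              s, y = x - x_prev, g - g_prev
              denom = float(s @ y)
              alpha = float(s @ s) / denom if denom > 0 else alpha0
              x_prev, g_prev = x, g
              x = np.maximum(x - alpha * g, 0.0)
          return x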

  3. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.

  4. Gaining Algorithmic Insight through Simplifying Constraints.

    ERIC Educational Resources Information Center

    Ginat, David

    2002-01-01

    Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic simplifying constraints which involves simplification of a given problem to a problem in which constraints are imposed on the input data. Presents three examples involving…

  5. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to reliably achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects from this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  6. A Runge-Kutta Nystrom algorithm.

    NASA Technical Reports Server (NTRS)

    Bettis, D. G.

    1973-01-01

    A Runge-Kutta algorithm of order five is presented for the solution of the initial value problem where the system of ordinary differential equations is of second order and does not contain the first derivative. The algorithm includes the Fehlberg step control procedure.
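
    For orientation, the general form of an s-stage Runge-Kutta-Nystrom step for y'' = f(t, y) (no first derivative on the right-hand side) is recalled below; the specific order-five coefficients and the Fehlberg step-control details are given in the paper:

      k_i = f\Bigl(t_n + c_i h,\; y_n + c_i h\,y'_n + h^2 \textstyle\sum_{j<i} a_{ij} k_j\Bigr), \qquad
      y_{n+1} = y_n + h\,y'_n + h^2 \textstyle\sum_i \bar b_i k_i, \qquad
      y'_{n+1} = y'_n + h \textstyle\sum_i b_i k_i .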

  7. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite-dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form dx/dt = F(x(t)), x(0) = p, where p is an element of G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R^N, then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for the algorithm to yield an rth-order numerical integrator, and to analyze the resulting algorithms.

  8. The Porter Stemming Algorithm: Then and Now

    ERIC Educational Resources Information Center

    Willett, Peter

    2006-01-01

    Purpose: In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach: Review of literature and research involving use…
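
    The algorithm is available in standard libraries; for instance, assuming the NLTK package is installed, its behaviour can be checked with a few lines of Python.

      from nltk.stem import PorterStemmer

      stemmer = PorterStemmer()
      for word in ["connection", "connected", "connecting", "relational", "generalization"]:
          # e.g. "connection" -> "connect", "generalization" -> "gener"
          print(word, "->", stemmer.stem(word))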

  9. Pitch-Learning Algorithm For Speech Encoders

    NASA Technical Reports Server (NTRS)

    Bhaskar, B. R. Udaya

    1988-01-01

    Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.

  10. Kalman plus weights: a time scale algorithm

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2001-01-01

    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
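
    A minimal sketch of the basic time scale equation step described above is given below: each clock's deviation from its prediction is combined with weights inversely proportional to its white FM variance (the Kalman filter that supplies the frequency estimates is omitted, and the function name is illustrative).

      import numpy as np

      def btse_offset(clock_offsets, predicted_offsets, white_fm_variances):
          """One basic-time-scale-equation step: the ensemble time offset is the
          weighted average of each clock's deviation from its prediction, with
          weights inversely proportional to the white FM variance of each clock."""
          w = 1.0 / np.asarray(white_fm_variances)
          w /= w.sum()
          dev = np.asarray(clock_offsets) - np.asarray(predicted_offsets)
          return float(np.sum(w * dev))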

  11. Algorithm for genome contig assembly. Final report

    SciTech Connect

    1995-09-01

    An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.

  12. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758

  13. IUS guidance algorithm gamma guide assessment

    NASA Technical Reports Server (NTRS)

    Bray, R. E.; Dauro, V. A.

    1980-01-01

    The Gamma Guidance Algorithm which controls the inertial upper stage is described. The results of an independent assessment of the algorithm's performance in satisfying the NASA missions' targeting objectives are presented. The results of a launch window analysis for a Galileo mission, and suggested improvements are included.

  14. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  15. Optical Sensor Based Corn Algorithm Evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Optical sensor based algorithms for corn fertilization have been developed by researchers in several states. The goal of this international research project was to evaluate these different algorithms and determine their robustness over a large geographic area. Concurrently, the goal of this project was to...

  16. Explaining the Cross-Multiplication Algorithm

    ERIC Educational Resources Information Center

    Handa, Yuichi

    2009-01-01

    Many high-school mathematics teachers have likely been asked by a student, "Why does the cross-multiplication algorithm work?" It is a commonly used algorithm when dealing with proportion problems, conversion of units, or fractional linear equations. For most teachers, the explanation usually involves the idea of finding a common denominator--one…
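
    One compact way to present the common-denominator explanation is the following derivation (assuming b and d are nonzero):

      \frac{a}{b} = \frac{c}{d}
      \;\Longrightarrow\;
      \frac{a}{b}\,bd = \frac{c}{d}\,bd
      \;\Longrightarrow\;
      ad = cb,

    so the two fractions are equal exactly when the cross products ad and bc are equal.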

  17. Global Optimality of the Successive Maxbet Algorithm.

    ERIC Educational Resources Information Center

    Hanafi, Mohamed; ten Berge, Jos M. F.

    2003-01-01

    It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)

  18. Genetic Algorithms with Local Minimum Escaping Technique

    NASA Astrophysics Data System (ADS)

    Tamura, Hiroki; Sakata, Kenichiro; Tang, Zheng; Ishii, Masahiro

    In this paper, we propose a genetic algorithm (GA) with a local minimum escaping technique. The proposed method can escape from a local minimum by correcting parameters when the genetic algorithm falls into one. Simulations are performed on a scheduling problem without buffer capacity using the proposed method, and its validity is shown.

  19. Excursion-Set-Mediated Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.

  20. Evaluation of TCP congestion control algorithms.

    SciTech Connect

    Long, Robert Michael

    2003-12-01

    Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high speed, Wide Area Network links to permit remote access to their Supercomputer systems. The current TCP congestion algorithm does not take full advantage of high delay, large bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to find if an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30ms delay, and two-to-one with a 30ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.

  1. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm. PMID:23935409

  2. Force-Control Algorithm for Surface Sampling

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Quadrelli, Marco B.; Phan, Linh

    2008-01-01

    A G-FCON algorithm is designed for small-body surface sampling. It has a linearization component and a feedback component to enhance performance. The algorithm regulates the contact force between the tip of a robotic arm attached to a spacecraft and a surface during sampling.

  3. A Stemming Algorithm for Latin Text Databases.

    ERIC Educational Resources Information Center

    Schinke, Robyn; And Others

    1996-01-01

    Describes the design of a stemming algorithm for searching Latin text databases. The algorithm uses a longest-match approach with some recoding but differs from most stemmers in its use of two separate suffix dictionaries for processing query and database words that enables users to pursue specific searches for single grammatical forms of words.…

  4. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to reliably achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects from this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  5. A quantum Algorithm for the Moebius Function

    NASA Astrophysics Data System (ADS)

    Love, Peter

    We give an efficient quantum algorithm for the Moebius function from the natural numbers to -1,0,1. The cost of the algorithm is asymptotically quadratic in log n and does not require the computation of the prime factorization of n as an intermediate step.

  6. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  7. A Generalization of Takane's Algorithm for DEDICOM.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1990-01-01

    An algorithm is described for fitting the DEDICOM model (proposed by R. A. Harshman in 1978) for the analysis of asymmetric data matrices. The method modifies a procedure proposed by Y. Takane (1985) to provide guaranteed monotonic convergence. The algorithm is based on a technique known as majorization. (SLD)

  8. Evolutionary development of path planning algorithms

    SciTech Connect

    Hage, M

    1998-09-01

    This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than a specially bred algorithm for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.

  9. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
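
    The variable-separating step can be illustrated with the arithmetic-geometric mean inequality, a standard device in this setting (the paper's full treatment also uses a supporting hyperplane inequality, which is not shown here). For a posynomial term c \prod_j x_j^{a_j} with positive c, a_j, x_j, current iterate x^{(k)}, and s = \sum_j a_j:

      c\prod_j x_j^{a_j}
      \;=\; c\prod_j \bigl(x_j^{(k)}\bigr)^{a_j}\,\prod_j\Bigl(\tfrac{x_j}{x_j^{(k)}}\Bigr)^{a_j}
      \;\le\; c\prod_j \bigl(x_j^{(k)}\bigr)^{a_j}\,\sum_j \frac{a_j}{s}\Bigl(\tfrac{x_j}{x_j^{(k)}}\Bigr)^{s},

    with equality at x = x^{(k)}, so the right-hand side is a majorizer in which the variables are separated.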

  10. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).

  11. Algorithm refinement for the stochastic Burgers' equation

    SciTech Connect

    Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (E-mail: algarcia@algarcia.org)

    2007-04-10

    In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
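
    For reference, one common form of the stochastically forced viscous Burgers' equation alluded to above is written below; the precise form and scaling of the noise used in the paper's algorithm-refinement setting may differ:

      \partial_t u + \partial_x\!\left(\tfrac{1}{2}u^2\right) = \nu\,\partial_x^2 u + \partial_x \xi(x,t),

    where \xi is a Gaussian white-noise flux whose amplitude sets the size of the fluctuations.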

  12. The theory of hybrid stochastic algorithms

    SciTech Connect

    Kennedy, A.D. (Supercomputer Computations Research Inst.)

    1989-11-21

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs.
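
    For completeness, the leapfrog update referred to above, for a Hamiltonian H(q, p) = p^2/2 + S(q) and step size \epsilon, reads:

      p_{1/2} = p - \tfrac{\epsilon}{2}\,\partial_q S(q), \qquad
      q' = q + \epsilon\, p_{1/2}, \qquad
      p' = p_{1/2} - \tfrac{\epsilon}{2}\,\partial_q S(q'),

    which is reversible and area-preserving; these are the properties that make the Hybrid Monte Carlo accept/reject step exact.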

  13. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories to a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.

  14. Self-adaptive parameters in genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA) and this setting remains unchanged during execution. The problem of interest here is the self-adaptive adjustment of a GA's parameters. In this research, we propose an approach in which the control of a genetic algorithm's parameters can be encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.

  15. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545

  16. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  17. Acceleration of iterative image restoration algorithms.

    PubMed

    Biggs, D S; Andrews, M

    1997-03-10

    A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
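
    To make the idea concrete, here is a heavily simplified sketch of vector-extrapolation acceleration wrapped around a Richardson-Lucy update for 2D arrays; the extrapolation factor below is a rough stand-in for the estimate derived in the paper, and the function names are illustrative.

      import numpy as np
      from scipy.signal import fftconvolve

      def rl_step(x, data, psf):
          """One Richardson-Lucy multiplicative update (nonnegative x)."""
          blur = fftconvolve(x, psf, mode="same") + 1e-12
          return x * fftconvolve(data / blur, psf[::-1, ::-1], mode="same")

      def accelerated_rl(data, psf, n_iter=50):
          """Richardson-Lucy with a simple vector-extrapolation prediction step:
          the next iterate is predicted as x + alpha*(x - x_prev), with alpha
          estimated from the correlation of successive update directions."""
          x = np.full_like(data, float(data.mean()))
          x_prev = x.copy()
          g_prev = None
          alpha = 0.0
          for _ in range(n_iter):
              y = np.maximum(x + alpha * (x - x_prev), 0.0)   # extrapolated prediction
              x_next = rl_step(y, data, psf)
              g = (x_next - y).ravel()                        # current update direction
              if g_prev is not None:
                  alpha = float(g @ g_prev) / (float(g_prev @ g_prev) + 1e-12)
                  alpha = min(max(alpha, 0.0), 1.0)           # keep the prediction stable
              x_prev, x, g_prev = x, x_next, g
          return x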

  18. Smooth transitions between bump rendering algorithms

    SciTech Connect

    Becker, B.G.; Max, N.L.

    1993-01-04

    A method is described for switching smoothly between rendering algorithms as required by the amount of visible surface detail. The result will be more realism with less computation for displaying objects whose surface detail can be described by one or more bump maps. The three rendering algorithms considered are the bidirectional reflection distribution function (BRDF), bump mapping, and displacement mapping. The bump mapping has been modified to make it consistent with the other two. For a given viewpoint, one of these algorithms will show a better trade-off between quality, computation time, and aliasing than the other two. Thus, it needs to be determined for any given viewpoint which regions of the object(s) will be rendered with each algorithm. The decision as to which algorithm is appropriate is a function of distance, viewing angle, and the frequency of bumps in the bump map.

  19. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating this process with an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regressions and ARIMA) and a recent forecasting process (including data partition, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm successfully tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users lacking in-depth knowledge of the forecasting process.

  20. Intelligent perturbation algorithms for space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and an example task - communications check.

  1. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique to obtain ionosphere measurements, such as an estimation of virtual height versus frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While echo detection algorithms have been studied for several kinds of target behavior, a survey to identify a suitable algorithm for the ionospheric sounder still has to be carried out. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder, and the target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study have been selected according to typical ionospheric and detection conditions.
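
    The Python sketch below illustrates one common form of adaptive threshold detection (cell averaging over neighbouring range cells with guard cells excluded). It is a generic example run on synthetic data and is not the algorithm actually implemented for the AIS-INGV sounder.

      import numpy as np

      def adaptive_threshold_detect(power, guard=2, train=8, scale=3.0):
          """Flag samples exceeding `scale` times the local noise estimate."""
          hits = []
          for i in range(len(power)):
              lo = max(0, i - guard - train)
              hi = min(len(power), i + guard + train + 1)
              window = np.r_[power[lo:max(0, i - guard)], power[i + guard + 1:hi]]
              if window.size and power[i] > scale * window.mean():
                  hits.append(i)
          return hits

      # Synthetic range profile: exponential noise floor plus two echoes.
      rng = np.random.default_rng(0)
      profile = rng.exponential(1.0, 500)
      profile[[120, 310]] += 25
      print(adaptive_threshold_detect(profile))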

  2. A parallel algorithm for mesh smoothing

    SciTech Connect

    Freitag, L.; Jones, M.; Plassmann, P.

    1999-07-01

    Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, the authors present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. They prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. They extend the PRAM algorithm to distributed memory computers and report results for two-and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. They also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras.

  3. Marshall Rosenbluth and the Metropolis algorithm

    SciTech Connect

    Gubernatis, J.E.

    2005-05-15

    The 1953 publication, 'Equation of State Calculations by Very Fast Computing Machines' by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they became unknown to the large simulation physics community that grew from this publication and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.
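
    For readers unfamiliar with the algorithm the record discusses, here is a minimal random-walk Metropolis sampler in Python; the Gaussian proposal and the toy target density are illustrative choices, not part of the 1953 formulation.

      import numpy as np

      def metropolis(log_prob, x0, n_samples=10000, step=0.5, seed=0):
          """Propose a symmetric move, accept with probability min(1, p'/p)."""
          rng = np.random.default_rng(seed)
          x, lp = float(x0), log_prob(x0)
          samples = np.empty(n_samples)
          for i in range(n_samples):
              x_new = x + rng.normal(scale=step)
              lp_new = log_prob(x_new)
              if np.log(rng.random()) < lp_new - lp:   # acceptance test
                  x, lp = x_new, lp_new
              samples[i] = x
          return samples

      # Example: sample a standard normal and check its moments.
      s = metropolis(lambda x: -0.5 * x * x, x0=3.0)
      print(s.mean(), s.std())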

  4. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  5. Implementation of the phase gradient algorithm

    SciTech Connect

    Wahl, D.E.; Eichel, P.H.; Jakowatz, C.V. Jr.

    1990-01-01

    The recently introduced Phase Gradient Autofocus (PGA) algorithm is a non-parametric autofocus technique which has been shown to be quite effective for phase correction of Synthetic Aperture Radar (SAR) imagery. This paper will show that this powerful algorithm can be executed at near real-time speeds and also be implemented in a relatively small piece of hardware. A brief review of the PGA will be presented along with an overview of some critical implementation considerations. In addition, a demonstration of the PGA algorithm running on a 7 in. × 10 in. printed circuit board containing a TMS320C30 digital signal processing (DSP) chip will be given. With this system, using only the 20 range bins which contain the brightest points in the image, the algorithm can correct a badly degraded 256 × 256 image in as little as 3 seconds. Using all range bins, the algorithm can correct the image in 9 seconds. 4 refs., 2 figs.

  6. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume considerable bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where the requirements of public-key cryptography are prohibitive and it cannot be used. The use of elliptic curves in public-key computations has provided a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied for. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  7. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.

  8. Improving the algorithm of temporal relation propagation

    NASA Astrophysics Data System (ADS)

    Shen, Jifeng; Xu, Dan; Liu, Tongming

    2005-03-01

    In a military Multi Agent System, every agent needs to analyze the temporal relationships among the tasks or combat behaviors, and it is very important to reflect the battlefield situation in time. The temporal relations among agents are usually very complex, and we model them with an interval algebra (IA) network. Therefore an efficient temporal reasoning algorithm is vital in a battle MAS model. The core of temporal reasoning is the path consistency algorithm, so an efficient path consistency algorithm is necessary. In this paper we used the Interval Matrix Calculus (IMC) method to represent the temporal relations, and optimized the path consistency algorithm by improving the efficiency of temporal relation propagation, based on Allen's path consistency algorithm.

  9. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    PubMed Central

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of most important technologies since it plays a critical role in many applications. Motivated by widespread adoption of localization, in this paper, we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes’ mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752

  10. GOES-R Algorithm Working Group (AWG)

    NASA Astrophysics Data System (ADS)

    Daniels, Jaime; Goldberg, Mitch; Wolf, Walter; Zhou, Lihang; Lowe, Kenneth

    2009-08-01

    For the next-generation of GOES-R instruments to meet stated performance requirements, state-of-the-art algorithms will be needed to convert raw instrument data to calibrated radiances and derived geophysical parameters (atmosphere, land, ocean, and space weather). The GOES-R Program Office (GPO) assigned the NOAA/NESDIS Center for Satellite Research and Applications (STAR) the responsibility for technical leadership and management of GOES-R algorithm development and calibration/validation. STAR responded with the creation of the GOES-R Algorithm Working Group (AWG) to manage and coordinate development and calibration/validation activities for GOES-R proxy data and geophysical product algorithms. The AWG consists of 15 application teams that bring expertise in product algorithms spanning atmospheric, land, oceanic, and space weather disciplines. Each AWG team will develop new scientific Level-2 algorithms for GOES-R and will also leverage science developments from other communities (other government agencies, universities and industry), and heritage approaches from current operational GOES and POES product systems. All algorithms will be demonstrated and validated in a scalable operational demonstration environment. All software developed by the AWG will adhere to new standards established within NOAA/NESDIS. The AWG Algorithm Integration Team (AIT) has the responsibility for establishing the system framework, integrating the product software from each team into this framework, enforcing the established software development standards, and preparing system deliveries. The AWG will deliver an Algorithm Theoretical Basis Document (ATBD) for each GOES-R geophysical product as well as Delivered Algorithm Packages (DAPs) to the GPO.

  11. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLSIPLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good or better in prediction ability as the commonly used partial least squares (PLS) method. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.

  12. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  13. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2 lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2 configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present challenges for the 2 lightning jump algorithm, because of the suppressed vertical depth's impact on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2 algorithm, 36% of the misses were associated with these two environments (17 storms).

  14. Concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    In order to overcome the slow convergence rate and large steady-state mean square error of the constant modulus algorithm (CMA), a concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals is proposed, which exploits the fact that high-order QAM constellation points lie on several different moduli. The algorithm uses CMA as the base mode and the multi-modulus algorithm as the second mode, and the two modes operate concurrently. The efficiency of the method is demonstrated by computer simulations in underwater acoustic channels.
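
    A minimal Python sketch of the baseline CMA mode described above: stochastic-gradient adaptation of an FIR equalizer toward a constant output modulus. The concurrent multi-modulus stage and the underwater acoustic channel model are omitted, and all names and parameter values are illustrative assumptions.

      import numpy as np

      def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
          """Constant modulus algorithm: adapt taps w so |w^H u| -> sqrt(R2)."""
          w = np.zeros(n_taps, dtype=complex)
          w[n_taps // 2] = 1.0                      # center-spike initialization
          y_out = np.zeros(len(x) - n_taps + 1, dtype=complex)
          for n in range(len(y_out)):
              u = x[n:n + n_taps][::-1]             # regressor, most recent first
              y = np.vdot(w, u)                     # equalizer output w^H u
              e = y * (np.abs(y) ** 2 - R2)         # CMA error term
              w -= mu * np.conj(e) * u              # stochastic-gradient update
              y_out[n] = y
          return y_out, w

      # Example: unit-modulus 4-QAM symbols through a simple two-tap channel.
      rng = np.random.default_rng(0)
      sym = ((rng.integers(0, 2, 4000) * 2 - 1) + 1j * (rng.integers(0, 2, 4000) * 2 - 1)) / np.sqrt(2)
      rx = np.convolve(sym, [1.0, 0.4 + 0.2j])[:len(sym)]
      y, w = cma_equalize(rx)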

  15. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that HAFSA is a fast and efficient algorithm for winner determination. Compared with the Ant Colony Optimization algorithm, it shows good performance and broad applicability.

  16. ALGORITHM FOR SORTING GROUPED DATA

    NASA Technical Reports Server (NTRS)

    Evans, J. D.

    1994-01-01

    It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986.
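
    The Python sketch below illustrates the core idea of the record: collapse each multi-element data set to a single representative key, sort by that key, and recover the original sets. The positional (radix-like) encoding shown is an assumed simplification of the polynomial representation described, not a port of the original BASIC program.

      def sort_grouped(groups):
          """Sort multi-element records by a single key built from their elements."""
          lo = min(min(g) for g in groups)
          base = max(max(g) for g in groups) - lo + 1   # larger than any element spread
          def key(g):
              k = 0
              for v in g:                               # most significant element first
                  k = k * base + (v - lo)
              return k
          return sorted(groups, key=key)

      readings = [(23, 5, 17), (23, 4, 30), (22, 9, 1), (23, 5, 2)]
      print(sort_grouped(readings))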

  17. Stereotactic Ablative Radiation Therapy for Subcentimeter Lung Tumors: Clinical, Dosimetric, and Image Guidance Considerations

    SciTech Connect

    Louie, Alexander V.; Senan, Suresh; Dahele, Max; Slotman, Ben J.; Verbakel, Wilko F.A.R.

    2014-11-15

    Purpose: Use of stereotactic ablative radiation therapy (SABR) for subcentimeter lung tumors is controversial. We report our outcomes for tumors with diameter ≤1 cm and their visibility on cone beam computed tomography (CBCT) scans and retrospectively evaluate the planned dose using a deterministic dose calculation algorithm (Acuros XB [AXB]). Methods and Materials: We identified subcentimeter tumors from our institutional SABR database. Tumor size was remeasured on an artifact-free phase of the planning 4-dimensional (4D)-CT. Clinical plan doses were generated using either a pencil beam convolution or an anisotropic analytic algorithm (AAA). All AAA plans were recalculated using AXB, and differences among D95 and mean dose for internal target volume (ITV) and planning target volume (PTV) on the average intensity CT dataset, as well as for gross tumor volume (GTV) on the end respiratory phases were reported. For all AAA patients, CBCT scans acquired during each treatment fraction were evaluated for target visibility. Progression-free and overall survival rates were calculated using the Kaplan-Meier method. Results: Thirty-five patients with 37 subcentimeter tumors were eligible for analysis. For the 22 AAA plans recalculated using AXB, Mean D95 ± SD values were 2.2 ± 4.4% (ITV) and 2.5 ± 4.8% (PTV) lower using AXB; whereas mean doses were 2.9 ± 4.9% (ITV) and 3.7 ± 5.1% (PTV) lower. Calculated AXB doses were significantly lower in one patient (difference in mean ITV and PTV doses, as well as in mean ITV and PTV D95 ranged from 22%-24%). However, the end respiratory phase GTV received at least 95% of the prescription dose. Review of 92 CBCT scans from all AAA patients revealed that the tumor was visualized in 82 images, and its position could be inferred in other images. The 2-year local progression-free survival was 100%. Conclusions: Patients with subcentimeter lung tumors are good candidates for SABR, given the dosimetry, ability to localize

  18. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users) including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  19. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded by the camera with respect to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method we obtain an equation with which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (like THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.

  20. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant forms of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, the proposed algorithm is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel. PMID:24212035

  1. Passive MMW algorithm performance characterization using MACET

    NASA Astrophysics Data System (ADS)

    Williams, Bradford D.; Watson, John S.; Amphay, Sengvieng A.

    1997-06-01

    As passive millimeter wave sensor technology matures, algorithms which are tailored to exploit the benefits of this technology are being developed. The expedient development of such algorithms requires an understanding of not only the gross phenomenology, but also specific quirks and limitations inherent in sensors and the data gathering methodology specific to this regime. This level of understanding is approached as the technology matures and increasing amounts of data become available for analysis. The Armament Directorate of Wright Laboratory, WL/MN, has spearheaded the advancement of passive millimeter-wave technology in algorithm development tools and modeling capability as well as sensor development. A passive MMW channel is available within WL/MN's popular multi-channel modeling program Irma, and a sample passive MMW algorithm is incorporated into the Modular Algorithm Concept Evaluation Tool, an algorithm development and evaluation system. The Millimeter Wave Analysis of Passive Signatures system provides excellent data collection capability in the 35, 60, and 95 GHz MMW bands. This paper exploits these assets for the study of the PMMW signature of a High Mobility Multi-Purpose Wheeled Vehicle in the three bands mentioned, and the effect of camouflage upon this signature and autonomous target recognition algorithm performance.

  2. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  3. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
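
    As a simplified illustration of the allocation problem the report addresses, the sketch below solves a bounded least-squares version (minimize the error between the commanded and achieved total force/torque subject to per-actuator limits) with SciPy. The report's two-stage semi-definite programming formulation is not reproduced; the matrix B, the command vector, and the limits are invented for the example.

      import numpy as np
      from scipy.optimize import lsq_linear

      # B maps individual actuator efforts to total force/torque
      # (rows: Fx, Fy, Fz, Tx, Ty, Tz); values are illustrative only.
      rng = np.random.default_rng(1)
      B = rng.normal(size=(6, 10))
      command = np.array([1.0, -0.5, 0.2, 0.05, 0.0, -0.02])

      # Minimize ||B u - command|| subject to actuator limits -1 <= u <= 1.
      res = lsq_linear(B, command, bounds=(-1.0, 1.0))
      print("allocation:", np.round(res.x, 3))
      print("residual  :", np.linalg.norm(B @ res.x - command))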

  4. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to the segmentation problem is a good choice due to its fast computation. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. Then, we apply the new DE to the traditional Otsu's method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm gives more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
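
    A minimal Python sketch of the general approach: a population-based optimizer (SciPy's stock differential evolution, standing in for the authors' improved DE) searching for a threshold that maximizes Otsu's between-class variance on a synthetic histogram. The balance strategy proposed in the paper is not implemented here.

      import numpy as np
      from scipy.optimize import differential_evolution

      def neg_between_class_variance(thresholds, hist, levels):
          """Negative multilevel Otsu objective (DE minimizes)."""
          edges = np.concatenate(([0], np.sort(thresholds), [levels]))
          total = hist.sum()
          mu_total = (np.arange(levels) * hist).sum() / total
          var = 0.0
          for lo, hi in zip(edges[:-1], edges[1:]):
              lo, hi = int(lo), int(hi)
              w = hist[lo:hi].sum() / total
              if w > 0:
                  mu = (np.arange(lo, hi) * hist[lo:hi]).sum() / hist[lo:hi].sum()
                  var += w * (mu - mu_total) ** 2
          return -var

      # Synthetic bimodal gray-level histogram over 256 levels.
      rng = np.random.default_rng(0)
      pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
      hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))

      result = differential_evolution(neg_between_class_variance,
                                      bounds=[(1, 255)], args=(hist, 256), seed=0)
      print("threshold:", result.x)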

  5. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on the ordering procedure based on the specialized variational approach. This variational method was shown to be superior to all other state-of-the art ordering algorithms in terms of faithful total strict ordering but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than other alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only few iteration steps to obtain images, whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors. PMID:25347881

  6. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l(1)-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596

  7. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.

  8. Quantum adiabatic algorithm for factorization and its experimental implementation.

    PubMed

    Peng, Xinhua; Liao, Zeyang; Xu, Nanyang; Qin, Gan; Zhou, Xianyi; Suter, Dieter; Du, Jiangfeng

    2008-11-28

    We propose an adiabatic quantum algorithm capable of factorizing numbers, using fewer qubits than Shor's algorithm. We implement the algorithm in a NMR quantum information processor and experimentally factorize the number 21. In the range that our classical computer could simulate, the quantum adiabatic algorithm works well, providing evidence that the running time of this algorithm scales polynomially with the problem size. PMID:19113467

  9. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  10. Phase unwrapping algorithms in laser propagation simulation

    NASA Astrophysics Data System (ADS)

    Du, Rui; Yang, Lijia

    2013-08-01

    Current simulations of laser propagation in the atmosphere usually need to deal with beams in strong turbulence; part of the information may be lost when the Fourier transform is used to simulate the transmission, which leaves the phase of the beam, stored as a 2-D array, wrapped by 2π. An effective unwrapping algorithm is needed to obtain continuous results and faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to those used in radar or 3-D surface reconstruction, but not the same. In this article, three classic unwrapping algorithms: the block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD), are tried in wave-front reconstruction simulation. Each of these algorithms is tested 100 times under 6 identical conditions, including low (64x64), medium (128x128), and high (256x256) resolution phase arrays, with and without noise. Comparing the results, the conclusions are as follows. The BLS-based algorithm is the fastest, and its result is acceptable in a low-resolution environment without noise. The MCUT is higher in accuracy, though it becomes slower as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory during calculation. Finally, the article presents a new algorithm based on an Activity-On-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds the minimal discontinuity solution. The AOV algorithm is faster than MCUT in dealing with high-resolution phase arrays, with accuracy as good as FMD in the tests performed.
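
    For reference, the one-dimensional analogue of the unwrapping problem can be solved with Itoh's method, sketched below in Python. The 2-D algorithms compared in the record (BLS, MCUT, FMD, and the proposed AOV approach) are substantially more involved and are not reproduced here.

      import numpy as np

      def unwrap_1d(phase):
          """Itoh's method: remove 2*pi jumps between consecutive samples
          (equivalent to numpy.unwrap with default arguments)."""
          out = np.array(phase, dtype=float)
          for i in range(1, len(out)):
              d = out[i] - out[i - 1]
              out[i] -= 2 * np.pi * np.round(d / (2 * np.pi))
          return out

      true_phase = np.linspace(0, 12 * np.pi, 200)        # smooth ramp
      wrapped = np.angle(np.exp(1j * true_phase))         # wrapped to (-pi, pi]
      print(np.allclose(unwrap_1d(wrapped), true_phase))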

  11. A survey of DNA motif finding algorithms

    PubMed Central

    Das, Modan K; Dai, Ho-Kwok

    2007-01-01

    Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of

  12. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  13. Comparative Study of Two Automatic Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Grant, D.; Bethel, J.; Crawford, M.

    2013-10-01

    The Iterative Closest Point (ICP) algorithm is prevalent for the automatic fine registration of overlapping pairs of terrestrial laser scanning (TLS) data. This method along with its vast number of variants, obtains the least squares parameters that are necessary to align the TLS data by minimizing some distance metric between the scans. The ICP algorithm uses a "model-data" concept in which the scans obtain differential treatment in the registration process depending on whether they were assigned to be the "model" or "data". For each of the "data" points, corresponding points from the "model" are sought. Another concept of "symmetric correspondence" was proposed in the Point-to-Plane (P2P) algorithm, where both scans are treated equally in the registration process. The P2P method establishes correspondences on both scans and minimizes the point-to-plane distances between the scans by simultaneously considering the stochastic properties of both scans. This paper studies both the ICP and P2P algorithms in terms of their consistency in registration parameters for pairs of TLS data. The question being investigated in this paper is, should scan A be registered to scan B, will the parameters be the same if scan B were registered to scan A? Experiments were conducted with eight pairs of real TLS data which were registered by the two algorithms in the forward (scan A to scan B) and backward (scan B to scan A) modes and the results were compared. The P2P algorithm was found to be more consistent than the ICP algorithm. The differences in registration accuracy between the forward and backward modes were negligible when using the P2P algorithm (mean difference of 0.03 mm). However, the ICP had a mean difference of 4.26 mm. Each scan was also transformed by the forward and backward parameters of the two algorithms and the misclosure computed. The mean misclosure for the P2P algorithm was 0.80 mm while that for the ICP algorithm was 5.39 mm. The conclusion from this study is
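
    A minimal Python sketch of the point-to-point ICP loop (nearest-neighbour correspondences followed by an SVD-based rigid fit), included only to illustrate the "model-data" registration concept discussed above. The point-to-plane (P2P) method and the stochastic weighting used in the study are not implemented, and the synthetic scans are invented for the example.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_fit_transform(A, B):
          """Least-squares rigid transform (R, t) mapping points A onto B."""
          ca, cb = A.mean(axis=0), B.mean(axis=0)
          U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:          # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cb - R @ ca

      def icp(data, model, n_iter=30):
          """Repeatedly match 'data' to its nearest neighbours in 'model'."""
          tree = cKDTree(model)
          src = data.copy()
          for _ in range(n_iter):
              _, idx = tree.query(src)
              R, t = best_fit_transform(src, model[idx])
              src = src @ R.T + t
          return best_fit_transform(data, src)   # cumulative transform

      rng = np.random.default_rng(0)
      model = rng.normal(size=(500, 3))
      a = np.deg2rad(10)
      R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
      data = (model - 0.1) @ R_true.T            # rotated and shifted "scan"
      R_est, t_est = icp(data, model)
      print(np.round(R_est, 3), np.round(t_est, 3))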

  14. Fast decoding algorithms for coded aperture systems

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

    2014-08-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques.

  15. Algorithms for optimal dyadic decision trees

    SciTech Connect

    Hush, Don; Porter, Reid

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant trees sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  16. Quantum hyperparallel algorithm for matrix multiplication

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-04-01

    Hyperentangled states, entangled states with more than one degree of freedom, are considered as promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N2), which is better than the best known classical algorithm. In our scheme, an N dimensional vector is mapped to the state of a single source, which is separated to N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and “big data” analysis.

  17. Protein Structure Prediction with Evolutionary Algorithms

    SciTech Connect

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.; Smith, J.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluated the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.

  18. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  19. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-08-30

    We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  20. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-02-28

    We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  1. Quantum algorithms for quantum field theories.

    PubMed

    Jordan, Stephen P; Lee, Keith S M; Preskill, John

    2012-06-01

    Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ(4) theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm. PMID:22654052

  2. System engineering approach to GPM retrieval algorithms

    SciTech Connect

    Rose, C. R.; Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the No and Do

  3. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogenous, nonlinear, first-order, ordinary differential equations. In comparison with previous integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling stability and accuracy to be retained when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  4. Quantum hyperparallel algorithm for matrix multiplication.

    PubMed

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-01-01

    Hyperentangled states, entangled states with more than one degree of freedom, are considered as promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N(2)), which is better than the best known classical algorithm. In our scheme, an N dimensional vector is mapped to the state of a single source, which is separated to N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and "big data" analysis. PMID:27125586

  5. An Overview of LISA Data Analysis Algorithms

    NASA Astrophysics Data System (ADS)

    Porter, Edward K.

    2009-07-01

    The development of search algorithms for gravitational wave sources in the LISA data stream is currently a very active area of research. It has become clear that not only does difficulty lie in searching for the individual sources, but in the case of galactic binaries, evaluating the fidelity of resolved sources also turns out to be a major challenge in itself. In this article we review the current status of developed algorithms for galactic binary, non-spinning supermassive black hole binary and extreme mass ratio inspiral sources. While covering the vast majority of algorithms, we will highlight those that represent the state of the art in terms of speed and accuracy.

  6. Algorithms for computing the multivariable stability margin

    NASA Technical Reports Server (NTRS)

    Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.

    1989-01-01

    The stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms, based on nondifferentiable optimization theory, for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.
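
    In the usual formulation (a sketch of the standard definition, which may differ in convention from the authors'), the margin is the size of the smallest structured destabilizing perturbation and is the reciprocal of the structured singular value:

      \[
        k_m(M) \;=\; \min\bigl\{\, \bar{\sigma}(\Delta) \;:\; \Delta \in \boldsymbol{\Delta},\ \det\bigl(I - M\Delta\bigr) = 0 \,\bigr\}
        \;=\; \frac{1}{\mu_{\boldsymbol{\Delta}}(M)},
      \]

    where \(M\) is the nominal transfer matrix seen by the block-structured perturbation set \(\boldsymbol{\Delta}\) and \(\bar{\sigma}\) denotes the largest singular value.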

  7. Some multigrid algorithms for SIMD machines

    SciTech Connect

    Dendy, J.E. Jr.

    1996-12-31

    Previously a semicoarsening multigrid algorithm suitable for use on SIMD architectures was investigated. Through the use of new software tools, the performance of this algorithm has been considerably improved. The method has also been extended to three space dimensions. The method performs well for strongly anisotropic problems and for problems with coefficients jumping by orders of magnitude across internal interfaces. The parallel efficiency of this method is analyzed, and its actual performance on the CM-5 is compared with its performance on the CRAY-YMP. A standard coarsening multigrid algorithm is also considered, and we compare its performance on these two platforms as well.

  8. Reactive power optimization by genetic algorithm

    SciTech Connect

    Iba, Kenji

    1994-05-01

    This paper presents a new approach to optimal reactive power planning based on a genetic algorithm. Many outstanding methods for this problem have been proposed in the past; however, most of these approaches share the common defect of being trapped in a local minimum solution. The integer problem, which requires integer-valued solutions for discrete controllers/banks, also remains a difficult one. The genetic algorithm is a kind of search algorithm based on the mechanics of natural selection and genetics. This algorithm can search for a global solution using multiple paths and can treat integer problems naturally. The proposed method was applied to practical 51-bus and 224-bus systems to show its feasibility and capabilities. Although this method is not as fast as sophisticated traditional methods, the concept is quite promising and useful.
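
    A minimal sketch of this kind of genetic algorithm follows, with a toy objective standing in for the reactive power planning cost; the bus data, penalty terms, and operator settings of the actual study are not reproduced. Each gene is an integer number of switched capacitor-bank steps, so discrete controls are handled natively.

      import random

      N_BANKS, MAX_STEPS, POP, GENS = 6, 5, 40, 100

      def cost(plan):
          # hypothetical placeholder: "losses" plus installation cost
          return sum((s - 2) ** 2 for s in plan) + 0.3 * sum(plan)

      def tournament(pop):
          a, b = random.sample(pop, 2)
          return min(a, b, key=cost)

      def crossover(p1, p2):
          cut = random.randrange(1, N_BANKS)
          return p1[:cut] + p2[cut:]

      def mutate(plan, rate=0.1):
          return [random.randint(0, MAX_STEPS) if random.random() < rate else s
                  for s in plan]

      pop = [[random.randint(0, MAX_STEPS) for _ in range(N_BANKS)] for _ in range(POP)]
      for _ in range(GENS):
          pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
      best = min(pop, key=cost)
      print(best, cost(best))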

  9. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
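
    A brief sketch of the least-squares cubic-spline idea, in the spirit of the explicit variable algorithm, is shown below using SciPy rather than the original code; the interior knots t play the role of the adjustable junction values mentioned above.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      x = np.linspace(0.0, 10.0, 200)
      y = np.sin(x) + 0.1 * np.random.randn(x.size)      # noisy data to be smoothed

      t = np.linspace(1.0, 9.0, 8)                       # interior knots (junction values)
      spline = LSQUnivariateSpline(x, y, t, k=3)         # cubic spline, least-squares fit

      xs = np.linspace(0.0, 10.0, 500)
      smooth = spline(xs)                                # C2-continuous: first and second
      slope = spline.derivative(1)(xs)                   # derivatives are available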

  10. Genetic algorithms at UC Davis/LLNL

    SciTech Connect

    Vemuri, V.R.

    1993-12-31

    A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.

  11. An Augmentation of G-Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Carson, John M. III; Acikmese, Behcet

    2011-01-01

    The original G-Guidance algorithm provided an autonomous guidance and control policy for small-body proximity operations that took into account uncertainty and dynamics disturbances. However, it lacked robustness with regard to object proximity while in autonomous mode. The modified G-Guidance algorithm was augmented with a second operational mode that allows switching into a safety hover mode. This causes the spacecraft to hover in place until a mission-planning algorithm can compute a safe new trajectory; no state or control constraints are violated. When a new, feasible state trajectory is calculated, the spacecraft returns to standard mode and maneuvers toward the target. The main goal of this augmentation is to protect the spacecraft in the event that a landing surface or obstacle is closer or farther than anticipated. The algorithm can be used to mitigate any unexpected trajectory or state changes that occur during standard-mode operations.
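
    A schematic sketch of this two-mode logic is given below; all names, the proximity threshold, and the replanning stub are hypothetical stand-ins, not the actual G-Guidance implementation.

      STANDARD, SAFETY_HOVER = "standard", "safety_hover"

      def hazard_too_close(range_to_surface, expected_range, margin=5.0):
          # trigger when the surface/obstacle is much nearer (or farther) than planned
          return abs(range_to_surface - expected_range) > margin

      def guidance_step(mode, range_to_surface, expected_range, replan):
          """Return (new_mode, command); `replan` returns a trajectory or None."""
          if mode == STANDARD and hazard_too_close(range_to_surface, expected_range):
              return SAFETY_HOVER, "hold position"
          if mode == SAFETY_HOVER:
              trajectory = replan()
              if trajectory is not None:               # feasible trajectory found
                  return STANDARD, f"follow {trajectory}"
              return SAFETY_HOVER, "hold position"
          return STANDARD, "track nominal trajectory"

      # example: surface is 12 m closer than expected, then a later replan succeeds
      mode, cmd = guidance_step(STANDARD, 38.0, 50.0, replan=lambda: None)
      mode, cmd = guidance_step(mode, 38.0, 50.0, replan=lambda: "new safe trajectory")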

  12. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient codes is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if the line intersects that point's pixel). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
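
    A sketch of the distance-to-a-line formulation follows: every pixel tests itself independently (one pixel per processor on a SIMD machine), so the cost does not depend on line length or orientation. NumPy broadcasting stands in for the per-processor evaluation here; this is an illustration, not the paper's code.

      import numpy as np

      def draw_line(width, height, x0, y0, x1, y1, thickness=1.0):
          ys, xs = np.mgrid[0:height, 0:width]        # pixel coordinates, one "processor" each
          dx, dy = x1 - x0, y1 - y0
          length = np.hypot(dx, dy)
          # perpendicular distance from each pixel to the infinite line
          dist = np.abs(dy * (xs - x0) - dx * (ys - y0)) / length
          # projection parameter clips the infinite line to the segment [P0, P1]
          t = ((xs - x0) * dx + (ys - y0) * dy) / (length * length)
          return (dist <= thickness / 2.0) & (t >= 0.0) & (t <= 1.0)

      image = draw_line(64, 64, 5, 5, 60, 40)         # boolean raster of the segment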

  13. Advanced Imaging Algorithms for Radiation Imaging Systems

    SciTech Connect

    Marleau, Peter

    2015-10-01

    The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
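
    For reference, a generic MLEM iteration (not the code described above) is sketched below; the system matrix, image, and measured counts are hypothetical. Each iteration multiplies the current image by the back-projected ratio of measured to predicted counts, which preserves non-negativity.

      import numpy as np

      def mlem(A, y, n_iter=50, eps=1e-12):
          x = np.ones(A.shape[1])                      # uniform initial image
          sensitivity = A.T @ np.ones(A.shape[0])      # column sums of the system matrix
          for _ in range(n_iter):
              predicted = A @ x + eps                  # forward projection
              x *= (A.T @ (y / predicted)) / (sensitivity + eps)
          return x

      rng = np.random.default_rng(0)
      A = rng.random((200, 50))                        # toy detector response matrix
      x_true = rng.random(50)
      y = rng.poisson(A @ x_true * 100) / 100.0        # noisy measurements
      x_hat = mlem(A, y)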

  14. Swarm-based algorithm for phase unwrapping.

    PubMed

    da Silva Maciel, Lucas; Albertazzi, Armando G

    2014-08-20

    A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed based on three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with the results from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better-quality evaluation present great potential for the proposed method. PMID:25321125
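
    A highly simplified sketch of the random-walk unwrapping rule is shown below: a single agent wanders over the wrapped phase map and unwraps each newly visited pixel against an already-unwrapped neighbour. The replication and ambiguity-resolution rules of the full swarm are omitted, and all parameters are illustrative.

      import numpy as np

      def unwrap_step(wrapped, unwrapped, known, p, q):
          """Unwrap pixel p using the already-unwrapped neighbour q."""
          diff = wrapped[p] - wrapped[q]
          diff = (diff + np.pi) % (2.0 * np.pi) - np.pi   # wrap the difference to (-pi, pi]
          unwrapped[p] = unwrapped[q] + diff
          known[p] = True

      def random_walk_unwrap(wrapped, seed=(0, 0), steps=200000, rng=None):
          rng = rng or np.random.default_rng(0)
          unwrapped = np.zeros_like(wrapped)
          known = np.zeros(wrapped.shape, dtype=bool)
          unwrapped[seed], known[seed] = wrapped[seed], True
          pos, moves = seed, [(-1, 0), (1, 0), (0, -1), (0, 1)]
          for _ in range(steps):
              di, dj = moves[rng.integers(4)]
              nxt = (pos[0] + di, pos[1] + dj)
              if 0 <= nxt[0] < wrapped.shape[0] and 0 <= nxt[1] < wrapped.shape[1]:
                  if not known[nxt]:
                      unwrap_step(wrapped, unwrapped, known, nxt, pos)
                  pos = nxt
          return unwrapped, known

      true_phase = np.linspace(0, 8 * np.pi, 64)[None, :] * np.ones((64, 1))
      wrapped = np.angle(np.exp(1j * true_phase))
      recovered, visited = random_walk_unwrap(wrapped)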

  15. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers, is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
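
    A sketch of one such cycle in the same spirit (not the original algorithm) is given below: the p gradient evaluations are independent and could run in parallel, each yields a rank-one correction to the inverse-metric estimate H, and the cycle ends with a single univariate minimization along the Newton-like direction -Hg.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def cycle(f, grad, x, H, p=3, h=1e-3, rng=np.random.default_rng(0)):
          g = grad(x)
          for _ in range(p):                               # independent -> parallelizable
              s = h * rng.standard_normal(x.size)          # probe displacement
              y = grad(x + s) - g                          # observed gradient change
              r = s - H @ y                                # rank-one (SR1-type) residual
              denom = r @ y
              if abs(denom) > 1e-12:
                  H = H + np.outer(r, r) / denom           # rank-one metric correction
          d = -H @ g                                       # Newton-like direction
          t = minimize_scalar(lambda a: f(x + a * d)).x    # single univariate minimization
          return x + t * d, H

      f = lambda x: 0.5 * x @ np.diag([1.0, 10.0]) @ x
      grad = lambda x: np.diag([1.0, 10.0]) @ x
      x, H = np.array([3.0, -2.0]), np.eye(2)
      for _ in range(5):
          x, H = cycle(f, grad, x, H)
      print(x)                                             # approaches the minimizer at the origin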

  16. Fluid-structure-coupling algorithm. [BWR

    SciTech Connect

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by calculating solved problems from the literature and by comparison with air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.

  17. Geometric direct search algorithms for image registration.

    PubMed

    Lee, Seok; Choi, Minseok; Kim, Hyungmin; Park, Frank Chongwoo

    2007-09-01

    A widely used approach to image registration involves finding the general linear transformation that maximizes the mutual information between two images, with the transformation being rigid-body [i.e., belonging to SE(3)] or volume-preserving [i.e., belonging to SL(3)]. In this paper, we present coordinate-invariant, geometric versions of the Nelder-Mead optimization algorithm on the groups SL(3), SE(3), and their various subgroups, that are applicable to a wide class of image registration problems. Because the algorithms respect the geometric structure of the underlying groups, they are numerically more stable, and exhibit better convergence properties than existing local coordinate-based algorithms. Experimental results demonstrate the improved convergence properties of our geometric algorithms. PMID:17784595
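
    The key coordinate-invariant operation of such a geometric simplex step can be sketched as follows: the simplex vertices are group elements (here 4x4 rigid-body transforms in SE(3)), the centroid is formed in the Lie algebra via matrix logarithms, and reflection acts through the exponential map so the result stays on the group. This is an illustration under those assumptions, not the published code.

      import numpy as np
      from scipy.linalg import expm, logm

      def centroid(transforms):
          """First-order Lie-algebra mean of SE(3) matrices (valid for a tight simplex)."""
          logs = [np.real(logm(T)) for T in transforms]
          return expm(sum(logs) / len(logs))

      def reflect(worst, cent, alpha=1.0):
          """Reflect `worst` through `cent` by negating the relative motion."""
          rel = np.real(logm(np.linalg.inv(cent) @ worst))   # tangent vector at the centroid
          return cent @ expm(-alpha * rel)

      def se3(rx, ry, rz, t):
          """Helper: build an SE(3) element from a small rotation vector and translation."""
          w = np.array([[0, -rz, ry], [rz, 0, -rx], [-ry, rx, 0]], dtype=float)
          T = np.eye(4)
          T[:3, :3] = expm(w)
          T[:3, 3] = t
          return T

      verts = [se3(0.1, 0, 0, [0, 0, 0]), se3(0, 0.1, 0, [1, 0, 0]),
               se3(0, 0, 0.1, [0, 1, 0]), se3(0.2, 0.1, 0, [0, 0, 1])]
      new_vertex = reflect(verts[-1], centroid(verts[:-1]))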

  18. Implementing Shor's algorithm on Josephson charge qubits

    SciTech Connect

    Vartiainen, Juha J.; Salomaa, Martti M.; Niskanen, Antti O.; Nakahara, Mikio

    2004-07-01

    We investigate the physical implementation of Shor's factorization algorithm on a Josephson charge qubit register. While we pursue a universal method to factor a composite integer of any size, the scheme is demonstrated for the number 21. We consider both the physical and algorithmic requirements for an optimal implementation when only a small number of qubits are available. These aspects of quantum computation are usually the topics of separate research communities; we present a unifying discussion of both of these fundamental features bridging Shor's algorithm to its physical realization using Josephson junction qubits. In order to meet the stringent requirements set by a short decoherence time, we accelerate the algorithm by decomposing the quantum circuit into tailored two- and three-qubit gates and we find their physical realizations through numerical optimization.
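
    The qubit register performs the order-finding step; the classical pre- and post-processing that turns a period into factors is simple and is sketched below for N = 21. This is textbook Shor post-processing, not the Josephson-specific circuitry, and the quantum subroutine is replaced by a classical stand-in.

      from math import gcd

      def classical_order(a, N):
          # stand-in for quantum order finding: smallest r with a**r = 1 (mod N)
          r, x = 1, a % N
          while x != 1:
              x = (x * a) % N
              r += 1
          return r

      def shor_factor(N, a):
          if gcd(a, N) != 1:
              return gcd(a, N), N // gcd(a, N)      # lucky guess already shares a factor
          r = classical_order(a, N)
          if r % 2 == 1:
              return None                           # odd order: pick another a
          y = pow(a, r // 2, N)
          if y == N - 1:
              return None                           # trivial square root: pick another a
          return gcd(y - 1, N), gcd(y + 1, N)

      print(shor_factor(21, 2))                     # order of 2 mod 21 is 6 -> factors 7 and 3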

  19. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
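
    A minimal simulated-annealing sketch for a toy observation-scheduling problem follows (a hypothetical conflict model, not the NASA test suite): each observation is assigned a time slot, and moves that worsen the conflict count are still accepted with a temperature-dependent probability, which is what lets the search escape local minima.

      import math, random

      N_OBS, N_SLOTS = 30, 20
      conflicts = {(i, j) for i in range(N_OBS) for j in range(i + 1, N_OBS)
                   if random.random() < 0.1}        # pairs that cannot share a slot

      def cost(schedule):
          return sum(1 for i, j in conflicts if schedule[i] == schedule[j])

      schedule = [random.randrange(N_SLOTS) for _ in range(N_OBS)]
      current_cost, T = cost(schedule), 5.0
      while T > 0.01:
          i = random.randrange(N_OBS)
          old = schedule[i]
          schedule[i] = random.randrange(N_SLOTS)
          new_cost = cost(schedule)
          if new_cost > current_cost and random.random() > math.exp(-(new_cost - current_cost) / T):
              schedule[i] = old                     # reject the worsening move
          else:
              current_cost = new_cost               # accept (improving or Metropolis)
          T *= 0.995                                # geometric cooling schedule
      print(current_cost)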

  20. Non-Manhattan layout extraction algorithm

    NASA Astrophysics Data System (ADS)

    Satkhozhina, Aziza; Ahmadullin, Ildus; Allebach, Jan P.; Lin, Qian; Liu, Jerry; Tretter, Daniel; O'Brien-Strain, Eamonn; Hunter, Andrew

    2013-03-01

    Automated publishing requires large databases containing document page layout templates. The number of layout templates that need to be created and stored grows exponentially with the complexity of the document layouts. A better approach for automated publishing is to reuse layout templates of existing documents for the generation of new documents. In this paper, we present an algorithm for template extraction from a document page image. We use the cost-optimized segmentation algorithm (COS) to segment the image, and Voronoi decomposition to cluster the text regions. Then, we create a block image where each block represents a homogeneous region of the document page. We construct a geometrical tree that describes the hierarchical structure of the document page. We also implement a font recognition algorithm to analyze the font of each text region. We present a detailed description of the algorithm and our preliminary results.