NASA Astrophysics Data System (ADS)
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca
2011-03-01
A new algorithm, Acuros® XB Advanced Dose Calculation, has been introduced by Varian Medical Systems in the Eclipse planning system for photon dose calculation in external radiotherapy. Acuros XB is based on the solution of the linear Boltzmann transport equation (LBTE). The LBTE describes the macroscopic behaviour of radiation particles as they travel through and interact with matter. The implementation of Acuros XB in Eclipse had not previously been assessed; it was therefore necessary to perform pre-clinical validation tests to determine its accuracy. This paper summarizes the results of comparisons of Acuros XB calculations against measurements and against calculations performed with a previously validated dose calculation algorithm, the Anisotropic Analytical Algorithm (AAA). The tasks addressed in this paper are limited to the fundamental characterization of Acuros XB in water for simple geometries. Validation was carried out for four different beams: 6 and 15 MV beams from a Varian Clinac 2100 iX, and 6 and 10 MV 'flattening filter free' (FFF) beams from a TrueBeam linear accelerator. The TrueBeam FFF beams have recently been introduced into clinical practice on general-purpose linear accelerators and have not been reported on previously. Results indicate that Acuros XB accurately reproduces measured and calculated (with AAA) data, with only small deviations observed for all the investigated quantities. In general, the overall accuracy of Acuros XB in simple geometries is within 1% for open beams and within 2% for mechanical wedges. The basic validation of the Acuros XB algorithm was therefore considered satisfactory for both conventional photon beams and the FFF beams of new-generation linacs such as the Varian TrueBeam.
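Since several of the abstracts in this listing refer to the LBTE without stating it, a minimal sketch of the steady-state photon transport equation that grid-based solvers such as Acuros XB discretize may be useful (standard textbook form; the notation is generic and not transcribed from any of these papers):

```latex
% Steady-state linear Boltzmann transport equation for the angular fluence \Psi
\hat{\Omega} \cdot \nabla \Psi(\vec{r}, E, \hat{\Omega})
  + \sigma_t(\vec{r}, E)\, \Psi(\vec{r}, E, \hat{\Omega})
  = \int_0^{\infty} \! \int_{4\pi}
      \sigma_s(\vec{r}, E' \!\to\! E, \hat{\Omega}' \!\cdot\! \hat{\Omega})\,
      \Psi(\vec{r}, E', \hat{\Omega}')\, \mathrm{d}\Omega'\, \mathrm{d}E'
  + q(\vec{r}, E, \hat{\Omega})
```

Here σ_t is the macroscopic total cross section, σ_s the differential scattering cross section, and q the source term. Deterministic solvers discretize space, energy (multigroup), and angle (discrete ordinates) rather than sampling particle histories as Monte Carlo does, which is why their error behaves as systematic discretization error rather than statistical noise.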
A phantom study on the behavior of Acuros XB algorithm in flattening filter free photon beams
Muralidhar, K. R.; Pangam, Suresh; Srinivas, P.; Athar Ali, Mirza; Priya, V. Sujana; Komanduri, Krishna
2015-01-01
To study the behavior of the Acuros XB algorithm for flattening filter free (FFF) photon beams in comparison with the anisotropic analytical algorithm (AAA) when applied to homogeneous and heterogeneous phantoms in conventional and RapidArc techniques. Acuros XB (Eclipse version 10.0, Varian Medical Systems, CA, USA) and AAA were used to calculate dose distributions for both 6X FFF and 10X FFF energies. RapidArc plans were created on the Catphan 504 phantom, and conventional plans on a 30 × 30 × 30 cm³ virtual homogeneous water phantom, a virtual heterogeneous phantom with various inserts, and a solid water phantom with an air cavity. Doses at inserts with different densities were measured with both the AAA and Acuros algorithms. The maximum percentage variation in dose was observed in the air insert (−944 HU) and the minimum in the acrylic insert (85 HU) for both 6X FFF and 10X FFF photons. Less than 1% variation was observed between −149 HU and 282 HU for both energies. At −40 HU and 765 HU, Acuros behaved quite differently with 10X FFF. The maximum percentage variation in dose was observed at lower HU values and the minimum variation at higher HU values for both FFF energies. The global maximum dose was observed at greater depths with Acuros than with AAA for both energies. An increase in dose was observed with the Acuros algorithm at almost all densities, with a decrease at a few densities ranging from 282 to 643 HU. Field size, depth, beam energy, and material density influenced the dose difference between the two algorithms. PMID:26500400
NASA Astrophysics Data System (ADS)
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca
2011-05-01
This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was
Khan, Rao F.; Villarreal-Barajas, Eduardo; Lau, Harold; Liu, Hong-Wei
2014-04-01
Stereotactic body radiotherapy (SBRT) is a curative regimen that uses hypofractionated radiation-absorbed dose to achieve a high degree of local control in early stage non-small cell lung cancer (NSCLC). In the presence of heterogeneities, the dose calculation for the lungs becomes challenging. We have evaluated the dosimetric effect of the recently introduced advanced dose-calculation algorithm, Acuros XB (AXB), for SBRT of NSCLC. A total of 97 patients with early-stage lung cancer who underwent SBRT at our cancer center during the last 4 years were included. Initial clinical plans were created in Aria Eclipse version 8.9 or prior, using 6 to 10 fields with 6-MV beams, and dose was calculated using the anisotropic analytic algorithm (AAA) as implemented in the Eclipse treatment planning system. The clinical plans were recalculated in Aria Eclipse 11.0.21 using both the AAA and AXB algorithms. Both sets of plans were normalized to the same prescription point at the center of mass of the target. A secondary monitor unit (MU) calculation was performed using the commercial program RadCalc for all of the fields. For planning target volumes ranging from 19 to 375 cm³, a comparison of MUs was performed for both sets of algorithms on a field and plan basis. In total, the variation of MUs for 677 treatment fields was investigated in terms of the equivalent depth and the equivalent square of the field. Overall, the MUs required by AXB to deliver the prescribed dose were on average 2% higher than those of AAA. Using a 2-tailed paired t-test, the MUs from the 2 algorithms were found to be significantly different (p < 0.001). The secondary independent MU calculator RadCalc underestimates the required MUs (on average by 4% to 5%) in the lung relative to either of the 2 dose algorithms.
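The per-field MU comparison above reduces to a two-tailed paired t-test on the AXB−AAA differences. A small pure-Python sketch (the MU values below are invented for illustration; the study itself analyzed 677 fields):

```python
import math

def paired_t(mu_axb, mu_aaa):
    """Paired t statistic and degrees of freedom for per-field
    monitor-unit (MU) differences between two algorithms."""
    d = [a - b for a, b in zip(mu_axb, mu_aaa)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical per-field MUs with AXB ~2% above AAA, mirroring the
# reported average difference (not data from the paper)
mu_aaa = [100.0, 120.0, 95.0, 110.0, 130.0]
mu_axb = [102.1, 122.3, 96.8, 112.4, 132.5]
t, df = paired_t(mu_axb, mu_aaa)
```

With a consistent ~2% shift across fields, the t statistic is large even for five fields, which is why 677 fields yield p < 0.001.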
Han, Tao; Followill, David; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mikell, Justin; Mourtada, Firas
2013-05-15
Purpose: The novel deterministic radiation transport algorithm, Acuros XB (AXB), has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact of differences between AXB and other currently used algorithms still needs to be elucidated. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. TLD measurements were within 0.4%-4.4% of AXB doses (both Dm,m and Dw,m) and within 2.5%-6.4% of AAA doses. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB Dm,m, and AXB Dw,m, respectively. The differences between AXB and AAA in dose-volume histogram mean doses were within 2% in the planning target volume, lung, heart, and within 5% in the spinal cord
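The gamma analyses cited throughout these abstracts (e.g. ±3%/3 mm) combine a dose-difference criterion with a distance-to-agreement criterion. A simplified 1D global-gamma sketch (the profile values are hypothetical, not taken from any study here):

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Global 1D gamma index per reference point.

    dd: dose criterion as a fraction of the reference maximum (3%),
    dta: distance-to-agreement criterion in mm. Simplified sketch;
    clinical tools interpolate the evaluated distribution finely.
    """
    norm = max(ref_dose)  # global normalization dose
    return [
        min(
            math.sqrt(((ep - rp) / dta) ** 2
                      + ((ed - rd) / (dd * norm)) ** 2)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        for rp, rd in zip(ref_pos, ref_dose)
    ]

# Hypothetical measured vs calculated profile (positions in mm)
pos = [0.0, 1.0, 2.0, 3.0]
measured = [100.0, 98.0, 90.0, 70.0]
calculated = [101.0, 97.5, 91.0, 71.0]
gammas = gamma_1d(pos, measured, pos, calculated)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

A point "passes" when its gamma is at most 1, i.e. when some nearby evaluated point lies within the combined dose/distance ellipsoid; the reported percentages are the fraction of passing points.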
Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K
2014-06-01
Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparison with Monte Carlo (MC) calculations. The comparisons were performed on dose distributions for a virtual inhomogeneity phantom and for intensity-modulated radiotherapy (IMRT) in head and neck. Methods: Recently, AXB, based on the linear Boltzmann transport equation, has been installed in the Eclipse treatment planning system (Varian Medical Systems, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction for the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for head and neck were compared between the AXB and AAA algorithms and MC by means of dose volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: For dose distributions in the virtual inhomogeneity phantom, AXB was in good agreement with MC, except for the dose in the air region. The dose in the air region decreased in the order of MC
SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm
Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M
2014-06-01
Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB dose calculation algorithm, and subsequently to evaluate its clinical impact by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes on a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time was reduced by at least 50% with Acuros XB compared with AAA for single fields and VMAT plans. When used for open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also showed improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.
Kan, Monica W.K.; Leung, Lucullus H.T.; Yu, Peter K.N.
2013-01-01
Purpose: To assess the dosimetric implications for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy with RapidArc (RA) of nasopharyngeal carcinomas (NPC) of using the Acuros XB (AXB) algorithm versus the anisotropic analytical algorithm (AAA). Methods and Materials: Nine-field sliding window IMRT and triple-arc RA plans produced for 12 patients with NPC using AAA were recalculated using AXB. The dose distributions to multiple planning target volumes (PTVs) with different prescribed doses and to critical organs were compared. The PTVs were separated into components in bone, air, and tissue. The change in dose calculated by AXB due to air and bone, and the variation of these dose changes with the number of fields, were also studied using simple geometric phantoms. Results: Using AXB instead of AAA, the averaged mean dose to PTV70 (70 Gy was prescribed to PTV70) was found to be 0.9% and 1.2% lower for IMRT and RA, respectively. It was approximately 1% lower in tissue, 2% lower in bone, and 1% higher in air. The averaged minimum dose to PTV70 in bone was approximately 4% lower for both IMRT and RA, whereas it was approximately 1.5% lower for PTV70 in tissue. The decrease in target doses estimated by AXB was mostly attributable to the presence of bone, less to tissue, and not at all to air. A similar trend was observed for PTV60 (60 Gy was prescribed to PTV60). The doses to most serial organs were found to be 1% to 3% lower, and to other organs 4% to 10% lower, for both techniques. Conclusions: The use of the AXB algorithm is highly recommended for IMRT and RapidArc planning for NPC cases.
Han, Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca
2012-04-15
Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H and N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H and N phantom in the Eclipse treatment planning system (version 10.0) by using RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and the anisotropic analytical algorithm (AAA) 10.0.24. Two dose reporting modes of AXB were recorded: dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m). Each treatment plan was delivered to the RPC phantom three times for reproducibility by using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic® EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results for AXB Dm,m (0.1% to 3.6%) were slightly better than those for AAA (0.2% to 4.6%) or AXB Dw,m (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB Dm,m met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4 to 6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H and N phantom. Compared with AAA
Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D
2014-06-01
Purpose: To evaluate the performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: a Boltzmann transport equation based algorithm (Acuros XB, AXB); a convolution-based algorithm, the Anisotropic Analytic Algorithm (AAA); and a Monte Carlo based algorithm (XVMC). Methods: A total of 10 patients with early stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC-based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) were compared between the algorithms. Results: The average PTV volume was 19.6 mL (range, 4.2–47.2 mL). The volume of PTV covered by the prescribed dose (PTV-V100) was 93.97 ± 2.00%, 95.07 ± 2.07%, and 95.10 ± 2.97% for the XVMC, AXB, and AAA algorithms, respectively. There was no significant difference in high dose conformity index; however, XVMC predicted slightly higher values (p = 0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volume of total lungs receiving dose >20 Gy (Lung V20Gy) was 4.03 ± 2.26%, 3.86 ± 2.22%, and 3.85 ± 2.21% for the XVMC, AXB, and AAA algorithms. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage than the AXB and XVMC algorithms in two cases. The dose difference was found to be primarily located in the periphery region of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery. XVMC
Zifodya, Jackson M; Challens, Cameron H C; Hsieh, Wen-Long
2016-06-01
When implementing Acuros XB (AXB) as a substitute for the anisotropic analytic algorithm (AAA) in the Eclipse Treatment Planning System, one is faced with the dilemma of reporting either dose to medium (AXB-Dm) or dose to water (AXB-Dw). To assist with the decision on selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast, and lung is presented. Ten patients, previously treated using AAA plans, were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MU) as well as non-fixed MUs. Dose volume histograms (DVH) of targets and organs at risk (OAR) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated dose to the target volume and OARs by less than 2.0%. Comparisons between AXB-Dw and AXB-Dm, for all sites, also showed overall dose differences to be small (<1.5%). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6% were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135% of the prescription dose) for target volumes containing high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and lung sites. MU comparison showed insignificant differences between AXB-Dw and AAA and between AXB-Dw and AXB-Dm. However, AXB-Dm MUs relative to AAA showed an average difference of about 1.3%, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, when
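The homogeneity index referenced against ICRU-83 is commonly defined as HI = (D2% − D98%)/D50%, where Dx% is the dose covering the hottest x% of the target volume. A rough sketch using a naive percentile over per-voxel doses (the dose values are invented for illustration; treatment planning systems compute these metrics from the interpolated cumulative DVH):

```python
def homogeneity_index(voxel_doses):
    """ICRU-83 homogeneity index HI = (D2% - D98%) / D50%.

    voxel_doses: per-voxel target doses (Gy). Naive percentile
    sketch; clinical systems interpolate the full DVH instead.
    """
    d = sorted(voxel_doses, reverse=True)  # hottest voxel first
    n = len(d)

    def d_at(pct):
        # Dose received by at least pct% of the volume
        return d[min(n - 1, int(round(pct / 100.0 * n)))]

    return (d_at(2) - d_at(98)) / d_at(50)

# Invented, fairly homogeneous target around a 70 Gy prescription
doses = [68.0 + 0.04 * i for i in range(100)]  # 68.00 .. 71.96 Gy
hi = homogeneity_index(doses)
```

Smaller HI means a more homogeneous target dose; a perfectly uniform target would give HI = 0.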
Dosimetric comparison of Acuros XB, AAA, and XVMC in stereotactic body radiotherapy for lung cancer
Tsuruta, Yusuke; Nakata, Manabu; Higashimura, Kyoji; Nakamura, Mitsuhiro; Matsuo, Yukinori; Monzen, Hajime; Mizowaki, Takashi; Hiraoka, Masahiro
2014-08-15
Purpose: To compare the dosimetric performance of Acuros XB (AXB), the anisotropic analytical algorithm (AAA), and x-ray voxel Monte Carlo (XVMC) in heterogeneous phantoms and lung stereotactic body radiotherapy (SBRT) plans. Methods: Water- and lung-equivalent phantoms were combined to evaluate the percentage depth dose and dose profile. The Novalis radiation treatment machine (BrainLab AG, Feldkirchen, Germany) with an x-ray beam energy of 6 MV was used to calculate the doses in the composite phantom at a source-to-surface distance of 100 cm with a gantry angle of 0°. Subsequently, the clinical lung SBRT plans for 26 consecutive patients were transferred from iPlan (ver. 4.1; BrainLab AG) to the Eclipse treatment planning system (ver. 11.0.3; Varian Medical Systems, Palo Alto, CA). The doses were then recalculated with AXB and AAA while maintaining the XVMC-calculated monitor units and beam arrangement. The dose-volumetric data obtained using the three different dose calculation algorithms were then compared. Results: The results from AXB and XVMC agreed with measurements within ±3.0% for the lung-equivalent phantom with a 6 × 6 cm² field size, whereas AAA values were higher than measurements in the heterogeneous zone and near the boundary, with the greatest difference being 4.1%. AXB and XVMC agreed well with measurements in terms of the profile shape at the boundary of the heterogeneous zone. For the lung SBRT plans, AXB yielded lower values than XVMC for the maximum doses of the ITV and PTV; however, the differences were within ±3.0%. In addition to the dose-volumetric data, the dose distribution analysis showed that AXB yielded dose distribution calculations closer to those of XVMC than did AAA. The mean ± standard deviation of the computation time was 221.6 ± 53.1 s (range, 124–358 s), 66.1 ± 16.0 s (range, 42–94 s), and 6.7 ± 1.1 s (range, 5–9 s) for XVMC, AXB, and AAA, respectively. Conclusions: In the
Lopez, P; Tambasco, M; LaFontaine, R; Burns, L
2014-06-01
Purpose: To compare the dosimetric accuracy of the Eclipse 11.0 Acuros XB and Anisotropic Analytical Algorithm (AAA), the Pinnacle3 9.2 Collapsed Cone Convolution, and the iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms, using measurement as the gold standard. Methods: Ion chamber and diode measurements were taken for 6, 10, and 18 MV beams in a phantom made up of slabs with densities corresponding to solid water, lung, and bone. The phantom was set up at a source-to-surface distance of 100 cm, and the field sizes were 3.0 × 3.0, 5.0 × 5.0, and 10.0 × 10.0 cm². Data from the planning systems were computed along the central axis of the beam. The measurements were taken using a pinpoint chamber and an edge diode for interface regions. Results: The best agreement between the algorithms and our measurements occurs away from the slab interfaces. For the 6 MV beam, the iPlan 4.1 MC software performs best, with a 1.7% absolute average percent difference from measurement. For the 10 MV beam, iPlan 4.1 PB performs best, with a 2.7% absolute average percent difference from measurement. For the 18 MV beam, Acuros performs best, with a 2.0% absolute average percent difference from measurement. Notably, the steepest drop in dose occurred at the lung-solid water interface for the 18 MV, 3.0 × 3.0 cm² field size setup. In this situation, Acuros and AAA performed best, with an average percent difference within −1.1% of measurement, followed by iPlan 4.1 MC, which was within 4.9%. Conclusion: This study shows that all of the algorithms perform reasonably well in computing dose in a heterogeneous slab phantom. Moreover, Acuros and AAA perform particularly well at lung-solid water interfaces for higher energy beams and small field sizes.
Kan, Monica W. K.; Leung, Lucullus H. T.; So, Ronald W. K.; Yu, Peter K. N.
2013-03-15
Purpose: To compare the doses calculated by the Acuros XB (AXB) algorithm and the analytical anisotropic algorithm (AAA) with experimentally measured data adjacent to and within heterogeneous media, using intensity modulated radiation therapy (IMRT) and RapidArc® (RA) volumetric arc therapy plans for nasopharyngeal carcinoma (NPC). Methods: Two-dimensional dose distributions immediately adjacent to both air and bone inserts of a rectangular tissue-equivalent phantom irradiated using IMRT and RA plans for NPC cases were measured with GafChromic® EBT3 films. Doses near and within the nasopharyngeal (NP) region of an anthropomorphic phantom containing heterogeneous media were also measured with thermoluminescent dosimeters (TLD) and EBT3 films. The measured data were then compared with the data calculated by AAA and AXB. For AXB, dose calculations were performed using both the dose-to-medium (AXB Dm) and dose-to-water (AXB Dw) options. Furthermore, target dose differences between AAA and AXB were analyzed for the corresponding real patients. The comparison of real patient plans was performed by stratifying the targets into components of different densities, including tissue, bone, and air. Results: For the verification of planar dose distributions adjacent to air and bone using the rectangular phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 98.7%, 99.5%, and 97.7% on the axial plane for AAA, AXB Dm, and AXB Dw, respectively, averaged over all IMRT and RA plans, while they were 97.6%, 98.2%, and 97.7%, respectively, on the coronal plane. For the verification of planar dose distributions within the NP region of the anthropomorphic phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 95.1%, 91.3%, and 99.0% for AAA, AXB Dm, and AXB Dw, respectively, averaged over all IMRT and RA plans. Within the NP region where
Soh, R; Lee, J; Harianto, F
2014-06-01
Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for stereotactic body radiation therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: −743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with a field size of 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSXYZnrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material compositions for comparison with the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. The correction factors obtained by EGSnrc are expected to be more accurate, as EGSnrc is able to simulate the actual phantom material compositions. AXB has a limited material library and therefore only approximates the compositions of the TLD, composite cork, and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: The correction factors obtained by EGSnrc are expected to be more accurate. Studies will be done to investigate the correction factors at higher energies, where the perturbation may be more pronounced.
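The correction factor defined in this abstract, the ratio of unperturbed to perturbed dose, is simple arithmetic; the work lies in the transport calculations that produce the two dose values. A sketch with hypothetical doses chosen to reproduce the reported k ≈ 0.9 (values are illustrative, not from the study):

```python
def tld_correction_factor(dose_unperturbed, dose_perturbed):
    """Detector correction factor k = D_unperturbed / D_perturbed,
    the ratio of unperturbed to perturbed dose as defined above."""
    return dose_unperturbed / dose_perturbed

# Hypothetical doses (Gy) scored in the lung medium without and with
# the TLD volume present; the denser TLD absorbs more dose than the
# low-density lung it displaces, so k falls below 1
k = tld_correction_factor(1.80, 2.00)
```

Multiplying the TLD reading by k then recovers the dose the undisturbed lung medium would have received.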
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. To further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in the bone and lung regions. The average differences in depth dose profiles between MC and AXB, AAA, or CCC were within 1.1%, 4.4%, and 2.2%, respectively, for all fields and energies. More specifically, the differences in the bone region were up to 1.1%, 6.4%, and 1.6%, and in the lung region up to 0.9%, 11.6%, and 4.5%, for AXB, AAA, and CCC, respectively. AXB was also found to give better dose predictions than AAA and CCC at tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes at 6 MV and in the lung region for most field sizes at both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm² fields (only 26% passed) and in the bone region for 5 × 5 and 10
During the 1960s, XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 wa...
Going the distance: validation of Acuros and AAA at an extended SSD of 400 cm.
Lamichhane, Narottam; Patel, Vivek N; Studenski, Matthew T
2016-01-01
Accurate dose calculation and treatment delivery is essential for total body irradiation (TBI). In an effort to verify the accuracy of TBI dose calculation at our institution, we evaluated both the Varian Eclipse AAA and Acuros algorithms for predicting dose distributions at an extended source-to-surface distance (SSD) of 400 cm. Measurements were compared to calculated values for a 6 MV beam in physical and virtual phantoms at 400 cm SSD using open beams for both 5 × 5 and 40 × 40 cm² field sizes. Inline and crossline profiles were acquired at equivalent depths of 5 cm, 10 cm, and 20 cm. Depth-dose curves were acquired using EBT2 film and an ion chamber for both field sizes. Finally, a RANDO phantom was used to simulate an actual TBI treatment. At this extended SSD, care must be taken when using the planning system: there is good relative agreement between measured and calculated profiles for both algorithms, but there are deviations in absolute dose. Acuros has better agreement than AAA in the penumbra region. PMID:27074473
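A quick sanity check on why a 400 cm SSD is demanding for absolute dose: by the inverse-square law alone (ignoring changes in scatter and attenuation), the fluence at 400 cm is one-sixteenth of that at the standard 100 cm SSD, so small modeling errors are magnified relative to the delivered output. A one-function sketch:

```python
def inverse_square_factor(ssd_ref_cm=100.0, ssd_cm=400.0):
    """Relative fluence change from the inverse-square law alone;
    a first-order estimate that ignores scatter and attenuation."""
    return (ssd_ref_cm / ssd_cm) ** 2

factor = inverse_square_factor()  # 0.0625, i.e. 1/16 of the 100 cm output
```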
Manning, Siobhan; Nyathi, Thulani
2014-09-01
The aim of this study was to evaluate the accuracy of the new Acuros™ BV algorithm using well characterized LiF:Mg,Ti TLD 100 in heterogeneous phantoms. TLDs were calibrated using a 192Ir source and the AAPM TG-43 calculated dose. The Tölli and Johansson Large Cavity principle and Modified Bragg-Gray principle methods confirm the dose calculated by TG-43 at a distance of 5 cm from the source to within 4 %. These calibrated TLDs were used to measure the dose in heterogeneous phantoms containing air, stainless steel, bone and titanium. The TLD results were compared with the AAPM TG-43 calculated dose and the Acuros calculated dose. Previous studies by other authors have shown a change in TLD response with depth when irradiated with a 192Ir source. This TLD depth dependence was assessed by performing measurements at different depths in a water phantom with a 192Ir source. The variation in the TLD response with depth in a water phantom was not found to be statistically significant for the distances investigated. The TLDs agreed with Acuros™ BV within 1.4 % in the air phantom, 3.2 % in the stainless steel phantom, 3 % in the bone phantom and 5.1 % in the titanium phantom. The TLDs showed a larger discrepancy when compared to TG-43, with a maximum deviation of 9.3 % in the air phantom, -11.1 % in the stainless steel phantom, -14.6 % in the bone phantom and -24.6 % in the titanium phantom. The results have shown that Acuros accounts for the heterogeneities investigated with a maximum deviation of -5.1 %. The uncertainty associated with the TLDs calibrated in the PMMA phantom is ±8.2 % (2SD). PMID:24866931
Hunting for the Xb via radiative decays
NASA Astrophysics Data System (ADS)
Li, Gang; Wang, Wei
2014-06-01
In this paper, we study radiative decays of the Xb, the counterpart of the famous X(3872) in the bottomonium sector as a candidate for a meson-meson molecule, into γϒ(nS) (n = 1, 2, 3). Since it is likely that the Xb lies below the BB̄* threshold and the mass difference between the neutral and charged bottom mesons is small compared to the binding energy of the Xb, the isospin-violating decay mode Xb → ϒ(nS)π+π- would be greatly suppressed. This promotes the importance of the radiative decays. We use an effective Lagrangian based on heavy quark symmetry to explore the rescattering mechanism and calculate the partial widths. Our results show that the partial widths into γϒ(nS) are about 1 keV, and thus the branching fractions may be sizeable, given that the total width may also be smaller than a few MeV, like that of the X(3872). These radiative decay modes are of great importance in the experimental search for the Xb, particularly at hadron colliders. An observation of the Xb would provide a deeper insight into exotic hadron spectroscopy and help unravel the nature of the states connected by heavy quark symmetry.
NASA Astrophysics Data System (ADS)
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity-modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
Evaluation of six TPS algorithms in computing entrance and exit doses.
Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex
2014-01-01
Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PMID:24892349
XB-70A during startup and ramp taxi
NASA Technical Reports Server (NTRS)
1968-01-01
The XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 was used to collect in-flight information for use in the design of future supersonic aircraft, military and civilian. This 35-second video shows the startup of the XB-70A airplane engines, the beginning of its taxi to the runway, and a turn on the ramp that shows the unique configuration of this aircraft.
Alagar, Ananda Giri Babu; Kadirampatti Mani, Ganesh; Karunakaran, Kaviarasu
2016-01-01
Small fields (smaller than 4 × 4 cm2) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in small fields and in heterogeneous media is prone to larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements were made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons with square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the whole CADD curve's deviation from the measured curve, was calculated. It was found that for air and lung heterogeneities, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, with the deviation gradually reducing as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is far from it. Also, all algorithms show maximum deviation in lower-density materials compared to high-density materials. PMID:26894345
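The %NRMSD figure of merit used above summarizes the deviation of a whole calculated depth-dose curve from the measured one in a single number. A minimal sketch follows (illustrative only; normalizing by the maximum of the measured curve is an assumption, since the abstract does not state its normalization convention):

```python
import numpy as np

def percent_nrmsd(measured, calculated):
    """Percent normalized root-mean-squared deviation between a measured
    and a calculated depth-dose curve sampled at the same depths.
    NOTE: normalizing by the measured-curve maximum is an assumption."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
    return 100.0 * rmsd / measured.max()

# A calculated curve that is uniformly 2 units high, against a measured
# curve whose maximum is 100, gives a 2% NRMSD.
print(percent_nrmsd([100.0, 80.0, 60.0], [102.0, 82.0, 62.0]))  # 2.0
```

Because the whole curve enters the RMS, a single large interface discrepancy and many small systematic ones can yield the same score; the abstract's per-setup comparison should be read with that in mind.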
Hunting for the Xb via hidden bottomonium decays
NASA Astrophysics Data System (ADS)
Li, Gang; Zhou, Zhu
2015-02-01
In this work, we study the isospin-conserving hidden-bottomonium decay Xb → ϒ(1S)ω, where the Xb is taken to be the counterpart of the famous X(3872) in the bottomonium sector as a candidate for a meson-meson molecule. Since it is likely that the Xb lies below the BB̄* threshold and the mass difference between the neutral and charged bottom mesons is small compared to the binding energy of the Xb, the isospin-violating decay mode Xb → ϒ(nS)π+π- would be greatly suppressed. We use an effective Lagrangian based on heavy quark symmetry to explore the rescattering mechanism of Xb → ϒ(1S)ω and calculate the partial widths. Our results show that the partial width for Xb → ϒ(1S)ω is about tens of keV. Taking into account the fact that the total width of the Xb may be smaller than a few MeV, like that of the X(3872), the calculated branching ratios may reach the order of 10⁻². These hidden-bottomonium decay modes are of great importance in the experimental search for the Xb, particularly at hadron colliders. The associated studies of the hidden-bottomonium decays Xb → ϒ(nS)γ, ϒ(nS)ω, and BB̄γ may help us investigate the structure of the Xb more deeply. The experimental observation of the Xb would provide further insight into the spectroscopy of exotic states and help probe the structure of the states connected by heavy quark symmetry.
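The quoted branching-ratio estimate is simple arithmetic, BR = Γ_partial / Γ_total. A quick order-of-magnitude check with illustrative numbers matching the scales the abstract states (tens of keV partial width; a total width assumed here to be 1.2 MeV, standing in for "smaller than a few MeV"):

```python
# Illustrative numbers only: the abstract quotes "tens of keVs" for the
# partial width and a total width "smaller than a few MeV"; the specific
# values below are assumptions for the order-of-magnitude check.
partial_width_kev = 30.0    # assumed partial width, keV
total_width_kev = 1.2e3     # assumed total width (1.2 MeV), keV

branching_ratio = partial_width_kev / total_width_kev
print(f"BR = {branching_ratio:.3f}")  # BR = 0.025, i.e. order 10^-2
```

Any choice of partial width in the tens of keV against a total width of order 1 MeV lands in the same 10⁻² regime claimed in the abstract.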
Truncated form of tenascin-X, XB-S, interacts with mitotic motor kinesin Eg5.
Endo, Toshiya; Ariga, Hiroyoshi; Matsumoto, Ken-ichi
2009-01-01
XB-S is a protein with an amino-terminal-truncated form of tenascin-X (TNXB). However, the precise roles of XB-S in vivo are unknown. In this study, to determine the role of XB-S in vivo, we screened XB-S-binding proteins. FLAG-tagged XB-S was transiently introduced into 293T cells. Then its associated proteins were purified by immunoprecipitation using an anti-FLAG antibody and its components were identified by mass spectrometric analyses. Mitotic motor kinesin Eg5 was identified in the immunoprecipitates. XB-S and Eg5 proteins were co-localized in the cytoplasm in interphase and mitosis, but XB-S did not localize on mitotic spindle microtubules, on which Eg5 prominently localized in mitosis. As for Eg5 binding to XB-S, glutathione S-transferase-fused XB-S expressed in vitro directly bound to full-length Eg5 translated in reticulocyte lysate, and the XB-S-binding region was located in the motor domain of Eg5. Furthermore, during cell cycle progression XB-S showed a similar expression profile to that of Eg5. These results suggest possible involvement of XB-S in the function of Eg5. PMID:18679583
77 FR 70147 - Fish and Wildlife Service 0648-XB088
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-23
... Notice of Availability (NOA) in the Federal Register on April 12, 2010 (75 FR 18482). The official... (75 FR 41157) extending the public comment period an additional 45 days to August 30, 2010. During the... National Oceanic and Atmospheric Administration Fish and Wildlife Service 0648-XB088 Environmental...
Width of the exotic Xb(5568 ) state through its strong decay to Bs0π+
NASA Astrophysics Data System (ADS)
Agaev, S. S.; Azizi, K.; Sundu, H.
2016-06-01
The width of the newly observed exotic state Xb(5568) is calculated via its dominant strong decay to Bs0π+ using the QCD sum rule method on the light cone in conjunction with the soft-meson approximation. To this end, the vertex XbBsπ is studied and the strong coupling gXbBsπ is computed, employing for the Xb(5568) state the interpolating diquark-antidiquark current of the [su][b̄d̄] type. The obtained prediction for the decay width of Xb(5568) is confronted with the experimental data of the D0 Collaboration, and nice agreement is found.
Chung, J; Kim, J; Lee, J; Kim, Y
2014-06-01
Purpose: The present study aimed to investigate the dosimetric impact of the anisotropic analytic algorithm (AAA) and the Acuros XB (AXB) algorithm on lung stereotactic ablative radiation therapy plans using flattening filter-free (FFF) beams. Methods: We retrospectively analyzed 10 patients. The dosimetric parameters for the target and organs at risk (OARs) from the treatment plans calculated with these dose calculation algorithms were compared. Technical parameters, such as the computation times and the total monitor units (MUs), were also evaluated. Results: A comparison of DVHs from AXB and AAA showed that the AXB plans produced a higher maximum PTV dose, by an average of 4.40% with statistical significance, but a slightly lower mean PTV dose, by an average of 5.20%, compared to the AAA plans. The maximum dose to the lung was slightly higher in AXB than in AAA. For both algorithms, the values of V5, V10 and V20 for the ipsilateral lung were higher in the AXB plans than in the AAA plans; these parameters for the contralateral lung, however, were comparable. The differences in maximum dose to the spinal cord and heart were also small. The computation time of AXB was found to be shorter than that of AAA, with a relative difference of 13.7%. The average number of MUs for all patients was higher in the AXB plans than in the AAA plans. These results indicate that the differences between AXB and AAA are large in heterogeneous regions with low density. Conclusion: AXB provided advantages such as calculation accuracy and reduced computation time in lung stereotactic ablative radiotherapy (SABR) using FFF beams, especially for VMAT planning. Therefore, in dose calculations involving media of different densities, careful attention should be paid to the impact of different heterogeneity correction algorithms. The authors report no conflicts of interest.
A summary of XB-70 sonic boom signature data
NASA Astrophysics Data System (ADS)
Maglieri, Domenic J.; Sothcott, Victor E.; Keefer, Thomas N., Jr.
1992-04-01
A compilation is provided of measured sonic boom signature data derived from 39 supersonic flights (43 passes) of the XB-70 airplane over the Mach number range of 1.11 to 2.92 and an altitude range of 30500 to 70300 ft. These tables represent a convenient hard copy version of available electronic files which include over 300 digitized sonic boom signatures with their corresponding spectra. Also included in the electronic files is information regarding ground track position, aircraft operating conditions, and surface and upper air weather observations for each of the 43 supersonic passes. In addition to the sonic boom signature data, a description is also provided of the XB-70 data base that was placed on electronic files along with a description of the method used to scan and digitize the analog/oscillograph sonic boom signature time histories. Such information is intended to enhance the value and utilization of the electronic files.
Application of the QCD light cone sum rule to tetraquarks: The strong vertices XbXbρ and XcXcρ
NASA Astrophysics Data System (ADS)
Agaev, S. S.; Azizi, K.; Sundu, H.
2016-06-01
The full version of the QCD light-cone sum rule method is applied to tetraquarks containing a single heavy b or c quark. To this end, investigations of the strong vertices XbXbρ and XcXcρ are performed, where Xb = [su][b̄d̄] and Xc = [su][c̄d̄] are the exotic states built of four quarks of different flavors. The strong coupling constants GXbXbρ and GXcXcρ corresponding to these vertices are found using the ρ-meson leading- and higher-twist distribution amplitudes. In the calculations, Xb and Xc are treated as scalar bound states of a diquark and an antidiquark.
XB130 expression in human osteosarcoma: a clinical and experimental study.
Wang, Xiaohui; Wang, Ruiguo; Liu, Zhaolong; Hao, Fengyun; Huang, Hai; Guo, Wenchen
2015-01-01
Identifying prognostic factors for osteosarcoma (OS) aids in the selection of patients who require more aggressive management. XB130 is a newly characterized adaptor protein that has been reported to be a prognostic factor in certain tumor types. However, the association between XB130 expression and the prognosis of OS remains unknown. In the present study, we investigated the association between XB130 expression and clinicopathologic features and prognosis in patients with OS, and further investigated its potential role in OS cells in vitro and in vivo. A retrospective immunohistochemical study of XB130 was performed on archival formalin-fixed paraffin-embedded specimens from 60 pairs of osteosarcoma and noncancerous bone tissues, and XB130 expression was compared with clinicopathological parameters. We then investigated the effect of XB130 silencing on invasion in vitro and lung metastasis in vivo of a human OS cell line. Immunohistochemical assays revealed that XB130 expression in OS tissues was significantly higher than that in corresponding noncancerous bone tissues (P=0.001). In addition, high XB130 expression occurred more frequently in OS tissues with advanced clinical stage (P=0.002) and positive distant metastasis (P=0.001). Moreover, OS patients with high XB130 expression had significantly shorter overall survival and disease-free survival (both P<0.001) compared with patients with low expression of XB130. Univariate and multivariate analyses showed that high XB130 expression and distant metastasis were independent poor prognostic factors. We showed that XB130 depletion by RNA interference inhibited invasion of XB130-rich U2OS cells in vitro and lung metastasis in vivo. This is the first study to reveal that XB130 overexpression may be related to the prediction of metastatic potential and poor prognosis for OS patients, suggesting that XB130 may serve as a prognostic marker for the optimization of clinical treatments. Furthermore
Development of Outboard Nacelle for the XB-36 Airplane
NASA Technical Reports Server (NTRS)
Nuber, Robert J.
1947-01-01
An investigation of two 1/14-scale model configurations of an outboard nacelle for the XB-36 airplane was made in the Langley two-dimensional low-turbulence tunnels over a range of airplane lift coefficients (C_L = 0.409 to C_L = 0.943) for three representative flow conditions. The purpose of the investigation was to develop a low-drag wing-nacelle pusher combination which incorporated an internal air-flow system. The present investigation has led to the development of a nacelle which had external drag coefficients of similar order of magnitude to those obtained previously from tests of an inboard nacelle configuration at the corresponding operating lift coefficients and from approximately one-third to one-half of those of conventional tractor designs having the same ratio of wing thickness to nacelle diameter.
Kinetic simulations of X-B and O-X-B mode conversion
Arefiev, A. V.; Du Toit, E. J.; Vann, R. G. L.; Köhn, A.; Holzhauer, E.; Shevchenko, V. F.
2015-12-10
We have performed fully kinetic simulations of X-B and O-X-B mode conversion in one- and two-dimensional setups using the PIC code EPOCH. We have recovered the linear dispersion relation for electron Bernstein waves by employing relatively low amplitude incoming waves. The setups presented here can be used to study non-linear regimes of X-B and O-X-B mode conversion.
Development of Inboard Nacelle for the XB-36 Airplane
NASA Technical Reports Server (NTRS)
Nuber, Robert J.
1947-01-01
A series of investigations of several 1/14-scale models of an inboard nacelle for the XB-36 airplane was made in the Langley two-dimensional low-turbulence tunnels. The purpose of these investigations was to develop a low-drag wing-nacelle pusher combination which incorporated an internal air-flow system. As a result of these investigations, a nacelle was developed which had external drag coefficients considerably lower than the original basic form, with the external nacelle drag approximately one-half to two-thirds of those of conventional tractor designs. The largest reductions in drag resulted from sealing the gaps between the wing flaps and nacelle, reducing the thickness of the nacelle trailing-edge lip, and bringing the under-wing air inlet to the wing leading edge. It was found that without the engine cooling fan adequate cooling air would be available for all conditions of flight except for cruise and climb at 40,000 feet. Sufficient oil cooling at an altitude of 40,000 feet may be obtained by the use of flap-type exit doors.
Huang, Xiaoen; Liu, Xueying; Chen, Xiuhua; Snyder, Anita; Song, Wen-Yuan
2013-01-01
Programmed cell death has been associated with plant immunity and senescence. The receptor kinase XA21 confers resistance to bacterial blight disease of rice (Oryza sativa) caused by Xanthomonas oryzae pv. oryzae (Xoo). Here we show that the XA21 binding protein 3 (XB3) is capable of inducing cell death when overexpressed in Nicotiana benthamiana. XB3 is a RING finger-containing E3 ubiquitin ligase that has been positively implicated in XA21-mediated resistance. Mutation abolishing the XB3 E3 activity also eliminates its ability to induce cell death. Phylogenetic analysis of XB3-related sequences suggests a family of proteins (XB3 family) with members from diverse plant species. We further demonstrate that members of the XB3 family from rice, Arabidopsis and citrus all trigger a similar cell death response in Nicotiana benthamiana, suggesting an evolutionarily conserved role for these proteins in regulating programmed cell death in the plant kingdom. PMID:23717500
Toba, Hiroaki; Wang, Yingchun; Bai, Xiaohui; Zamel, Ricardo; Cho, Hae-Ra; Liu, Hongmei; Lira, Alonso; Keshavjee, Shaf; Liu, Mingyao
2015-01-01
Proliferation of bronchioalveolar stem cells (BASCs) is essential for epithelial repair. XB130 is a novel adaptor protein involved in the regulation of epithelial cell survival, proliferation and migration through the PI3K/Akt pathway. To determine the role of XB130 in airway epithelial injury repair and regeneration, a naphthalene-induced airway epithelial injury model was used with XB130 knockout (KO) mice and their wild type (WT) littermates. In XB130 KO mice, at days 7 and 14, small airway epithelium repair was significantly delayed with fewer number of Club cells (previously called Clara cells). CCSP (Club cell secreted protein) mRNA expression was also significantly lower in KO mice at day 7. At day 5, there were significantly fewer proliferative epithelial cells in the KO group, and the number of BASCs significantly increased in WT mice but not in KO mice. At day 7, phosphorylation of Akt, GSK-3β, and the p85α subunit of PI3K was observed in airway epithelial cells in WT mice, but to a much lesser extent in KO mice. Microarray data also suggest that PI3K/Akt-related signals were regulated differently in KO and WT mice. An inhibitory mechanism for cell proliferation and cell cycle progression was suggested in KO mice. XB130 is involved in bronchioalveolar stem cell and Club cell proliferation, likely through the PI3K/Akt/GSK-3β pathway. PMID:26360608
XB-70A #1 liftoff with TB-58A chase aircraft
NASA Technical Reports Server (NTRS)
1960-01-01
This photo shows XB-70A #1 taking off on a research flight, escorted by a TB-58 chase plane. The TB-58 (a prototype B-58 modified as a trainer) had a dash speed of Mach 2. This allowed it to stay close to the XB-70 as it conducted its research maneuvers. When the XB-70 was flying at or near Mach 3, the slower TB-58 could often keep up with it by flying lower and cutting inside the turns in the XB-70's flight path when these occurred. The XB-70 was the world's largest experimental aircraft. It was capable of flight at speeds of three times the speed of sound (roughly 2,000 miles per hour) at altitudes of 70,000 feet. It was used to collect in-flight information for use in the design of future supersonic aircraft, military and civilian. The major objectives of the XB-70 flight research program were to study the airplane's stability and handling characteristics, to evaluate its response to atmospheric turbulence, and to determine the aerodynamic and propulsion performance. In addition there were secondary objectives to measure the noise and friction associated with airflow over the airplane and to determine the levels and extent of the engine noise during takeoff, landing, and ground operations. The XB-70 was about 186 feet long, 33 feet high, with a wingspan of 105 feet. Originally conceived as an advanced bomber for the United States Air Force, the XB-70 was limited to production of two aircraft when it was decided to limit the aircraft's mission to flight research. The first flight of the XB-70 was made on Sept. 21, 1964. The number two XB-70 was destroyed in a mid-air collision on June 8, 1966. Program management of the NASA-USAF research effort was assigned to NASA in March 1967. The final flight was flown on Feb. 4, 1969. Designed by North American Aviation (later North American Rockwell and still later, a division of Boeing) the XB-70 had a long fuselage with a canard or horizontal stabilizer mounted just behind the crew compartment. It had a sharply swept 65
Epitaxial semimetallic HfxZr1-xB2 templates for optoelectronic integration on silicon
NASA Astrophysics Data System (ADS)
Roucka, Radek; An, YuJin; Chizmeshya, Andrew V. G.; Tolle, John; Kouvetakis, John; D'Costa, Vijay R.; Menéndez, José; Crozier, Peter
2006-12-01
High quality heteroepitaxial HfxZr1-xB2 (x = 0-1) buffers were grown directly on Si(111). The compositional dependence of the film structure and ab initio elastic constants were used to show that hexagonal HfxZr1-xB2 possesses tensile in-plane strain (0.5%) as grown. High quality HfB2 films were also grown on strain-compensating ZrB2-buffered Si(111). Initial reflectivity measurements of thick ZrB2 films agree with first-principles calculations, which predict that the reflectivity of HfB2 increases by 20% relative to ZrB2 in the 2-8 eV range. These tunable structural, thermoelastic, and optical properties suggest that HfxZr1-xB2 templates should be suitable for broad integration of III-nitrides with Si.
One-dimensional full wave simulation on XB mode conversion in electron cyclotron heating
Kim, S. H.; Lee, H. Y.; Jo, J. G.; Hwang, Y. S.
2014-06-15
The XB mode conversion in electron cyclotron resonance frequency heating has been studied in detail through 1D full wave simulation. The field pattern depends on the density scale length, and the wave absorption near the upper hybrid resonance is maximized beyond the R(X) mode cutoff density for an optimized density scale length. The simulated mode conversion efficiency has been compared with that of an analytic formula, showing good agreement except for the phase-dependent term of the X wave. The mode conversion efficiency is calculated for oblique injections as well, and it is found that the efficiency decreases as the injection angle increases. A short magnetic field scale length is confirmed to relax the short density scale length condition maximizing the XB mode conversion efficiency. Finally, the simulation code is used to analyze the mode conversion and power absorption of a pre-ionization plasma in the Versatile Experiment Spherical Torus.
Dosimetric evaluation of photon dose calculation under jaw and MLC shielding
Fogliata, A.; Clivio, A.; Vanetti, E.; Nicolini, G.; Belosi, M. F.; Cozzi, L.
2013-10-15
Purpose: The accuracy of photon dose calculation algorithms in out-of-field regions is often neglected, despite its importance for organs at risk and peripheral dose evaluation. The present work has assessed this for the anisotropic analytical algorithm (AAA) and the Acuros-XB algorithms implemented in the Eclipse treatment planning system. Specifically, the regions shielded by the jaw, or the MLC, or both MLC and jaw for flattened and unflattened beams have been studied. Methods: The accuracy in out-of-field dose under different conditions was studied for the two algorithms. Measured depth doses out of the field, for different field sizes and various distances from the beam edge, were compared with the corresponding AAA and Acuros-XB calculations in water. Four volumetric modulated arc therapy plans (in the RapidArc form) were optimized in a water equivalent phantom, PTW Octavius, to obtain a region always shielded by the MLC (or MLC and jaw) during the delivery. Doses to different points located in the shielded region and in a target-like structure were measured with an ion chamber, and results were compared with the AAA and Acuros-XB calculations. Photon beams of 6 and 10 MV, flattened and unflattened, were used for the tests. Results: Good agreement between calculated and measured depth doses was found using both algorithms for all points measured at depths greater than 3 cm. The mean dose differences (±1 SD) were −8% ± 16%, −3% ± 15%, −16% ± 18%, and −9% ± 16% for measurements vs AAA calculations and −10% ± 14%, −5% ± 12%, −19% ± 17%, and −13% ± 14% for Acuros-XB, for 6X, 6 flattening-filter-free (FFF), 10X, and 10FFF beams, respectively. The same figures for dose differences relative to the open beam central axis dose were: −0.1% ± 0.3%, 0.0% ± 0.4%, −0.3% ± 0.3%, and −0.1% ± 0.3% for AAA and −0.2% ± 0.4%, −0.1% ± 0.4%, −0.5% ± 0.5%, and −0.3% ± 0.4% for Acuros-XB. Buildup dose was overestimated with AAA, while Acuros-XB gave
Performance of the 19XB 10-Stage Axial-Flow Compressor
NASA Technical Reports Server (NTRS)
Downing, Richard M.; Finger, Harold B.
1947-01-01
The 19XB compressor, which replaces the 19B compressor and has the same length and diameter as the 19B compressor, was designed with 10 stages to deliver 30 pounds of air per second at a pressure ratio of 4.17 at an equivalent speed of 17,000 rpm; the 19B was designed with six stages for a pressure ratio of 2.7 at the same weight flow and speed as the 19XB compressor. The performance characteristics of the new compressor were determined at the NACA Cleveland laboratory at the request of the Bureau of Aeronautics, Navy Department. Results are presented of the investigation made to evaluate the over-all performance of the compressor, the effects of possible leakage past the rotor rear air seal, the effects of inserting instruments in each row of stator blades and in the first row of outlet guide vanes, and the effects of changing the temperature and the pressure of the inlet air. The results of the interstage surveys are also presented.
Rotation-vibration energy level clustering in the X̃ ²B₁ ground electronic state of PH2
NASA Astrophysics Data System (ADS)
Yurchenko, S. N.; Thiel, W.; Jensen, Per; Bunker, P. R.
2006-10-01
We use previously determined potential energy surfaces for the Renner-coupled X̃ ²B₁ and Ã ²A₁ electronic states of the phosphino (PH2) free radical in a calculation of the energies and wavefunctions of highly excited rotational and vibrational energy levels of the X̃ state. We show how spin-orbit coupling, the Renner effect, rotational excitation, and vibrational excitation affect the clustered energy level patterns that occur. We consider both 4-fold rotational energy level clustering caused by centrifugal distortion, and vibrational energy level pairing caused by local mode behaviour. We also calculate ab initio dipole moment surfaces for the X̃ and Ã states, and the X̃-Ã transition moment surface, in order to obtain spectral intensities.
Valence fluctuations of europium in the boride Eu4Pd(29+x)B8.
Gumeniuk, Roman; Schnelle, Walter; Ahmida, Mahmoud A; Abd-Elmeguid, Mohsen M; Kvashnina, Kristina O; Tsirlin, Alexander A; Leithe-Jasper, Andreas; Geibel, Christoph
2016-03-23
We synthesized a high-quality sample of the boride Eu4Pd(29+x)B8 (x = 0.76) and studied its structural and physical properties. Its tetragonal structure was solved by direct methods and confirmed to belong to the Eu4Pd29B8 type. All studied physical properties indicate a valence fluctuating Eu state, with a valence decreasing continuously from about 2.9 at 5 K to 2.7 at 300 K. Maxima in the T dependence of the susceptibility and thermopower at around 135 K and 120 K, respectively, indicate a valence fluctuation energy scale on the order of 300 K. Analysis of the magnetic susceptibility evidences some inconsistencies when using the ionic interconfigurational fluctuation (ICF) model, thus suggesting a stronger relevance of hybridization between 4f and valence electrons compared to standard valence-fluctuating Eu systems. PMID:26895077
Measured Sonic Boom Signatures Above and Below the XB-70 Airplane Flying at Mach 1.5 and 37,000 Feet
NASA Technical Reports Server (NTRS)
Maglieri, Domenic J.; Henderson, Herbert R.; Tinetti, Ana F.
2011-01-01
During the 1966-67 Edwards Air Force Base (EAFB) National Sonic Boom Evaluation Program, a series of in-flight flow-field measurements were made above and below the USAF XB-70 using an instrumented NASA F-104 aircraft with a specially designed nose probe. These were accomplished during three XB-70 flights at about Mach 1.5, at about 37,000 ft, and at gross weights of about 350,000 lbs. Six supersonic passes with the F-104 probe aircraft were made through the XB-70 shock flow-field: one above and five below the XB-70. Separation distances ranged from about 3000 ft above and 7000 ft to the side of the XB-70, and from about 2000 ft to 5000 ft below the XB-70. Complex near-field "sawtooth-type" signatures were observed in all cases. At ground level, the XB-70 shock waves had not coalesced into the classical two-shock sonic boom N-wave signature but contained three shocks. Included in this report are a description of the generating and probe airplanes, the in-flight and ground pressure-measuring instrumentation, the flight test procedure and aircraft positioning, surface and upper-air weather observations, and the six in-flight pressure signatures from the three flights.
NASA Astrophysics Data System (ADS)
Lonski, P.; Taylor, M. L.; Hackworth, W.; Phipps, A.; Franich, R. D.; Kron, T.
2014-03-01
Different treatment planning system (TPS) algorithms calculate radiation dose in different ways. This work compares measurements made in vivo to the dose calculated at out-of-field locations using three different commercially available algorithms in the Eclipse treatment planning system. LiF:Mg,Cu,P thermoluminescent dosimeter (TLD) chips were placed with 1 cm build-up at six locations on the contralateral side of 5 patients undergoing radiotherapy for breast cancer. TLD readings were compared to calculations of the Pencil Beam Convolution (PBC), Anisotropic Analytical Algorithm (AAA), and Acuros XB (XB) algorithms. AAA predicted zero dose at points beyond 16 cm from the field edge. In the same region PBC returned an unrealistically constant result independent of distance, while XB showed good agreement with measured data, although it consistently underestimated dose by ~0.1% of the prescription dose. At points closer to the field edge XB was the superior algorithm, agreeing with TLD results to within 15% of measured dose. Both AAA and PBC showed mixed agreement, with overall discrepancies considerably greater than those of XB. While XB is certainly the preferable algorithm, it should be noted that TPS algorithms in general are not designed to calculate dose at peripheral locations, and calculation results in such regions should be treated with caution.
The origin of the n-type behavior in rare earth borocarbide Y1-xB28.5C4.
Mori, Takao; Nishimura, Toshiyuki; Schnelle, Walter; Burkhardt, Ulrich; Grin, Yuri
2014-10-28
Synthesis conditions, morphology, and thermoelectric properties of Y1-xB28.5C4 were investigated. Y1-xB28.5C4 is the compound with the lowest metal content in a series of homologous rare earth borocarbonitrides, which have been attracting interest as high temperature thermoelectric materials because they can embody the long-awaited counterpart to boron carbide, one of the few thermoelectric materials with a history of commercialization. It was revealed that the presence of boron carbide inclusions was the origin of the p-type behavior previously observed for Y1-xB28.5C4 in contrast to Y1-xB15.5CN and Y1-xB22C2N. In comparison with that of previous small flux-grown single crystals, a metal-poor composition of YB40C6 (Y0.71B28.5C4) in the synthesis successfully yielded sintered bulk Y1-xB28.5C4 samples apparently free of boron carbide inclusions. "Pure" Y1-xB28.5C4 was found to exhibit the same attractive n-type behavior as the other rare earth borocarbonitrides even though it is the most metal-poor compound among the series. Calculations of the electronic structure were carried out for Y1-xB28.5C4 as a representative of the series of homologous compounds and reveal a pseudo gap-like electronic density of states near the Fermi level mainly originating from the covalent borocarbonitride network. PMID:25091113
Signature of the presence of a third body orbiting around XB 1916-053
NASA Astrophysics Data System (ADS)
Iaria, R.; Di Salvo, T.; Gambino, A. F.; Del Santo, M.; Romano, P.; Matranga, M.; Galiano, C. G.; Scarano, F.; Riggio, A.; Sanna, A.; Pintore, F.; Burderi, L.
2015-10-01
Context. The ultra-compact dipping source XB 1916-053 has an orbital period of close to 50 min and a companion star with a very low mass (less than 0.1 M⊙). The orbital period derivative of the source was estimated to be 1.5(3) × 10⁻¹¹ s/s by analysing the delays associated with the dip arrival times obtained from observations spanning 25 years, from 1978 to 2002. Aims: The known orbital period derivative is extremely large and can be explained by invoking an extreme, non-conservative mass transfer rate that is not easily justifiable. We extended the analysed data to span the 37 years from 1978 to 2014, to verify whether the larger sample of data can be fitted with a quadratic term or whether a different scenario has to be considered. Methods: We obtained 27 delays associated with the dip arrival times from data covering 37 years and used different models to fit the time delays with respect to a constant period model. Results: We find that the quadratic form alone does not fit the data. The data are well fitted using a sinusoidal term plus a quadratic function or, alternatively, with a series of sinusoidal terms that can be associated with a modulation of the dip arrival times due to the presence of a third body in an elliptical orbit. We infer that for a conservative mass transfer scenario the modulation of the delays can be explained by invoking the presence of a third body with a mass between 0.10 and 0.14 M⊙, an orbital period around the X-ray binary system of close to 51 yr, and an eccentricity of 0.28 ± 0.15. In a non-conservative mass transfer scenario we estimate that the fraction of matter yielded by the degenerate companion star and accreted onto the neutron star is β = 0.08, the neutron star mass is ≥2.2 M⊙, and the companion star mass is 0.028 M⊙. In this case, we explain the sinusoidal modulation of the delays by invoking the presence of a third body with an orbital period of 26 yr and a mass of 0.055 M⊙. Conclusions: From the analysis of the delays
Simulated Altitude Performance of Combustor of Westinghouse 19XB-1 Jet-Propulsion Engine
NASA Technical Reports Server (NTRS)
Childs, J. Howard; McCafferty, Richard J.
1948-01-01
A 19XB-1 combustor was operated under conditions simulating zero-ram operation of the 19XB-1 turbojet engine at various altitudes and engine speeds. The combustion efficiencies and the altitude operational limits were determined; data were also obtained on the character of the combustion, the pressure drop through the combustor, and the combustor-outlet temperature and velocity profiles. At altitudes about 10,000 feet below the operational limits, the flames were yellow and steady and the temperature rise through the combustor increased with fuel-air ratio throughout the range of fuel-air ratios investigated. At altitudes near the operational limits, the flames were blue and flickering and the combustor was sluggish in its response to changes in fuel flow. At these high altitudes, the temperature rise through the combustor increased very slowly as the fuel flow was increased and attained a maximum at a fuel-air ratio much leaner than the over-all stoichiometric; further increases in fuel flow resulted in decreased values of combustor temperature rise and increased resonance until a rich-limit blow-out occurred. The approximate operational ceiling of the engine as determined by the combustor, using AN-F-28, Amendment-3, fuel, was 30,400 feet at a simulated engine speed of 7500 rpm and increased as the engine speed was increased. At an engine speed of 16,000 rpm, the operational ceiling was approximately 48,000 feet. Throughout the range of simulated altitudes and engine speeds investigated, the combustion efficiency increased with increasing engine speed and with decreasing altitude. The combustion efficiency varied from over 99 percent at operating conditions simulating high engine speed and low altitude operation to less than 50 percent at conditions simulating operation at altitudes near the operational limits. The isothermal total pressure drop through the combustor was 1.82 times as great as the inlet dynamic pressure. As expected from theoretical
Kato, Akari; Endo, Toshiya; Abiko, Shun; Ariga, Hiroyoshi; Matsumoto, Ken-ichi
2008-08-15
XB-S is an amino-terminally truncated protein of tenascin-X (TNX) in humans. The levels of the XB-S transcript, but not those of TNX transcripts, were increased upon hypoxia. We identified a critical hypoxia-responsive element (HRE) localized to a GT-rich element positioned from −1410 to −1368 in the XB-S promoter. Using an electrophoretic mobility shift assay (EMSA), we found that the HRE forms a DNA-protein complex with Sp1 and that the GG positioned at −1379 and −1378 is essential for the binding of the nuclear complex. Transfection experiments in SL2 cells, an Sp1-deficient model system, with an Sp1 expression vector demonstrated that the region from −1380 to −1371, an HRE, is sufficient for efficient activation of the XB-S promoter upon hypoxia. The EMSA and a chromatin immunoprecipitation (ChIP) assay showed that Sp1, together with the transcriptional repressor histone deacetylase 1 (HDAC1), binds to the HRE of the XB-S promoter under normoxia and that hypoxia causes dissociation of HDAC1 from the Sp1/HDAC1 complex. The HRE promoter activity was induced in the presence of a histone deacetylase inhibitor, trichostatin A, even under normoxia. Our results indicate that the hypoxia-induced activation of the XB-S promoter is regulated through dissociation of HDAC1 from an Sp1-binding HRE site.
Dip Spectroscopy of the Low Mass X-Ray Binary XB 1254-690
NASA Technical Reports Server (NTRS)
Smale, Alan P.; Church, M. J.; Balucinska-Church, M.; White, Nicholas E. (Technical Monitor)
2002-01-01
We observed the low mass X-ray binary XB 1254-690 with the Rossi X-ray Timing Explorer in 2001 May and December. During the first observation, strong dipping on the 3.9-hr orbital period and a high degree of variability were observed, along with "shoulders" approximately 15% deep during extended intervals on each side of the main dips. The first observation also included pronounced flaring activity. The non-dip spectrum obtained using the PCA instrument was well described by a two-component model consisting of a blackbody with kT = 1.30 +/- 0.10 keV plus a cut-off power law representation of Comptonized emission with a power law photon index of 1.10 +/- 0.46 and a cut-off energy of 5.9 (+3.0/-1.4) keV. The intensity decrease in the shoulders of dipping is energy-independent, consistent with electron scattering in the outer ionized regions of the absorber. In deep dipping, the depth of dipping reached 100% in the energy band below 5 keV, indicating that all emitting regions were covered by absorber. Intensity-selected dip spectra were well fit by a model in which the point-like blackbody is rapidly covered, while the extended Comptonized emission is progressively overlapped by the absorber, with the covering fraction rising to 95% in the deepest portion of the dip. The intensity of this component in the dip spectra could be modeled by a combination of electron scattering and photoelectric absorption. Dipping did not occur during the 2001 December observation, but remarkably, both bursting and flaring were observed contemporaneously.
Hyperactivation of the Human Plasma Membrane Ca2+ Pump PMCA h4xb by Mutation of Glu99 to Lys*
Mazzitelli, Luciana R.; Adamo, Hugo P.
2014-01-01
The transport of calcium to the extracellular space carried out by plasma membrane Ca2+ pumps (PMCAs) is essential for maintaining low Ca2+ concentrations in the cytosol of eukaryotic cells. The activity of PMCAs is controlled by autoinhibition. Autoinhibition is relieved by the binding of Ca2+-calmodulin to the calmodulin-binding autoinhibitory sequence, which in the human PMCA is located in the C-terminal segment and results in a PMCA of high maximal velocity of transport and high affinity for Ca2+. Autoinhibition involves the intramolecular interaction between the autoinhibitory domain and a not well defined region of the molecule near the catalytic site. Here we show that the fusion of GFP to the C terminus of the h4xb PMCA causes partial loss of autoinhibition by specifically increasing the Vmax. Mutation of residue Glu99 to Lys in the cytosolic portion of the M1 transmembrane helix at the other end of the molecule brought the Vmax of the h4xb PMCA to near that of the calmodulin-activated enzyme without increasing the apparent affinity for Ca2+. Altogether, the results suggest that the autoinhibitory interaction of the extreme C-terminal segment of the h4 PMCA is disturbed by changes of negatively charged residues of the N-terminal region. This would be consistent with a recently proposed model of an autoinhibited form of the plant ACA8 pump, although some differences are noted. PMID:24584935
Direct X-B mode conversion for high-β national spherical torus experiment in nonlinear regime
Ali Asgarian, M. E-mail: maa@msu.edu; Parvazian, A.; Abbasi, M.; Verboncoeur, J. P.
2014-09-15
Electron Bernstein waves (EBW) can be effective for heating and driving currents in spherical tokamak plasmas. Power can be coupled to the EBW via mode conversion of the extraordinary (X) mode wave. The most common and successful approach to studying the conditions for optimized mode conversion to the EBW was evaluated analytically and numerically using a cold plasma model and an approximate kinetic model. The major drawback of using radio frequency waves was the lack of continuous-wave sources at very high frequencies (above the electron plasma frequency), which has been addressed. A future milestone is to approach the high-power regime, where nonlinear effects become significant, exceeding the limits of validity of present linear theory. An appropriate tool is therefore particle-in-cell (PIC) simulation, which retains most of the nonlinear physics without approximations. In this work, we study the stages of the direct X-B mode conversion process using the PIC method for an incident wave frequency f0 = 15 GHz and maximum amplitude E0 = 10^5 V/m in the National Spherical Torus Experiment (NSTX). The modelling shows a considerable reduction in X-B mode conversion efficiency, C_modelling = 0.43, due to the presence of nonlinearities. Comparison of system properties with the linear state reveals predominant nonlinear effects; the EBW wavelength and group velocity increase by about 36% and 17%, respectively, in comparison with the linear regime.
On the unusual temperature dependence of the upper critical field in YNi2-xFexB2C
NASA Astrophysics Data System (ADS)
Kumary, T. Geetha; Kalavathi, S.; Valsakumar, M. C.; Hariharan, Y.; Radhakrishnan, T. S.
1997-02-01
Measurement of the upper critical field in YNi2-xFexB2C is reported for x = 0, 0.05, 0.10, and 0.15. An anomalous positive curvature is observed for a range of temperatures close to Tc for all x. As x is increased, the temperature interval over which the curvature in Hc2(T) is positive is reduced, and the system shows a tendency toward the usual behaviour exhibited by conventional low-temperature superconductors. Most theories based on a Fermi-liquid normal state seem inadequate to explain this anomalous behaviour. It is speculated that this anomalous behaviour of Hc2(T) signifies the presence of strong correlations in pristine YNi2B2C and that strong-correlation effects become less and less important upon substitution of Ni with Fe.
Yamanaka, Daisuke; Akama, Takeshi; Chida, Kazuhiro; Minami, Shiro; Ito, Koichi; Hakuno, Fumihiko; Takahashi, Shin-Ichiro
2016-01-01
Actin-crosslinking proteins control actin filament networks and bundles and contribute to various cellular functions including regulation of cell migration, cell morphology, and endocytosis. Phosphatidylinositol 3-kinase-associated protein (PI3KAP)/XB130 has been reported to be localized to actin filaments (F-actin) and required for cell migration in thyroid carcinoma cells. Here, we show a role for PI3KAP/XB130 as an actin-crosslinking protein. First, we found that the carboxyl terminal region of PI3KAP/XB130 containing amino acid residues 830-840 was required and sufficient for localization to F-actin in NIH3T3 cells, and this region is directly bound to F-actin in vitro. Moreover, actin-crosslinking assay revealed that recombinant PI3KAP/XB130 crosslinked F-actin. In general, actin-crosslinking proteins often multimerize to assemble multiple actin-binding sites. We then investigated whether PI3KAP/XB130 could form a multimer. Blue native-PAGE analysis showed that recombinant PI3KAP/XB130 was detected at 250-1200 kDa although the molecular mass was approximately 125 kDa, suggesting that PI3KAP/XB130 formed multimers. Furthermore, we found that the amino terminal 40 amino acids were required for this multimerization by co-immunoprecipitation assay in HEK293T cells. Deletion mutants of PI3KAP/XB130 lacking the actin-binding region or the multimerizing region did not crosslink actin filaments, indicating that actin binding and multimerization of PI3KAP/XB130 were necessary to crosslink F-actin. Finally, we examined roles of PI3KAP/XB130 on endocytosis, an actin-related biological process. Overexpression of PI3KAP/XB130 enhanced dextran uptake in HEK 293 cells. However, most of the cells transfected with the deletion mutant lacking the actin-binding region incorporated dextran to a similar extent as control cells. Taken together, these results demonstrate that PI3KAP/XB130 crosslinks F-actin through both its actin-binding region and multimerizing region and plays
NASA Astrophysics Data System (ADS)
Barnard, R.; Primini, F.; Garcia, M. R.; Kolb, U. C.; Murray, S. S.
2015-04-01
CXOM31 J004252.030+413107.87 is one of the brightest X-ray sources within the D25 region of M31, and is associated with a globular cluster known as B135; we therefore call this X-ray source XB135. XB135 is a low-mass X-ray binary (LMXB) that apparently exhibited hard state characteristics at 0.3-10 keV luminosities of 4-6 × 10³⁸ erg s⁻¹, and the hard state is only observed below ~10% Eddington. If true, the accretor would be a high-mass black hole (BH) (≳50 M⊙); such a BH may be formed from direct collapse of a metal-poor, high-mass star, and the very low metallicity of B135 (0.015 Z⊙) makes such a scenario plausible. We have obtained new XMM-Newton and Chandra HRC observations to shed light on the nature of this object. We find from the HRC observation that XB135 is a single point source located close to the center of B135. The new XMM-Newton spectrum is consistent with a rapidly spinning ~10-20 M⊙ BH in the steep power law or thermal dominant state, but inconsistent with the hard state that we previously assumed. We cannot formally reject three-component emission models that have been associated with high-luminosity neutron star (NS) LMXBs (known as Z-sources); however, we prefer a BH accretor. We note that deeper observation of XB135 could discriminate against an NS accretor.
Optimization of permanent magnetic properties in melt spun Co82-xHf12+xB6 (x = 0-4) nanocomposites
NASA Astrophysics Data System (ADS)
Chang, H. W.; Liao, M. C.; Shih, C. W.; Chang, W. C.; Shaw, C. C.
2015-05-01
Magnetic properties of melt-spun Co82-xHf12+xB6 ribbons made at various wheel speeds have been studied. The ribbons with x = 0-1 do not crystallize easily and thus display soft magnetic behavior even at a wheel speed of 10 m/s. In contrast, the ribbons with x = 1.5-4 at optimized wheel speed exhibit good permanent magnetic properties of Br = 0.41-0.59 T, iHc = 120-400 kA/m, and (BH)max = 10.6-48.1 kJ/m³. The optimal magnetic properties of Br = 0.59 T, iHc = 384 kA/m, and (BH)max = 48.1 kJ/m³ are achieved for Co80Hf14B6 ribbons at a wheel speed of 30 m/s. X-ray diffraction, thermo-magnetic analysis, and transmission electron microscopy results show that the good hard magnetic properties of the Co82-xHf12+xB6 ribbons (x = 2-4) originate from the Co11Hf2 phase well coupled with the Co phase. The change in magnetic properties for Co82-xHf12+xB6 ribbons spun at various wheel speeds is correlated with microstructure and phase constitution. The strong exchange-coupling effect between magnetic grains in the ribbons with x = 2-3 at a wheel speed of 30 m/s leads to remarkable permanent magnetic properties. These results suggest that the optimized Co82-xHf12+xB6 (x = 2-3) ribbons are much more suitable than the others (x = 0-1.5 and 4) for making rare-earth- and Pt-free magnets.
Strong electron-phonon coupling in Be1-xB2C2: ab initio studies
NASA Astrophysics Data System (ADS)
Moudden, A. H.
2008-07-01
Several structures for off-stoichiometric beryllium diboride dicarbide Be1-xB2C2 have been designed, and their properties studied with first-principles density functional methods. Among the most stable phases examined, the layered hexagonal structures are shown to exhibit various features in the electronic properties and in the lattice dynamics reminiscent of superconducting magnesium diboride and alkaline-earth-intercalated graphites. For the substoichiometric composition x ≈ 1/3, the system is found to be metallic with a moderately strong electron-phonon coupling, with a predominant contribution arising from high-frequency stretching modes modulating the σ-bonding of the B-C network and a weaker contribution in the medium-frequency range of the phonon spectrum, arising from the intercalant motion coupled to the π-bonding states. Further, anharmonicities emerging from the proximity of the Fermi level to the σ-band edge contribute to reducing the phonon softening, hence stabilizing the structure. All these effects appear to combine favourably to produce high-temperature phonon-mediated superconductivity.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntax and semantics of algorithmic languages. The terms "letter," "word," and "alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
NASA Astrophysics Data System (ADS)
Gutowski, J.
1985-03-01
In a previous paper [R. Baumert, I. Broser, J. Gutowski, and A. Hoffman, Phys. Rev. B 27, 6263 (1983)] it was shown that high-density, high-resolution excitation spectroscopy gives new information on the electronic and vibronic excited states of the acceptor-bound-exciton complex (A0,XA) with two holes from the A valence band in CdS. We now report corresponding results for the (A0,XB) configuration, which includes one hole from the second, B, valence band. This complex is unstable against a very fast B→A hole conversion and therefore gives rise to a set of excitation resonances of the I1 luminescence arising from the (A0,XA) recombination. A detailed theoretical analysis of the energetic structure of the (A0,XB) complex, including the dependence on the excitation intensity and on an applied magnetic field, allows the correct assignment of the excitation resonances to the (A0,XB) fine-structure levels originating from the interparticle exchange interactions. It is shown that the magnetic field is a suitable means of distinguishing the different (A0,XB) ground-state levels. The magnetic field also creates allowed transitions which are dipole forbidden in the zero-field case. A self-contained model of the (A0,XB) complex can thus be developed, including all symmetry states and yielding adequate values for the exchange energies within the complex.
NASA Technical Reports Server (NTRS)
Fleming, William A.; Dietz, Robert O., Jr.
1957-01-01
The performance characteristics of the 19B-8 and 19XB-1 turbojet engines and the windmilling-drag characteristics of the 19B-8 engine were determined in the Cleveland altitude wind tunnel. The investigations were conducted on the 19B-8 engine at simulated altitudes from 5000 to 25,000 feet with various free-stream ram-pressure ratios and on the 19XB-1 engine at simulated altitudes from 5000 to 30,000 feet with approximately static free-stream conditions. Data for these two engines are presented to show the effect of altitude, free-stream ram-pressure ratio, and tail-pipe-nozzle area on engine performance. A 21-percent reduction in tail-pipe-nozzle area of the 19B-8 engine increased the jet thrust 43 percent, the net thrust 72 percent, and the fuel consumption 64 percent. An increase in free-stream ram-pressure ratio raised the jet thrust and the air flow and lowered the net thrust throughout the entire range of engine speeds for the 19B-8 engine. At similar operating conditions, the corrected jet thrust and corrected air flow were approximately the same for both engines, and the corrected specific fuel consumption based on jet thrust was lower for the 19XB-1 engine than for the 19B-8 engine. The thrust and air-flow data obtained with both engines at various altitudes for a given free-stream ram-pressure ratio were generalized to standard sea-level atmospheric conditions. The performance parameters involving fuel consumption generalized only at high engine speeds at simulated altitudes as high as 15,000 feet. The windmilling drag of the 19B-8 engine increased rapidly as the airspeed was increased.
NASA Astrophysics Data System (ADS)
Sluchanko, N. E.; Azarevich, A. N.; Anisimov, M. A.; Bogach, A. V.; Gavrilkin, S. Yu.; Gilmanov, M. I.; Glushkov, V. V.; Demishev, S. V.; Khoroshilov, A. L.; Dukhnenko, A. V.; Mitsen, K. V.; Shitsevalova, N. Yu.; Filippov, V. B.; Voronov, V. V.; Flachbart, K.
2016-02-01
Based on low-temperature resistivity, heat capacity, and magnetization investigations, we show that the unusually strong suppression of superconductivity in LuxZr1-xB12 (x < 8%) BCS-type superconductors is caused by the emergence of static spin polarization in the vicinity of nonmagnetic lutetium impurities. The analysis of the obtained results points to a formation of static magnetic moments with μeff ≈ 6 μB per Lu3+ ion (1S0 ground state, 4f14 configuration) incorporated in the superconducting ZrB12 matrix. The size of these spin-polarized nanodomains was estimated to be about 5 Å.
NASA Astrophysics Data System (ADS)
Abbasi, Mustafa; Sadeghi, Yahya; Sobhanian, Samad; Asgarian, Mohammad Ali
2016-03-01
The electron Bernstein wave (EBW) is typically the only wave in the electron cyclotron (EC) range that can be applied in spherical tokamaks for heating and current drive (H&CD). Spherical tokamaks (STs) generally operate in high-β regimes, in which the usual EC ordinary (O) and extraordinary (X) modes are cut off. Since the existence of EBWs in the nonlinear regime has recently been investigated, the next step is to study the nonlinear phenomena predicted to occur at high levels of injected power. In this regard, parametric instabilities are considered the major loss channels at the X-B conversion, so their effects in the upper hybrid resonance (UHR) region, where they can reduce the X-B conversion efficiency, must be taken into account. In the case of EBW heating (EBH) at high power density, nonlinear effects can arise. In particular, at the UHR position the group velocity is strongly reduced, which creates a high energy density and consequently a high-amplitude electric field. A part of the input wave can therefore decay into daughter waves via parametric instability (PI). In the present work, the excitation of ion Bernstein waves as the dominant decay channel is investigated, and an estimate of the threshold power in terms of experimental parameters for the fundamental mode of the instability is proposed.
NASA Astrophysics Data System (ADS)
Jakubowicz, J.; Le Breton, J.-M.
2006-06-01
Nanocrystalline (Nd,Dy)16(Fe,Co)76-xTixB8 magnets were prepared by mechanical alloying and subsequent heat treatment at 973-1073 K for 30-60 min. An addition of 0.5 at% of Ti results in an increase of coercivity from 796 to 1115 kA m-1. Partial substitution of Nd by Dy results in an additional increase of coercivity up to 1234 kA m-1. Mössbauer investigations show that for x ⩽ 1 the (Nd,Dy)16(Fe,Co)76-xTixB8 powders are single phase. For higher Ti contents (x > 1) the mechanically alloyed powders heat treated at 973 K are no longer single phase, and the coercivity decreases due to the presence of an amorphous phase. A heat treatment at a higher temperature (1073 K) for a longer time (1 h) results in the full recrystallisation of the powders. The mean hyperfine field of the Nd2Fe14B phase decreases for titanium contents 0 ⩽ x ⩽ 1, and remains constant for x > 1. This indicates that the Ti content in the Nd2Fe14B phase reaches its maximum value.
Search for the Xb and other hidden-beauty states in the π+π- ϒ (1 S) channel at ATLAS
NASA Astrophysics Data System (ADS)
Aad, G.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Almond, J.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baas, A. E.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Backus Mayes, J.; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Bansal, V.; Bansil, H. 
S.; Barak, L.; Baranov, S. P.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batley, J. R.; Battaglia, M.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernat, P.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, C.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boddy, C. R.; Boehler, M.; Boek, T. T.; Bogaerts, J. A.; Bogdanchikov, A. G.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. 
S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozic, I.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brennan, A. J.; Brenner, R.; Bressler, S.; Bristow, K.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, J.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.
2015-01-01
This Letter presents a search for a hidden-beauty counterpart of the X(3872) in the mass ranges 10.05-10.31 GeV and 10.40-11.00 GeV, in the channel Xb → π+π- ϒ(1S)(→ μ+μ-), using 16.2 fb-1 of √s = 8 TeV pp collision data collected by the ATLAS detector at the LHC. No evidence for new narrow states is found, and upper limits are set on the product of the Xb cross section and branching fraction, relative to those of the ϒ(2S), at the 95% confidence level using the CLs approach. These limits range from 0.8% to 4.0%, depending on mass. For masses above 10.1 GeV, the expected upper limits from this analysis are the most restrictive to date. Searches for production of the ϒ(13DJ), ϒ(10860), and ϒ(11020) states also reveal no significant signals.
NASA Astrophysics Data System (ADS)
Sluchanko, N. E.; Khoroshilov, A. L.; Anisimov, M. A.; Azarevich, A. N.; Bogach, A. V.; Glushkov, V. V.; Demishev, S. V.; Krasnorussky, V. N.; Samarin, N. A.; Shitsevalova, N. Yu.; Filippov, V. B.; Levchenko, A. V.; Pristas, G.; Gabani, S.; Flachbart, K.
2015-06-01
The magnetoresistance (MR) Δρ/ρ of the cage-glass compound HoxLu1-xB12 with various concentrations of magnetic holmium ions (x ≤ 0.5) has been studied in detail concurrently with magnetization M(T) and Hall effect investigations on high-quality single crystals at temperatures 1.9-120 K and in magnetic fields up to 80 kOe. The undertaken analysis of Δρ/ρ allows us to conclude that the large negative magnetoresistance (nMR) observed in the vicinity of the Néel temperature is caused by scattering of charge carriers on magnetic clusters of Ho3+ ions, and that these nanosize regions with antiferromagnetic (AF) exchange inside may be considered as short-range-order AF domains. It was shown that the Yosida relation -Δρ/ρ ~ M2 provides an adequate description of the nMR effect for the case of Langevin-type behavior of the magnetization. Moreover, a reduction of the Ho-ion effective magnetic moments in the range 3-9 μB was found to develop both with temperature lowering and with increasing holmium content. A phenomenological description of the large positive quadratic contribution Δρ/ρ ~ μD2H2, which dominates in HoxLu1-xB12 in the intermediate temperature range 20-120 K, allows us to estimate the exponential changes of the drift mobility, μD ~ T-α with α = 1.3-1.6 depending on Ho concentration. An even more complex behavior of the magnetoresistance has been found in the AF state of HoxLu1-xB12, where an additional linear positive component was observed and attributed to charge-carrier scattering on the spin density wave (SDW). High-precision measurements of Δρ/ρ = f(H, T) have also allowed us to reconstruct the magnetic H-T phase diagram of Ho0.5Lu0.5B12 and to resolve its magnetic structure as a superposition of 4f (based on localized moments) and 5d (based on SDW) components.
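The Yosida scaling invoked above (-Δρ/ρ ∝ M², with M following a Langevin law) can be sketched numerically. The sketch below is illustrative only: the temperature, effective moment, and field grid are stand-in values, not parameters fitted to the HoxLu1-xB12 data.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x limit x/3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    safe = np.where(small, 1.0, x)               # avoid division by ~0
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

# Illustrative (not fitted) parameters: mu_eff ~ 6 mu_B at T = 20 K.
MU_B_OVER_KB = 6.717e-5                          # mu_B / k_B in K/Oe
T = 20.0                                         # temperature, K
mu_eff = 6.0                                     # effective moment, Bohr magnetons
H = np.linspace(100.0, 8.0e4, 400)               # field grid in Oe (up to 80 kOe)

M = langevin(mu_eff * MU_B_OVER_KB * H / T)      # M in units of the saturation value
nMR = M ** 2                                     # Yosida relation: -d(rho)/rho ~ M^2

# In the low-field Langevin limit M ~ H, so the nMR should grow as H^2:
slope = np.polyfit(np.log(H[:50]), np.log(nMR[:50]), 1)[0]
```

On this grid the fitted log-log slope comes out close to 2 at low field and drops as M begins to saturate, which is the qualitative signature the Yosida relation implies for Langevin-type magnetization.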
NASA Technical Reports Server (NTRS)
Bennett, Charles V.
1947-01-01
An investigation of the low-speed, power-off stability and control characteristics of a 1/20-scale model of the Consolidated Vultee XB-53 airplane has been conducted in the Langley free-flight tunnel. In the investigation it was found that, with flaps neutral, satisfactory flight behavior at low speeds was obtainable with an increase in the height of the vertical tail and with the inboard slats opened. In the flap-down, slat-open condition the longitudinal stability was satisfactory, but it was impossible to obtain satisfactory lateral-flight characteristics, even with the increase in the height of the vertical tail, because of the negative effective dihedral, low directional stability, and large adverse yawing moments of the ailerons.
NASA Technical Reports Server (NTRS)
Wolowicz, C. H.; Yancey, R. B.
1973-01-01
Preliminary correlations of flight-determined and predicted stability and control characteristics of the XB-70-1 reported in NASA TN D-4578 were subject to uncertainties in several areas which necessitated a review of prediction techniques particularly for the longitudinal characteristics. Reevaluation and updating of the original predictions, including aeroelastic corrections, for six specific flight-test conditions resulted in improved correlations of static pitch stability with flight data. The original predictions for the pitch-damping derivative, on the other hand, showed better correlation with flight data than the updated predictions. It appears that additional study is required in the application of aeroelastic corrections to rigid model wind-tunnel data and the theoretical determination of dynamic derivatives for this class of aircraft.
NASA Astrophysics Data System (ADS)
Liao, Chang-Zhong; Dong, Cheng; Shih, Kaimin; Zeng, Lingmin; He, Bing; Cao, Wenhuan; Yang, Lihong
2015-03-01
In recent years, the materials in the B-Mg-Ni system have been intensively studied due to their excellent properties of hydrogen storage and superconductivity. Solving the crystal structure of phases in this system will facilitate an understanding of the mechanism of their physical properties. In this study, we report the preparation, crystal structure and physical properties of a new ternary phase Mg3+xNi7-xB2 in the B-Mg-Ni system. The Mg3+xNi7-xB2 phase was prepared by solid-state reactions at 1073 K and its crystal structure was determined and refined using X-ray powder diffraction data. The Mg3+xNi7-xB2 phase crystallizes in the Ca3Ni7B2 structure type (space group R-3m, no. 166) with a=4.9496(3)-5.0105(6) Å, c=20.480(1)-20.581(1) Å depending on the x value, where x varies from 0.17 to 0.66. Two samples with nominal compositions Mg10Ni20B6 and Mg12Ni18B6 were characterized by magnetization and electric resistivity measurements in the temperature range from 5 K to room temperature. Both samples exhibited metallic behavior and showed spin-glass-like behavior with a spin freezing temperature (Tf) around 33 K. A study of the Cu-doping effect showed that limited Cu content can be doped into the Mg3+xNi7-xB2 compound and Tf decreases as the Cu content increases.
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Maglieri, Domenic J.; Driver, Cornelius; Bobbitt, Percy J.
2011-01-01
A detailed geometric description, in wave drag format, has been developed for the Convair B-58 and North American XB-70-1 delta wing airplanes. These descriptions have been placed on electronic files, the contents of which are described in this paper. They are intended for use in wave drag and sonic boom calculations. Included in the electronic files and in the present paper are photographs and 3-view drawings of the two airplanes, tabulated geometric descriptions of each vehicle and its components, and comparisons of the electronic file outputs with existing data. The comparisons include a pictorial of the two airplanes based on the present geometric descriptions, and cross-sectional area distributions for both the normal Mach cuts and oblique Mach cuts above and below the vehicles. Good correlation exists between the area distributions generated in the late 1950s and 1960s and the present files. The availability of these electronic files facilitates further validation of sonic boom prediction codes through the use of two existing databases on these airplanes, which were acquired in the 1960s and have not been fully exploited.
Competing anisotropies on 3d sub-lattice of YNi{sub 4–x}Co{sub x}B compounds
Caraballo Vivas, R. J.; Rocco, D. L.; Reis, M. S.; Caldeira, L.; Coelho, A. A.
2014-08-14
The magnetic anisotropy of 3d sub-lattices has an important role in the overall magnetic properties of hard magnets. Intermetallic alloys with boron (R-Co/Ni-B, for instance) belong to this family of hard magnets and are useful objects for understanding the magnetic behavior of the 3d sub-lattice, especially when the rare earth ions R are nonmagnetic, as in the YCo{sub 4}B ferromagnetic material. Interestingly, YNi{sub 4}B is a paramagnetic material and Ni ions do not contribute to the magnetic anisotropy. We therefore focused our attention on the YNi{sub 4–x}Co{sub x}B series, with x = 0, 1, 2, 3, and 4. The magnetic anisotropy of these compounds is described in more detail using statistical and preferential models of Co occupation among the possible Wyckoff positions in the CeCo{sub 4}B-type hexagonal structure. We found that the preferential model is the most suitable to explain the magnetization experimental data.
NASA Astrophysics Data System (ADS)
Yue, Ming; Zhang, Jiuxing; Tian, Meng; Liu, X. B.
2006-04-01
Nd2Fe14B/α-Fe isotropic bulk nanocomposite magnets were prepared by the spark plasma sintering (SPS) technique using melt-spun powders with a nominal composition of NdxFe94-xB6, with x = 6, 8, and 10. It was found that a higher sintering temperature improved the densification of the magnets but simultaneously deteriorated their magnetic properties due to excess crystal grain growth. An increased compressive pressure led to better magnetic properties and higher density for the SPS magnets. An increase in the Nd amount resulted in a gradual increase in intrinsic coercivity and an obvious reduction of the remanence of the magnets. A magnet with the composition Nd8Fe86B6 possessed a Br of 0.99 T, an Hci of 386 kA/m, and a (BH)max of 101 kJ/m3 under the optimal sintering conditions. In addition, microstructure observation using transmission electron microscopy showed that, compared with the starting powders, the full-density magnets nearly maintained their morphology, indicating no sign of pronounced crystal grain growth during the densification process.
A 2.15 hr ORBITAL PERIOD FOR THE LOW-MASS X-RAY BINARY XB 1832-330 IN THE GLOBULAR CLUSTER NGC 6652
Engel, M. C.; Heinke, C. O.; Sivakoff, G. R.; Elshamouty, K. G.; Edmonds, P. D. E-mail: heinke@ualberta.ca
2012-03-10
We present a candidate orbital period for the low-mass X-ray binary (LMXB) XB 1832-330 in the globular cluster NGC 6652 using a 6.5 hr Gemini South observation of the optical counterpart of the system. Light curves in g' and r' for two LMXBs in the cluster, sources A and B in previous literature, were extracted and analyzed for periodicity using the ISIS image subtraction package. A clear sinusoidal modulation is evident in both of A's curves, of amplitude {approx}0.11 mag in g' and {approx}0.065 mag in r', while B's curves exhibit rapid flickering, of amplitude {approx}1 mag in g' and {approx}0.5 mag in r'. A Lomb-Scargle test revealed a 2.15 hr periodic variation in the magnitude of A with a false alarm probability less than 10{sup -11}, and no significant periodicity in the light curve for B. Though it is possible that saturated stars in the vicinity of our sources partially contaminated our signal, the identification of A's binary period is nonetheless robust.
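A Lomb-Scargle period search of the kind used for source A can be sketched on a synthetic, unevenly sampled light curve. Everything below (sampling, the 0.11 mag amplitude, the noise level) is an illustrative stand-in for the Gemini photometry, not the actual data; scipy's `lombscargle` supplies the periodogram.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic light curve: a 2.15 hr sinusoid sampled unevenly over 6.5 hr.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 6.5, 300))          # observation times, hours
P_TRUE = 2.15                                    # injected period, hr
y = 0.11 * np.sin(2 * np.pi * t / P_TRUE) + rng.normal(0.0, 0.02, t.size)

periods = np.linspace(0.5, 6.0, 2000)            # trial periods, hr
omega = 2 * np.pi / periods                      # angular frequencies, rad/hr
power = lombscargle(t, y - y.mean(), omega)      # classical LS periodogram

p_best = periods[np.argmax(power)]               # recovered period
```

With a clear sinusoidal modulation well above the noise, the periodogram peak lands at the injected period; a false alarm probability would then quantify how unlikely such a peak is under pure noise.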
NASA Technical Reports Server (NTRS)
Cahill, Jones F.
1947-01-01
An investigation was made in the Langley two-dimensional low-turbulence tunnel on a wing section for the XB-36 airplane equipped with a double slotted flap to determine the effect on lift and drag of various slot-entry skirt extensions. A skirt extension of 0.787 deg. was found to provide the best combination of high maximum lift with flap deflected and low drag with flap retracted. The data showed that the maximum lift at intermediate (20 deg. to 45 deg.) flap deflections was lowered considerably by the slot-entry extension, but at high flap deflections the effect was small. An increase in Reynolds number from 2.4 million to 6.0 million increased the maximum lift coefficient at a flap deflection of 55 deg. from 3.12 to 3.30, and from 1.18 to 1.40 for the flap-retracted condition, but did not greatly affect the maximum lift coefficient for intermediate flap deflections. The flap and fore-flap load data indicated that the maximum lift coefficients at high flap deflections are limited by a breakdown in the flow over the flaps.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
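The basic concepts named above (a population, fitness-based selection, crossover, mutation) can be illustrated with a minimal genetic algorithm on the classic OneMax toy problem. The population size, mutation rate, and bit count below are arbitrary illustrative choices, not values from the report.

```python
import random

random.seed(1)
N_BITS, POP_SIZE, GENERATIONS, MUT_RATE = 32, 60, 80, 0.02

def fitness(ind):
    """OneMax: the fitness of a bit string is its number of 1 bits."""
    return sum(ind)

def tournament(pop):
    """Binary tournament selection: return the fitter of two random individuals."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    nxt = []
    while len(nxt) < POP_SIZE:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_BITS)                 # one-point crossover
        child = p1[:cut] + p2[cut:]
        # Flip each bit with a small probability (mutation).
        child = [1 - g if random.random() < MUT_RATE else g for g in child]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
```

With these settings the population reliably converges to (or very near) the all-ones string, the known optimum, illustrating the Darwinian "survival of the fittest" loop the abstract describes.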
NASA Technical Reports Server (NTRS)
Boyd, Bemrose
1948-01-01
Pressure losses through the combustion chamber and the combustion efficiency of the 19B-2 and 19B-8 jet-propulsion engines and the combustion efficiency of the 19XB-1 jet-propulsion engine are presented. Data were obtained from an investigation of the complete engine in the NACA Cleveland altitude wind tunnel over a range of simulated altitudes from 5000 to 30,000 feet and tunnel Mach numbers from less than 0.100 to 0.455. The combustion-chamber pressure loss due to friction was higher for the 19B-2 combustion chamber than for the 19B-8. The 19B-2 combustion chamber had a screen of 40-percent open area interposed between the compressor outlet and the combustion-chamber inlet. The screen for the 19B-8 combustion chamber had a 60-percent open area, which, except for a small difference in tail-pipe-nozzle outlet area, represents the only point of difference between the standard 19B-2 and 19B-8 combustion chambers. The pressure loss due to heat addition to the flowing gases in the combustion chamber was approximately the same for the 19B-2 and 19B-8 configurations. Altitude and tunnel Mach number had no significant effect on the over-all total-pressure loss through the combustion chamber. A decrease in tail-pipe-nozzle outlet area (tail cone out) resulted in a decrease in combustion-chamber total-pressure loss at high engine speeds.
Swift Reveals a ~5.7 Day Super-orbital Period in the M31 Globular Cluster X-Ray Binary XB158
NASA Astrophysics Data System (ADS)
Barnard, R.; Garcia, M. R.; Murray, S. S.
2015-03-01
The M31 globular cluster X-ray binary XB158 (a.k.a. Bo 158) exhibits intensity dips on a 2.78 hr period in some observations, but not others. The short period suggests a low mass ratio and an asymmetric, precessing disk subject to additional tidal torques from the donor star, since the disk crosses the 3:1 resonance. Previous theoretical three-dimensional smoothed particle hydrodynamics modeling suggested a super-orbital disk precession period 29 ± 1 times the orbital period, i.e., ~81 ± 3 hr. We conducted a Swift monitoring campaign of 30 observations over ~1 month in order to search for evidence of such a super-orbital period. Fitting the 0.3-10 keV Swift X-Ray Telescope luminosity light curve with a sinusoid yielded a period of 5.65 ± 0.05 days, and a >5σ improvement in χ2 over the best-fit constant-intensity model. A Lomb-Scargle periodogram revealed that periods of 5.4-5.8 days were detected at a >3σ level, with a peak at 5.6 days. We consider this strong evidence for a 5.65 day super-orbital period, ~70% longer than the predicted period. The 0.3-10 keV luminosity varied by a factor of ~5, consistent with variations seen in long-term monitoring with Chandra. We conclude that other X-ray binaries exhibiting similar long-term behavior are likely also systems with low mass ratios and super-orbital periods.
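The sinusoid-versus-constant comparison described here can be sketched as a linear least-squares fit (sine, cosine, and constant terms) at a fixed trial period, followed by a χ² comparison against the constant-intensity model. The light curve below is synthetic and its numbers (amplitude, errors, cadence) are hypothetical, not the Swift measurements.

```python
import numpy as np

# Synthetic light curve: 30 observations over ~1 month with a 5.65 d signal.
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 30.0, 30))          # observation times, days
sigma = 0.1                                      # per-point uncertainty
P = 5.65                                         # trial period, days
y = 1.0 + 0.4 * np.sin(2 * np.pi * t / P) + rng.normal(0.0, sigma, t.size)

# At fixed period, a sinusoid + constant model is linear in its coefficients.
X = np.column_stack([np.sin(2 * np.pi * t / P),
                     np.cos(2 * np.pi * t / P),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

chi2_sin = np.sum(((y - X @ coef) / sigma) ** 2)
chi2_const = np.sum(((y - y.mean()) / sigma) ** 2)
delta_chi2 = chi2_const - chi2_sin               # improvement from the sinusoid
```

A large Δχ² for only two extra fitted parameters is what underlies the quoted >5σ preference for the sinusoidal model; in practice the trial period would be scanned, as the periodogram in the abstract does.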
He, TianWei; Jiang, YeHua; Zhou, Rong; Feng, Jing E-mail: jfeng@seas.harvard.edu
2015-08-21
The mechanical properties, electronic structure, and thermodynamic properties of the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides were calculated by first-principles methods. The elastic constants show that these ternary borides are mechanically stable. The formation enthalpies of the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides lie in the range −118.09 kJ/mol to −40.14 kJ/mol. The electronic structures and chemical bonding characteristics are analyzed through the density of states. Mo{sub 2}FeB{sub 2} has the largest shear and Young's moduli because of its strong chemical bonding, with values of 204.3 GPa and 500.3 GPa, respectively. MoCo{sub 2}B{sub 4} shows the lowest degree of anisotropy due to the lack of a strong direction in the bonding. The Debye temperature of MoFe{sub 2}B{sub 4} is the largest among the six phases, which means that MoFe{sub 2}B{sub 4} possesses the best thermal conductivity. The enthalpy is an approximately linear function of temperature above 300 K. The entropy of these compounds increases rapidly when the temperature is below 450 K. The Gibbs free energy decreases with increasing temperature. MoCo{sub 2}B{sub 4} has the lowest Gibbs free energy, which indicates the strongest formation ability among the Mo{sub 2}XB{sub 2} and MoX{sub 2}B{sub 4} (X = Fe, Co, Ni) ternary borides.
NASA Technical Reports Server (NTRS)
Arnaiz, H. H.; Peterson, J. B., Jr.; Daugherty, J. C.
1980-01-01
A program was undertaken by NASA to evaluate the accuracy of a method for predicting the aerodynamic characteristics of large supersonic cruise airplanes. This program compared predicted and flight-measured lift, drag, angle of attack, and control surface deflection for the XB-70-1 airplane for 14 flight conditions with a Mach number range from 0.76 to 2.56. The predictions were derived from the wind-tunnel test data of a 0.03-scale model of the XB-70-1 airplane fabricated to represent the aeroelastically deformed shape at a 2.5 Mach number cruise condition. Corrections for shape variations at the other Mach numbers were included in the prediction. For most cases, differences between predicted and measured values were within the accuracy of the comparison. However, there were significant differences at transonic Mach numbers. At a Mach number of 1.06 differences were as large as 27 percent in the drag coefficients and 20 deg in the elevator deflections. A brief analysis indicated that a significant part of the difference between drag coefficients was due to the incorrect prediction of the control surface deflection required to trim the airplane.
NASA Astrophysics Data System (ADS)
Giri, Jyotsnendu; Pradhan, Pallab; Somani, Vaibhav; Chelawat, Hitesh; Chhatre, Shreerang; Banerjee, Rinti; Bahadur, Dhirendra
Nanomagnetic particles have great potential in biomedical applications such as MRI contrast enhancement, magnetic separation, targeted delivery, and hyperthermia. In this paper, we have explored the possibility of biomedical applications of the [Fe1-xBxFe2O4, B = Mn, Co] ferrites. Superparamagnetic particles of substituted ferrites [Fe1-xBxFe2O4, B = Mn, Co (x = 0-1)] and their fatty-acid-coated water-based ferrofluids have been successfully prepared by the co-precipitation technique using NH4OH/TMAH (tetramethylammonium hydroxide) as base. An in vitro cytocompatibility study of the different magnetic fluids was done using HeLa (human cervical carcinoma) cell lines. Co2+-substituted ferrite systems (e.g. CoFe2O4) are more toxic than Mn2+-substituted ferrite systems (e.g. MnFe2O4, Fe0.6Mn0.4Fe2O4). The latter are as cytocompatible as Fe3O4. Thus, Fe1-xMnxFe2O4 could be useful in biomedical applications such as MRI contrast agents and hyperthermia treatment of cancer.
Sci—Thur AM: YIS - 05: 10X-FFF VMAT for Lung SABR: an Investigation of Peripheral Dose
Mader, J; Mestrovic, A
2014-08-15
Flattening Filter Free (FFF) beams exhibit high dose rates and reduced head scatter, leaf transmission, and leakage radiation. For VMAT lung SABR, treatment time can be significantly reduced using high-dose-rate FFF beams while maintaining plan quality and accuracy. Another possible advantage offered by FFF beams for VMAT lung SABR is a reduction in peripheral dose. The focus of this study was to investigate and quantify the reduction in peripheral dose offered by FFF beams for VMAT lung SABR. The peripheral doses delivered by VMAT lung SABR treatments using FFF and flattened beams were investigated for the Varian TrueBeam linac. This study was conducted in three stages: (1) ion chamber measurement of peripheral dose for various plans; (2) validation of AAA, Acuros XB, and Monte Carlo for peripheral dose using the measured data; and (3) use of the validated Monte Carlo model to evaluate peripheral doses for 6 VMAT lung SABR treatments. Three energies, 6X, 10X, and 10X-FFF, were used for all stages. Measured data indicate that 10X-FFF delivers the lowest peripheral dose of the three energies studied. The AAA and Acuros XB dose calculation algorithms were identified as inadequate, and Monte Carlo was validated for accurate peripheral dose prediction. The Monte Carlo-calculated VMAT lung SABR plans show a significant reduction in peripheral dose for 10X-FFF plans compared with the standard 6X plans, while no significant reduction was seen compared with 10X. This reduction, combined with shorter treatment time, makes 10X-FFF beams the optimal choice for superior VMAT lung SABR treatments.
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication.
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
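The first of the four strategies, exhaustive enumeration of haplotype vectors, can be illustrated on a single individual's unphased genotype. The 0/1/2 per-locus coding and the helper below are a toy construction for illustration, not the paper's formalism: each heterozygous locus can be phased two ways, so k heterozygous loci yield 2^k ordered haplotype pairs.

```python
from itertools import product

def enumerate_haplotype_pairs(genotype):
    """List every ordered pair of haplotypes consistent with an unphased
    genotype coded per locus as 0 = ref/ref, 1 = het, 2 = alt/alt."""
    het_sites = [i for i, g in enumerate(genotype) if g == 1]
    pairs = set()
    for phase in product([0, 1], repeat=len(het_sites)):
        h1 = [g // 2 for g in genotype]          # 0 unless homozygous alt
        h2 = h1[:]
        # Assign one phasing choice per heterozygous site.
        for site, allele in zip(het_sites, phase):
            h1[site], h2[site] = allele, 1 - allele
        pairs.add((tuple(h1), tuple(h2)))
    return pairs

# Two heterozygous loci -> 2^2 = 4 ordered phase assignments.
pairs = enumerate_haplotype_pairs([1, 0, 1, 2])
```

Enumeration like this is only feasible when the number of heterozygous loci is small, which is exactly why the paper also considers combinatorial optimization such as simulated annealing.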
NASA Technical Reports Server (NTRS)
Peterson, J. B., Jr.; Mann, M. J.; Sorrells, R. B., III; Sawyer, W. C.; Fuller, D. E.
1980-01-01
The results of calculations necessary to extrapolate performance data on an XB-70-1 wind tunnel model to full scale at Mach numbers from 0.76 to 2.53 are presented. The extrapolation was part of a joint program to evaluate performance prediction techniques for large flexible supersonic airplanes similar to a supersonic transport. The extrapolation procedure included: interpolation of the wind tunnel data at the specific conditions of the flight test points; determination of the drag increments to be applied to the wind tunnel data, such as spillage drag, boundary layer trip drag, and skin friction increments; and estimates of the drag items not represented on the wind tunnel model, such as bypass doors, roughness, protuberances, and leakage drag. In addition, estimates of the effects of flexibility of the airplane were determined.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
NASA Astrophysics Data System (ADS)
Alling, B.; Högberg, H.; Armiento, R.; Rosen, J.; Hultman, L.
2015-05-01
Transition metal diborides are ceramic materials with potential applications as hard protective thin films and electrical contact materials. We investigate the possibility to obtain age hardening through isostructural clustering, including spinodal decomposition, or ordering-induced precipitation in ternary diboride alloys. By means of first-principles mixing thermodynamics calculations, 45 ternary M1(1-x)M2(x)B2 alloys comprising MiB2 (Mi = Mg, Al, Sc, Y, Ti, Zr, Hf, V, Nb, Ta) with AlB2-type structure are studied. In particular Al(1-x)Ti(x)B2 is found to be of interest for coherent isostructural decomposition with a strong driving force for phase separation, while having almost concentration-independent a and c lattice parameters. The results are explained by revealing the nature of the electronic structure in these alloys, and in particular, the origin of the pseudogap at E_F in TiB2, ZrB2, and HfB2.
Sdiri, N; Elhouichet, H; Dhaou, H; Mokhtar, F
2014-01-01
Glass systems of composition 90%[xB2O3 (1-x)P2O5] 10%Fe2O3 (x = 0, 5, 10, 15, 20 mol%) were prepared via a melt-quenching technique. The structure of the glasses was investigated at room temperature by Raman and EPR spectroscopy. Raman studies were performed on these glasses to examine the distribution of the different borate and phosphate structural groups. We noted an increase in the coordination number of the boron atoms from 3 to 4, i.e., the conversion of BO3 triangular structural units into BO4 tetrahedra. The samples were also investigated by means of electron paramagnetic resonance (EPR). The results obtained from the g_eff = 4.28 EPR line are typical of iron (III) occupying substitutional sites. Moreover, the dielectric quantities ε'(ω) and ε″(ω), the imaginary part of the electrical modulus M*(ω), and the loss tan δ were studied; their variation with frequency at room temperature shows a decrease in relaxation intensity with an increase in the concentration of B2O3. In the present work, we found a weak extinction index for our new glass. PMID:23995605
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
NASA Astrophysics Data System (ADS)
Ludwig, Thilo; Pediaditakis, Alexis; Sagawe, Vanessa; Hillebrecht, Harald
2013-08-01
We report on the synthesis and characterisation of Mg3B36Si9C. Black single crystals of hexagonal shape were obtained from the elements at 1600 °C in h-BN crucibles welded into Ta ampoules. The crystal structure (space group R-3m, a=10.0793(13) Å, c=16.372(3) Å, 660 refl., 51 param., R1(F)=0.019; wR2(F2)=0.051) is characterized by a Kagome net of B12 icosahedra, ethane-like Si8 units and disordered SiC dumbbells. Vibrational spectra show typical features of boron-rich borides and Zintl phases. Mg3B36Si9C is stable against HF/HNO3 and conc. NaOH. The micro-hardness is 17.0 GPa (Vickers) and 14.5 GPa (Knoop), respectively. According to simple electron counting rules Mg3B36Si9C is an electron-precise compound. Band structure calculations reveal a band gap of 1.0 eV, in agreement with the black colour. Interatomic distances obtained from the refinement of X-ray data are biased by the disorder of the SiC dumbbell. The most reliable structural parameters were obtained by relaxation calculations. Composition and carbon content were confirmed by WDX measurements. The small but significant carbon content is necessary for structural reasons and is frequently supplied by contamination. The rare earth compounds RE3-xB36Si9C (RE=Y, Dy-Lu) are isotypic. Single crystals were grown from a silicon melt and their structures refined. The partial occupation of the RE sites fits the requirements of an electron-precise composition. According to the displacement parameters, a relaxation should be applied to obtain correct structural parameters.
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
Library of Continuation Algorithms
Energy Science and Technology Software Center (ESTSC)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Ionic conductivity of mixed glass former 0.35Na2O + 0.65[xB2O3 + (1 - x)P2O5] glasses.
Christensen, Randilynn; Olson, Garrett; Martin, Steve W
2013-12-27
The mixed glass former effect (MGFE) is defined as a nonlinear and nonadditive change in the ionic conductivity with changing glass former fraction at constant modifier composition between two binary glass forming compositions. In this study, mixed glass former (MGF) sodium borophosphate glasses, 0.35Na2O + 0.65[xB2O3 + (1 - x)P2O5], 0 ≤ x ≤ 1, have been prepared, and their sodium ionic conductivity has been studied. The ionic conductivity exhibits a strong, positive MGFE that is caused by a corresponding strongly negative nonlinear, nonadditive change in the conductivity activation energy with changing glass former content, x. We describe a successful model of the MGFE in the conductivity activation energy in terms of the underlying short-range order (SRO) phosphate and borate glass former structures present in these glasses. To do this, we have developed a modified Anderson-Stuart (A-S) model to explain the decrease in the activation energy in terms of the atomic level composition dependence (x) of the borate and phosphate SRO structural groups, the Na(+) ion concentration, and the Na(+) mobility. In our revision of the A-S model, we carefully improve the treatment of the cation jump distance and incorporate an effective Madelung constant to account for many body coulomb potential effects. Using our model, we are able to accurately reproduce the composition dependence of the activation energy with a single adjustable parameter, the effective Madelung constant, that changes systematically with composition, x, and varies by no more than 10% from values typical of oxide ceramics. Our model suggests that the decreasing coulombic binding energies that govern the concentration of the mobile cations are sufficiently strong in these glasses to overcome the increasing volumetric strain energies (mobility) caused by strongly increasing glass-transition temperatures combined with strongly decreasing molar volumes of these glasses. The dependence of the coulombic binding
Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.
2013-01-01
Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the ability to account for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have shown that D-LBTE solvers are able to produce dose calculation accuracy comparable to that of Monte Carlo methods, at a speed good enough for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. It summarizes and discusses dosimetric validations for D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances and also the clinical impact on various diseases due to the conversion of dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294
NASA Astrophysics Data System (ADS)
Kim, Yon-Lae; Chung, Jin-Beom; Kim, Jae-Sung; Lee, Jeong-Woo; Kim, Jin-Young; Kang, Sang-Won; Suh, Tae-Suk
2015-11-01
The purpose of this study was to test the feasibility of clinical usage of a flattening-filter-free (FFF) beam for treatment with lung stereotactic ablative radiotherapy (SABR). Ten patients were treated with SABR and a 6-MV FFF beam for this study. All plans using volumetric modulated arc therapy (VMAT) were optimized in the Eclipse treatment planning system (TPS) by using the Acuros XB (AXB) dose calculation algorithm and were delivered by using a Varian TrueBeam™ linear accelerator equipped with a high-definition (HD) multi-leaf collimator. The prescription dose used was 48 Gy in 4 fractions. In order to compare the plan using a conventional 6-MV flattening-filter (FF) beam, the SABR plan was recalculated under the condition of the same beam settings used in the plan employing the 6-MV FFF beam. All dose distributions were calculated by using Acuros XB (AXB, version 11) and a 2.5-mm isotropic dose grid. The cumulative dose-volume histograms (DVH) for the planning target volume (PTV) and all organs at risk (OARs) were analyzed. Technical parameters, such as total monitor units (MUs) and the delivery time, were also recorded and assessed. All plans for target volumes met the planning objectives for the PTV (i.e., V95% > 95%) and the maximum dose (i.e., Dmax < 110%), revealing adequate target coverage for both the 6-MV FF and FFF beams. Differences in the DVHs for the target volumes (PTV and clinical target volume (CTV)) and OARs on the lung SABR plans from the interchange of the treatment beams were small, but showed a marked reduction (52.97%) in the treatment delivery time. The SABR plan with the FFF beam required a larger number of MUs than the plan with the FF beam, and the mean difference in MUs was 4.65%. This study demonstrated that the use of the FFF beam for the lung SABR plan provided better treatment efficiency relative to the 6-MV FF beam. This strategy should be particularly beneficial for high dose conformity to the lung and decreased intra-fraction movements because of
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the recurrence-equation representation of an algorithm. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods with the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE
Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y
2015-06-15
Purpose: Stereotactic body radiotherapy (SBRT) combining transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, anisotropic analytical algorithm (AAA), and Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using the AAA and AXB algorithm, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc-MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom, which included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. Furthermore, compared with the MC calculations, the AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithm underestimated the maximum doses in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can yield an approximately 10% increase in tumor dose without an increase in the dose to normal tissue. Conclusion: The MC method demonstrated a larger increase in the dose in the Lipiodol region than the AAA and AXB algorithm. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
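The abstract does not reproduce the compression step itself, so the following is only an illustrative sketch: assuming the textbook 3-spin basic compression step (as in the Boykin et al. line of work), one step boosts a target spin's polarization from ε to (3ε − ε³)/2, roughly a factor 3/2 per step at low polarization. The function name is ours, and this is not the SOPAC recursion, just its basic building block.

```python
def compress3(eps):
    """Polarization of the target spin after one 3-spin basic
    compression step: eps -> (3*eps - eps**3) / 2.
    (Textbook formula for the basic compression subroutine of
    algorithmic cooling; an illustration, not the SOPAC algorithm.)"""
    return (3.0 * eps - eps**3) / 2.0

# Repeated application shows the ~3/2 gain per step at low polarization:
eps = 0.01
history = []
for _ in range(5):
    eps = compress3(eps)
    history.append(eps)
```

Note how the gain saturates as ε approaches 1, which is why exhaustive algorithms need many more steps near the top of the polarization range.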
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
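The veto algorithm the abstract analyzes can be sketched in its simplest single-channel form. The sketch below assumes a constant overestimate g(t) = c ≥ f(t) of the true emission density, so the trial-scale update can be inverted in closed form; the function names, the constant-overestimate choice, and the decreasing evolution variable are our illustrative assumptions, not the paper's formalism (which also covers cutoffs, second variables, and channel competition).

```python
import math
import random

def veto_sample(f, c, t_start, t_cut, rng=random.random):
    """Single-channel Sudakov veto algorithm with a constant
    overestimate g(t) = c >= f(t) (illustrative sketch).
    Trial scales are drawn from the overestimate's Sudakov factor
    exp(-c * (t_prev - t)); each trial is accepted with probability
    f(t)/c, otherwise evolution continues downward from the rejected
    scale. Returns the accepted emission scale, or None if the
    evolution falls below the cutoff t_cut without an emission."""
    t = t_start
    while True:
        t = t + math.log(rng()) / c   # invert the overestimate's Sudakov
        if t < t_cut:
            return None               # no emission above the cutoff
        if rng() < f(t) / c:
            return t                  # accept; else veto and continue
```

The key property the paper's formalism makes precise is that the vetoed trials do not bias the result: the accepted scales are distributed according to f's Sudakov factor, not c's.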
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
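The first family of subalgorithms described above (shift the key right by a constant, then mask a window of bits, and check that the result is unique per key) can be sketched as a brute-force parameter search. The function name, parameter ranges, and list-of-integers representation below are our illustrative assumptions, not the actual NASA code, which synthesizes executable lookup code from the found parameters.

```python
def find_shift_mask(keys, max_shift=32, mask_bits=8):
    """Search for a (shift, mask) pair such that (k >> shift) & mask
    is unique for every key in `keys` -- a sketch of the shift-and-mask
    subalgorithm family described in the abstract. The resulting pair
    defines a collision-free, constant-time membership map for the
    static key set. Parameter ranges are illustrative."""
    for shift in range(max_shift):
        for mpos in range(max_shift - mask_bits + 1):
            mask = ((1 << mask_bits) - 1) << mpos  # rotating bit window
            values = {(k >> shift) & mask for k in keys}
            if len(values) == len(keys):           # unique per key
                return shift, mask
    return None  # this key set needs one of the other subalgorithms
```

Because the search happens once, at synthesis time, the generated lookup itself is a single shift, mask, and table index, which is what guarantees constant-time membership tests.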
Parallel scheduling algorithms
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
Parallel algorithms and architectures
Albrecht, A.; Jung, H.; Mehlhorn, K.
1987-01-01
Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
A Simple Calculator Algorithm.
ERIC Educational Resources Information Center
Cook, Lyle; McWilliam, James
1983-01-01
The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
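The abstract does not spell out which iteration is demonstrated, but one classic square-root-only scheme rests on the observation that y = sqrt(sqrt(x·y)) has the fixed point y⁴ = x·y, i.e. y³ = x. The sketch below is that scheme under our own naming, not necessarily the article's algorithm.

```python
import math

def cube_root(x, iterations=30):
    """Approximate x**(1/3) for x > 0 using only multiplication and
    square roots: iterate y <- sqrt(sqrt(x * y)), whose fixed point
    satisfies y**4 = x*y, i.e. y**3 = x. (A classic calculator trick;
    the article's exact method is not given in the abstract.)"""
    if x <= 0:
        raise ValueError("x must be positive")
    y = x
    for _ in range(iterations):
        y = math.sqrt(math.sqrt(x * y))
    return y
```

On a square-root-only calculator this is: multiply the current estimate by x, press the square-root key twice, and repeat. The error exponent shrinks by a factor of four per iteration, so a handful of key presses already gives a good approximation.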
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and a population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization. PMID:24967425
NASA Astrophysics Data System (ADS)
Feigin, G.; Ben-Yosef, N.
1983-10-01
A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
Energy Science and Technology Software Center (ESTSC)
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Project resource reallocation algorithm
NASA Technical Reports Server (NTRS)
Myers, J. E.
1981-01-01
A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
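The proportional redistribution step can be sketched as a linear resampling of the per-period cost distribution onto the new schedule length, preserving the total. This is an illustrative sketch with a hypothetical `rescale_distribution` helper; the PACE card format itself is not modeled.

```python
def rescale_distribution(baseline, new_length):
    """Linearly expand or contract a per-period cost list to a new schedule
    length, preserving total cost: each new period accumulates the fraction
    of each old period that it overlaps in rescaled time."""
    old_length = len(baseline)
    scale = old_length / new_length          # old periods spanned per new period
    out = []
    for i in range(new_length):
        lo, hi = i * scale, (i + 1) * scale  # span in old-period coordinates
        total = 0.0
        j = int(lo)
        while j < hi and j < old_length:
            overlap = min(hi, j + 1) - max(lo, j)
            total += baseline[j] * overlap
            j += 1
        out.append(total)
    return out
```

Expanding a three-period distribution to six periods splits each period's cost in half; contracting merges periods, with totals unchanged either way.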
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
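The two conversions at the heart of the procedure — a concentration-weighted emissivity mixing and a brightness-temperature-to-emissivity conversion — can be written compactly. The formulas and the sample values below are illustrative sketches of linear mixing and a simple Tb/Ts ratio, not the calibrated coefficients of the Bootstrap algorithm itself.

```python
def effective_emissivity(ice_conc, e_ice, e_water):
    """Concentration-weighted surface emissivity (linear mixing of ice and
    open-water emissivities at a given channel)."""
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def brightness_to_emissivity(tb, surface_temp):
    """Convert a brightness temperature to an emissivity given the physical
    surface temperature (simple ratio; atmospheric terms neglected)."""
    return tb / surface_temp
```

For example, an 80% ice concentration with hypothetical emissivities of 0.92 (ice) and 0.45 (water) mixes to an effective emissivity of about 0.83.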
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the spectral estimate is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
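The standard Maximum Entropy spectral estimator is Burg's method: fit autoregressive coefficients by minimizing forward and backward prediction errors, then read the spectrum off the AR polynomial. A minimal sketch (not the FORTRAN 77 code described above):

```python
import numpy as np

def burg_psd(x, order, freqs):
    """Maximum-entropy (Burg) AR spectral estimate: compute reflection
    coefficients by Burg's recursion, then evaluate 1/|A(f)|^2 on a grid
    of frequencies given in cycles/sample."""
    ef = np.asarray(x, float)[1:]    # forward prediction errors
    eb = np.asarray(x, float)[:-1]   # backward prediction errors
    a = np.array([1.0])              # prediction-error filter coefficients
    for _ in range(order):
        k = -2.0 * ef.dot(eb) / (ef.dot(ef) + eb.dot(eb))  # reflection coeff
        a = np.append(a, 0.0) + k * np.append(0.0, a[::-1])
        # tuple assignment: both updates use the pre-update ef and eb
        ef, eb = ef[1:] + k * eb[1:], eb[:-1] + k * ef[:-1]
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(order + 1)))
    return 1.0 / np.abs(z @ a) ** 2
```

The method's well-known strength, noted in the abstract's resolution criterion, is that a low-order AR fit resolves a sinusoid sharply even from a short record.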
Programming parallel vision algorithms
Shapiro, L.G.
1988-01-01
Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
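The classic sequential baseline that the paper improves on is the greedy 1/2-approximation: scan edges in non-increasing weight order and take an edge whenever both endpoints are still free. The sketch below shows that baseline, not the paper's locally greedy multithreaded algorithm.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching.
    `edges` is a list of (weight, u, v) tuples; returns the matched edges.
    The matching's weight is guaranteed to be at least half the optimum."""
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching
```

On a path a-b (3), b-c (4), c-d (3), greedy takes only the middle edge (weight 4) while the optimum takes the two outer edges (weight 6) — exactly the factor the 1/2 bound allows.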
NASA Astrophysics Data System (ADS)
Vassiliev, Oleg N.; Wareing, Todd A.; McGhee, John; Failla, Gregory; Salehpour, Mohammad R.; Mourtada, Firas
2010-02-01
A new grid-based Boltzmann equation solver, Acuros™, was developed specifically for performing accurate and rapid radiotherapy dose calculations. In this study we benchmarked its performance against Monte Carlo for 6 and 18 MV photon beams in heterogeneous media. Acuros solves the coupled Boltzmann transport equations for neutral and charged particles on a locally adaptive Cartesian grid. The Acuros solver is an optimized rewrite of the general purpose Attila© software, and for comparable accuracy levels, it is roughly an order of magnitude faster than Attila. Comparisons were made between Monte Carlo (EGSnrc) and Acuros for 6 and 18 MV photon beams impinging on a slab phantom comprising tissue, bone and lung materials. To provide an accurate reference solution, Monte Carlo simulations were run to a tight statistical uncertainty (σ ≈ 0.1%) and fine resolution (1-2 mm). Acuros results were output on a 2 mm cubic voxel grid encompassing the entire phantom. Comparisons were also made for a breast treatment plan on an anthropomorphic phantom. For the slab phantom in regions where the dose exceeded 10% of the maximum dose, agreement between Acuros and Monte Carlo was within 2% of the local dose or 1 mm distance to agreement. For the breast case, agreement was within 2% of local dose or 2 mm distance to agreement in 99.9% of voxels where the dose exceeded 10% of the prescription dose. Elsewhere, in low dose regions, agreement for all cases was within 1% of the maximum dose. Since all Acuros calculations required less than 5 min on a dual-core two-processor workstation, it is efficient enough for routine clinical use. Additionally, since Acuros calculation times are only weakly dependent on the number of beams, Acuros may ideally be suited to arc therapies, where current clinical algorithms may incur long calculation times.
Energy Science and Technology Software Center (ESTSC)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and works with thermal or visual tracking as well as other tracking methods such as radio frequency tags.
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
General cardinality genetic algorithms
Koehler; Bhattacharyya; Vose
1997-01-01
A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparisons of results to the binary case are provided. PMID:10021767
ERIC Educational Resources Information Center
Drake, Michael
2011-01-01
One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
NASA Astrophysics Data System (ADS)
Misse, Patrick R. N.; Mbarki, Mohammed; Fokwa, Boniface P. T.
2012-08-01
Powder samples and single crystals of the new complex boride series Crx(Rh1-yRuy)7-xB3 (x=0.88-1; y=0-1) have been synthesized by arc-melting the elements under purified argon atmosphere on a water-cooled copper crucible. The products, which have metallic luster, were structurally characterized by single-crystal and powder X-ray diffraction as well as EDX measurements. Within the whole solid solution range the hexagonal Th7Fe3 structure type (space group P63mc, no. 186, Z=2) was identified. Single-crystal structure refinement results indicate the presence of chromium at two sites (6c and 2b) of the available three metal Wyckoff sites, with a pronounced preference for the 6c site. An unexpected Rh/Ru site preference was found in the Ru-rich region only, leading to two different magnetic behaviors in the solid solution: The Rh-rich region shows a temperature-independent (Pauli) paramagnetism whereas an additional temperature-dependent paramagnetic component is found in the Ru-rich region.
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
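The bang-off-bang structure can be illustrated in one dimension: accelerate at the limit, coast, then decelerate at the limit. The sketch below computes the three segment durations for a rest-to-rest maneuver under hypothetical `a_max`/`v_max` bounds; it is a toy illustration of the trajectory class, not the onboard ETP solver or its look-up table.

```python
def bang_off_bang(distance, a_max, v_max):
    """Segment times (accel, coast, decel) for a 1-D rest-to-rest
    bang-off-bang profile with acceleration limit a_max and speed
    limit v_max."""
    t_ramp = v_max / a_max                 # time to reach the speed limit
    d_ramp = 0.5 * a_max * t_ramp**2       # distance covered by each ramp
    if 2 * d_ramp >= distance:             # v_max never reached: pure bang-bang
        t_ramp = (distance / a_max) ** 0.5
        return t_ramp, 0.0, t_ramp
    t_coast = (distance - 2 * d_ramp) / v_max
    return t_ramp, t_coast, t_ramp
```

Because the whole trajectory reduces to a few parameters like these, the optimization can be solved offline and tabulated, which is what makes the real-time look-up approach described above feasible.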
Tomasz Plawski, J. Hovater
2010-09-01
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
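A minimal example of the linear (centralized-counter) family discussed above is a sense-reversing barrier: each arrival decrements a shared count, and the last arrival resets the count and flips a sense flag that the others wait on. This is an illustrative sketch in Python threads, not the Flex/32 implementation.

```python
import threading

class SenseBarrier:
    """Central linear barrier with sense reversal, reusable across phases."""
    def __init__(self, n):
        self.n = n
        self.count = n
        self.sense = False
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            my_sense = not self.sense
            self.count -= 1
            if self.count == 0:          # last arrival releases everyone
                self.count = self.n
                self.sense = my_sense
                self.cond.notify_all()
            else:
                while self.sense != my_sense:
                    self.cond.wait()
```

Sense reversal is what makes the barrier safely reusable: a fast thread re-entering the next phase waits on the opposite sense, so it cannot race past stragglers from the previous phase.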
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
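The MWUA rule at the center of the paper's correspondence is simple to state in code: each expert (here, each allele in the genetic reading) keeps a weight that is multiplied each round by a factor depending on its gain. The sketch below is the generic algorithm from the computer-science literature, not the population-genetics equations themselves.

```python
def mwua(gains, eta=0.1):
    """Multiplicative weights update: w_i <- w_i * (1 + eta * g_i) each round.
    `gains[t][i]` is expert i's gain in round t, assumed in [-1, 1].
    Returns the final normalized weight vector."""
    n = len(gains[0])
    w = [1.0] * n
    for round_gains in gains:
        w = [wi * (1.0 + eta * g) for wi, g in zip(w, round_gains)]
    total = sum(w)
    return [wi / total for wi in w]
```

Run long enough, the weights concentrate on consistently high-gain experts, while the normalization keeps a (shrinking but nonzero) share on the others — the performance-versus-entropy tradeoff the abstract connects to the maintenance of diversity.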
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1989-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.
NASA Astrophysics Data System (ADS)
Deprit, André; Palacián, Jesús; Deprit, Etienne
2001-03-01
The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ0 + ɛℋ1 + ... of a small parameter ɛ, normalization constructs a map which converts the principal part ℋ0 into an integral of the transformed system — relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
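The parameters the preprocessor tunes appear explicitly in any GA skeleton. Below is a minimal real-coded GA for unconstrained maximization with tournament selection, blend crossover, and Gaussian mutation; the parameter defaults and the helper name are illustrative, not the preprocessor-selected values.

```python
import random

def genetic_maximize(fitness, bounds, pop_size=40, generations=60,
                     crossover_p=0.9, mutation_p=0.1, seed=1):
    """Minimal real-coded GA: tournament selection of parents, blend
    (averaging) crossover, Gaussian mutation, clamping to the search space.
    Returns the fittest individual in the final population."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # tournament of 3 picks each parent
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = 0.5 * (p1 + p2) if rng.random() < crossover_p else p1
            if rng.random() < mutation_p:
                child += rng.gauss(0.0, 0.1 * (hi - lo))
            nxt.append(min(hi, max(lo, child)))  # clamp to search space
        pop = nxt
    return max(pop, key=fitness)

# maximize -(x-2)^2 over [-10, 10]; optimum at x = 2
best = genetic_maximize(lambda x: -(x - 2.0) ** 2, (-10.0, 10.0))
```

Every argument here — population size, search space, crossover and mutation probabilities, the fitness criterion — is one of the knobs the abstract argues should be set from problem knowledge rather than guessed.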
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
NASA Astrophysics Data System (ADS)
Reda, Ibrahim; Andreas, Afshin
2015-04-01
The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
NASA Astrophysics Data System (ADS)
Nardi, Jerry
The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar load ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L
2013-12-01
ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766
Developing dataflow algorithms
Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)
1991-01-01
Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine level phenomena such as the effect that global communication time may have on the computation are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, whose characteristics are computational parallelism and the data dependences between the butterfly shuffles.
Evaluating super resolution algorithms
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun
2011-01-01
This study intends to establish a sound testing and evaluation methodology based upon the human visual characteristics for appreciating the image restoration accuracy; in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - such as iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and a non-uniform interpolation outperformed the others for an ideal situation, while restoration based methods appear more accurate to the HR image in a real world case where any prior information about the blur kernel remains unknown. However, the noise-added-image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation CIEDE2000 was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.
Design of robust systolic algorithms
Varman, P.J.; Fussell, D.S.
1983-01-01
A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.
High-performance combinatorial algorithms
Pinar, Ali
2003-10-31
Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.
Multipartite entanglement in quantum algorithms
Bruss, D.; Macchiavello, C.
2011-05-15
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
Algorithm for Constructing Contour Plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1984-01-01
A general computer algorithm was developed for construction of contour plots. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme: the points in the plane are connected by straight-line segments to form a set of triangles. The program is written in FORTRAN IV.
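The interpolation step of such a triangle-based contouring scheme can be sketched as follows (a Python illustration, not the original FORTRAN IV program): within each triangle, a contour level crosses an edge wherever the vertex values bracket it, and the crossing point follows by linear interpolation.

```python
def edge_crossing(p1, v1, p2, v2, level):
    """Point on edge (p1, p2) where the linearly interpolated value
    equals `level`, or None if the edge does not cross the level."""
    if (v1 - level) * (v2 - level) > 0 or v1 == v2:
        return None
    t = (level - v1) / (v2 - v1)
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))

def triangle_contour(tri, vals, level):
    """Contour segment inside one triangle: under linear interpolation
    the level line crosses either two edges or none (crossings exactly
    at a shared vertex would need extra care in a production code)."""
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        c = edge_crossing(tri[i], vals[i], tri[j], vals[j], level)
        if c is not None:
            pts.append(c)
    return pts
```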
Polynomial Algorithms for Item Matching.
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Jones, Douglas H.
1992-01-01
Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
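The half-interval search mentioned above is the classical bisection method; a minimal Python version (the article's own program listing is not reproduced here) looks like this:

```python
def half_interval_search(f, lo, hi, tol=1e-10):
    """Half-interval (bisection) search: repeatedly halve an interval
    whose endpoints bracket a sign change of f, keeping the half that
    still contains the root."""
    assert f(lo) * f(hi) <= 0, "endpoints must bracket a root"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid    # sign change lies in the left half
        else:
            lo = mid    # sign change lies in the right half
    return (lo + hi) / 2
```

The loop invariant (f changes sign on [lo, hi]) is the property a mathematical verification of the algorithm would establish.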
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Efficient multicomponent fuel algorithm
NASA Astrophysics Data System (ADS)
Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.
2003-03-01
We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The minimax representation min t s.t. f_i(x) - t <= 0 for all i is examined. An active set strategy is designed that partitions the constraint functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), which combines both iteration and bounded inference. Algorithm IJGP belongs to the class of generalized belief propagation algorithms, a framework that has allowed connections with approximate algorithms from statistical physics, and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms, on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well-known classes of constraint propagation schemes. PMID:20740057
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, and the barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
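To make the single-neuron training step concrete, here is an illustrative Python sketch of the pocket algorithm, one of the stable perceptron variants mentioned above: it runs ordinary perceptron updates but keeps in a "pocket" the best weight vector seen so far, which is what constructive methods rely on when the data are not linearly separable.

```python
import random

def pocket_train(samples, epochs=200, seed=0):
    """Pocket algorithm sketch: perceptron updates on randomly drawn
    samples, retaining the weight vector that classifies the most
    training samples correctly. `samples` is a list of (x, y) pairs
    with x a tuple of features and y in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(samples[0][0])
    w = [0.0] * (dim + 1)               # last entry is the bias

    def predict(w, x):
        s = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
        return 1 if s >= 0 else -1

    def score(w):
        return sum(predict(w, x) == y for x, y in samples)

    pocket, best = list(w), score(w)
    for _ in range(epochs):
        x, y = rng.choice(samples)
        if predict(w, x) != y:
            # standard perceptron update on a misclassified sample
            w = [wi + y * xi for wi, xi in zip(w, x)] + [w[-1] + y]
            s = score(w)
            if s > best:                # keep the best weights so far
                pocket, best = list(w), s
    return pocket, best
```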
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time; the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively, and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: one identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
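The binarization step can be illustrated with a small sketch. The onshore sector below is a hypothetical example (the real threshold depends on the coastline geometry at each station), and the pointwise agreement score is only a stand-in for the contour-based comparison CEM actually performs:

```python
import numpy as np

def binarize_wind(direction_deg, onshore_from=180.0, onshore_to=360.0):
    """Map wind direction (degrees) to the CEM binary convention:
    1 = onshore, 0 = offshore. The onshore sector here is an assumed
    example for a north-south coastline."""
    d = np.asarray(direction_deg) % 360.0
    return ((d >= onshore_from) & (d < onshore_to)).astype(int)

def agreement(D, d):
    """Fraction of grid cells where forecast D and observation d agree.
    (The full CEM compares contours of the binary fields, not cells.)"""
    D, d = np.asarray(D), np.asarray(d)
    return (D == d).mean()
```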
NASA Astrophysics Data System (ADS)
Owen, Mark W.; Stubberud, Allen R.
2003-12-01
Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion mode, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.
Lee, M; Kang, S; Lee, S; Suh, T; Lee, J; Park, J; Park, H; Lee, B
2014-06-01
Purpose: Implant-supported dentures seem particularly appropriate for the predicament of becoming edentulous, and cancer patients are no exception. As the number of people having dental implants has increased across different ages, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of the metal (streak and dark) artifacts, and to evaluate the dosimetric effect caused by dental implants in CT images, using a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. By incorporating various clinical cases, the phantom was made to closely resemble a human. The developed phantom supports two configurations: (i) closed mouth and (ii) opened mouth. RapidArc plans of 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions was prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA), and the progressive resolution optimizer were used in dose optimization and calculation. Results: In both the closed- and opened-mouth phantoms, because dark artifacts formed extensively around the metal implants, dose variation was relatively higher than that caused by streak artifacts. When the PTV was delineated on the dark regions or large streak artifact regions, a maximum dose error of 7.8% and an average difference of 3.2% were observed. The averaged minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study showed that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity portion when compared against the measurements.
Park, J; Park, H; Lee, J; Kang, S; Lee, M; Suh, T; Lee, B
2014-06-01
Purpose: The dosimetric effect and discrepancies arising from the rectum definition method, and the dose perturbation by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: With an inflated ERB of average diameter 4.5 cm and air volume 100 cc in place, Rwall doses were predicted by pencil-beam convolution (PBC), the anisotropic analytic algorithm (AAA), and Acuros XB (AXB) with its material assignment function. The errors in dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a featured rectal phantom allowing insertion of rolled-up gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified with reconstructed predicted rectal wall dose maps. Dose errors and their extent at various dose levels were evaluated with estimated rectal toxicity. Results: While AXB showed an insignificant difference in target dose coverage, Rwall doses were underestimated by up to 20% when dose optimization used the Rwhole rather than the Rwall, at all dose levels except the maximum dose. When dose optimization for the Rwall was applied, the Rwall doses showed dose errors of less than 3% between the dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% in PBC. Dose optimization for the Rwhole caused dose differences in the Rwall, especially at intermediate doses. Conclusion: Dose optimization for the Rwall can be suggested for more accurate prediction of the rectal wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, fast multipole methods (FMM), and nearest-neighbor searching techniques were used. These implementations apply when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
Fighting Censorship with Algorithms
NASA Astrophysics Data System (ADS)
Mahdian, Mohammad
In countries such as China or Iran, where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people, k of whom are adversaries. We will also discuss how trust networks can be used in this context.
Trial encoding algorithms ensemble.
Cheng, Lipin Bill; Yeh, Ren Jye
2013-01-01
This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing up randomizations and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating un-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization. One simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding, but can be conveniently implemented without the use of a tree structure and is improvable with byte concatenation. PMID:27057475
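As a loose illustration of the "running summation plus randomization" idea (the paper's exact construction differs, and nothing here is cryptographically secure), a position-weighted byte summation might look like:

```python
def running_sum_hash(data, seed=0x5A17, mod=2 ** 16):
    """Toy running-summation (addition chain) hash: accumulate the data
    bytes with a position weight so that permuting the input changes
    the digest. Illustrative only; not a secure hash."""
    h = seed
    for i, b in enumerate(data):
        h = (h + b * (i + 1)) % mod   # position-weighted summation
    return h
```

An unweighted byte sum would be invariant under reordering; the position weight is the minimal extra mixing needed to detect transpositions.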
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
Ozone Uncertainties Study Algorithm (OUSA)
NASA Technical Reports Server (NTRS)
Bahethi, O. P.
1982-01-01
An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies for a set of input parameters to a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, ClX, and NOx, besides varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work in the first quarter covered initial development of generalized forward-model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of the forward-model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ² + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722
Variable Selection using MM Algorithms
Hunter, David R.; Li, Runze
2009-01-01
Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
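A minimal sketch of the perturb-then-MM idea for the lasso penalty, assuming the standard least-squares loss: each |β_j| is majorized by a quadratic tangent at the current iterate (with a small ε keeping the penalty differentiable), so each MM step reduces to a ridge-type linear solve. The specific loss and penalty here are illustrative; the article covers a broad class of penalties.

```python
import numpy as np

def mm_lasso(X, y, lam=1.0, eps=1e-6, iters=200):
    """MM algorithm for lasso-penalized least squares: majorize each
    |beta_j| by beta_j^2 / (2(|beta_j_k| + eps)) + const at the current
    iterate beta_k, then minimize the resulting quadratic surrogate,
    which is a ridge-type linear system."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # unpenalized start
    for _ in range(iters):
        # weights of the quadratic majorizer at the current iterate
        D = np.diag(lam / (np.abs(beta) + eps))
        beta = np.linalg.solve(X.T @ X + D, X.T @ y)
    return beta
```

Each iteration decreases the (perturbed) penalized objective, the monotonicity property MM shares with EM.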
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a stochastic search and optimization method based on natural selection and the genetic mechanisms of living beings. In recent years, because of its potential for solving complicated problems and its successful application in industrial engineering, the genetic algorithm has received wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model of routing selection communication, and designs and implements a new routing selection algorithm based on the genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time and with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
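The descent law can be sketched in a few lines. Commanding the vertical speed so that the time-to-collision h/(-dh/dt) stays constant makes speed proportional to height, so both decay together; the numbers below are illustrative, and the model ignores the sensor and actuator dynamics treated in the paper.

```python
def simulate_tau_landing(h0=10.0, tau=2.0, dt=0.01, steps=1000):
    """Constant time-to-collision descent: commanding h' = -h / tau
    keeps tau = h / (-h') fixed, so descent speed is proportional to
    height and both decay smoothly toward zero (idealized sketch)."""
    h = h0
    traj = [h]
    for _ in range(steps):
        h += -h / tau * dt      # forward-Euler step of h' = -h / tau
        traj.append(h)
    return traj
```

The resulting trajectory is an exponential decay: the vehicle never commands a hard touchdown speed, which is why a cushioning system handles the small residual velocity.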
Berry, K.; Dayton, S.
1996-10-28
Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated in its time-to-market requirements and was in need of performance improvements. To compound the problems with the existing system, assurance of the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
Cascade Error Projection Learning Algorithm
NASA Technical Reports Server (NTRS)
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
The Chopthin Algorithm for Resampling
NASA Astrophysics Data System (ADS)
Gandy, Axel; Lau, F. Din-Houn
2016-08-01
Resampling is a standard step in particle filters and, more generally, in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods, the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight; hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
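A simplified chop-and-thin resampler conveys the idea; this is an illustrative sketch of the principle only, not the authors' exact procedure or their released implementations, and the thresholds chosen here are one plausible way to enforce the bound:

```python
import random

def chopthin_like(particles, weights, ratio_bound=4.0, rng=random):
    """Illustrative chop-and-thin resampler (assumes ratio_bound >= 2):
    chop heavy particles into near-equal pieces, thin light particles by
    random keep/drop (preserving total weight in expectation), so the
    final max/min weight ratio stays within ratio_bound."""
    mean_w = sum(weights) / len(weights)
    hi = mean_w * ratio_bound ** 0.5   # chop anything heavier than this
    lo = mean_w / ratio_bound ** 0.5   # thin anything lighter than this
    out_p, out_w = [], []
    for p, w in zip(particles, weights):
        if w > hi:                     # chop: k pieces, each in (hi/2, hi]
            k = int(w // hi) + 1
            out_p += [p] * k
            out_w += [w / k] * k
        elif w < lo:                   # thin: keep with probability w/lo
            if rng.random() < w / lo:
                out_p.append(p)
                out_w.append(lo)
        else:                          # weight already within bounds
            out_p.append(p)
            out_w.append(w)
    return out_p, out_w

random.seed(0)
ps, ws = chopthin_like(list(range(8)),
                       [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 8.0])
# heavy particle is split into equal-weight copies, light ones mostly dropped
```

Unlike multinomial resampling, the surviving particles keep unequal weights; only the weight ratio is bounded, which is what implies the lower bound on effective sample size.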
CORDIC algorithms in four dimensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc; Hsiao, Shen-Fu
1990-11-01
CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm: (x^2 + y^2)^(1/2) or (x^2 - y^2)^(1/2). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations quickly becomes small as the dimension of the space increases. Thus, in spaces of dimension 5 or more there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.
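For reference, the classical two-dimensional circular CORDIC iteration that the paper extends to four dimensions can be sketched as follows (a floating-point illustration; in hardware the multiplications by 2^-i are bit shifts and the arctangents come from a small lookup table):

```python
import math

# Classic 2-D circular CORDIC in rotation mode: rotate (1, 0) by `angle`
# using only shift-and-add style steps, yielding (cos(angle), sin(angle)).

def cordic_rotate(angle, n=32):
    """Return (cos(angle), sin(angle)) via CORDIC; angle in [-pi/2, pi/2]."""
    # precomputed gain K = prod 1/sqrt(1 + 2^-2i) compensates the
    # magnitude growth of the unnormalized micro-rotations
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0          # steer residual angle to zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)        # table value atan(2^-i)
    return x * K, y * K

c, s = cordic_rotate(0.5)
# c ≈ cos(0.5), s ≈ sin(0.5)
```

Each micro-rotation preserves the circular norm up to the known gain K, which is exactly the property the higher-dimensional extensions generalize.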
Cubit Adaptive Meshing Algorithm Library
Energy Science and Technology Software Center (ESTSC)
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
An Artificial Immune Univariate Marginal Distribution Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping
Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principle of general artificial immune algorithm. Experimental results on deceptive function of order 3 show that the proposed hybrid algorithm can get more building blocks (BBs) than the UMDA.
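A minimal UMDA on the OneMax toy problem illustrates the baseline the hybrid builds on (a sketch only; the immune-inspired diversity and memory mechanisms of the proposed hybrid are not reproduced, and all parameter values are illustrative):

```python
import random

# Minimal UMDA: sample a population from independent per-bit marginals,
# select the fittest individuals, and re-estimate the marginals from them.
# Fitness here is OneMax (number of ones).

def umda_onemax(n_bits=20, pop=100, elite=50, gens=60, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # univariate marginal model
    best = 0
    for _ in range(gens):
        popn = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                for _ in range(pop)]
        popn.sort(key=sum, reverse=True)    # rank by fitness
        best = max(best, sum(popn[0]))
        sel = popn[:elite]
        # re-estimate each marginal from the selected individuals,
        # clamped away from 0/1 to retain some sampling diversity
        p = [min(0.95, max(0.05, sum(ind[i] for ind in sel) / elite))
             for i in range(n_bits)]
    return best

best = umda_onemax()
```

Because UMDA models each bit independently, it struggles on deceptive functions with linked bits, which is exactly where the paper's immune-algorithm hybridization is claimed to recover more building blocks.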
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
The Dropout Learning Algorithm
Baldi, Pierre; Sadowski, Peter
2014-01-01
Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
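The ensemble-averaging property is easiest to see in the linear case, where scaling the weights by the keep probability reproduces the expectation over Bernoulli gating masks exactly; the following is a self-contained numerical sketch (all names and values illustrative):

```python
import random

# One linear unit with Bernoulli dropout on its inputs. Averaging the
# gated output over many masks approaches the deterministic test-time
# rule "scale weights by the keep probability p".

def dropout_forward(x, w, p, rng):
    """Linear unit with dropout on inputs; each input kept with prob p."""
    mask = [1 if rng.random() < p else 0 for _ in x]
    return sum(wi * mi * xi for wi, mi, xi in zip(w, mask, x))

def expected_output(x, w, p):
    """Test-time approximation: weights scaled by p (the ensemble mean)."""
    return sum(p * wi * xi for wi, xi in zip(w, x))

rng = random.Random(0)
x, w, p = [1.0, 2.0, -1.0], [0.5, -0.3, 0.8], 0.5
n = 100000
avg = sum(dropout_forward(x, w, p, rng) for _ in range(n)) / n
# avg ≈ expected_output(x, w, p) = -0.45, up to Monte Carlo error
```

In non-linear logistic networks this equality becomes the approximation via normalized geometric means analyzed in the paper.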
Wavelet periodicity detection algorithms
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Pfander, Goetz E.
1998-10-01
This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real-time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low-frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.
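The GA mechanics described here (a population of candidate schedules, fitness-based selection, probabilistic recombination of good intermediate solutions) can be sketched on a toy single-machine sequencing problem. This is an illustration of the method only, not the Boeing PPS/POIC system, and it omits the precedence and resource constraints the abstract discusses:

```python
import random

# Toy GA for job sequencing: minimize total flowtime of a single-machine
# sequence using order crossover and swap mutation, with elitist survival.

def flowtime(seq, dur):
    t, total = 0, 0
    for j in seq:
        t += dur[j]
        total += t
    return total

def ga_schedule(dur, pop_size=40, gens=200, seed=3):
    rng = random.Random(seed)
    n = len(dur)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: flowtime(s, dur))   # fitter = lower flowtime
        survivors = pop[:pop_size // 2]            # elitism
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # order-crossover style
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            i, k = rng.sample(range(n), 2)         # swap mutation
            child[i], child[k] = child[k], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: flowtime(s, dur))

dur = [7, 2, 9, 4, 1, 5]
best = ga_schedule(dur)
```

For this objective the shortest-processing-time order is optimal (total flowtime 70 here), so the GA's result can be checked against a known answer; real scheduling fitness functions would also score precedence and resource feasibility.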
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., injecting simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Cluster algorithms and computational complexity
NASA Astrophysics Data System (ADS)
Li, Xuenan
Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied by using Computational Complexity Theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other, together with global flips, are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero, and it is therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown to be not complex, and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.
Routing Algorithm Exploits Spatial Relations
NASA Technical Reports Server (NTRS)
Okino, Clayton; Jennings, Esther
2004-01-01
A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
Linearization algorithms for line transfer
Scott, H.A.
1990-11-06
Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.
Fibonacci Numbers and Computer Algorithms.
ERIC Educational Resources Information Center
Atkins, John; Geist, Robert
1987-01-01
The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
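One of the best-known algorithmic appearances of the sequence, and a standard textbook example of the kind surveyed in such articles, is Lamé's theorem: consecutive Fibonacci numbers are the worst case for Euclid's gcd algorithm. A short sketch:

```python
# Consecutive Fibonacci numbers force the maximum number of division
# steps in Euclid's algorithm (Lamé's theorem): gcd(F(n+1), F(n))
# takes n-1 steps, because each remainder is the previous Fibonacci number.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def gcd_steps(a, b):
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# gcd(F(11), F(10)) = gcd(89, 55) takes 9 division steps
steps = gcd_steps(fib(11), fib(10))
```

Lamé's bound is what makes Euclid's algorithm provably logarithmic in its inputs, one of the earliest worst-case analyses in computer science.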
An onboard star identification algorithm
NASA Astrophysics Data System (ADS)
Ha, Kong; Femiano, Michael
The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.
Scheduling Jobs with Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ferrolho, António; Crisóstomo, Manuel
Most scheduling problems are NP-hard: the time required to solve the problem optimally increases exponentially with the size of the problem. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GAs) have been successfully used to solve scheduling problems, as shown by the growing number of papers. GAs are among the most efficient algorithms known for solving scheduling problems. But when a GA is applied to scheduling problems, various crossover and mutation operators are applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.
Recursive Algorithm For Linear Regression
NASA Technical Reports Server (NTRS)
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
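The flavor of such recursion can be sketched for a straight-line fit, where each new data point updates running sums and the coefficients are revised without refitting from scratch (a sketch in the same spirit only; the order-recursion of the NASA algorithm, which steps the model order up rather than the data count, is not reproduced here):

```python
# Recursive straight-line least squares: each new (x, y) point updates
# five running sums, from which the current coefficients follow in O(1).

class RecursiveLine:
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def coeffs(self):
        """Return (intercept, slope) of the current least-squares line."""
        d = self.n * self.sxx - self.sx ** 2
        slope = (self.n * self.sxy - self.sx * self.sy) / d
        intercept = (self.sy - slope * self.sx) / self.n
        return intercept, slope

r = RecursiveLine()
for x in range(5):
    r.add(x, 2.0 * x + 1.0)   # exact line y = 2x + 1
a, b = r.coeffs()
# a ≈ 1.0, b ≈ 2.0
```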
Algorithmic complexity of a protein
NASA Astrophysics Data System (ADS)
Dewey, T. Gregory
1996-07-01
The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.
An onboard star identification algorithm
NASA Technical Reports Server (NTRS)
Ha, Kong; Femiano, Michael
1993-01-01
The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.
Cascade Error Projection: A New Learning Algorithm
NASA Technical Reports Server (NTRS)
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
Genetic algorithms as discovery programs
Hilliard, M.R.; Liepins, G.
1986-01-01
Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
Fully relativistic lattice Boltzmann algorithm
Romatschke, P.; Mendoza, M.; Succi, S.
2011-09-15
Starting from the Maxwell-Juettner equilibrium distribution, we develop a relativistic lattice Boltzmann (LB) algorithm capable of handling ultrarelativistic systems with flat, but expanding, spacetimes. The algorithm is validated through simulations of a quark-gluon plasma, yielding excellent agreement with hydrodynamic simulations. The present scheme opens the possibility of transferring the recognized computational advantages of lattice kinetic theory to the context of both weakly and ultrarelativistic systems.
NASA Astrophysics Data System (ADS)
El-Guibaly, Fayez; Sabaa, A.
1996-10-01
In this paper, we introduce modifications on the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
Localization algorithm for acoustic emission
NASA Astrophysics Data System (ADS)
Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.
2010-01-01
In this paper, an iterative algorithm for localization of acoustic emission (AE) sources is presented. The main advantage of the system is that it is independent of the researcher's 'ability' in setting the signal level that triggers detection. The system was tested on cylindrical samples with an AE source at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classic localization algorithms (~1 cm).
CORDIC Algorithms: Theory And Extensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc
1989-11-01
Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.
Multithreaded Algorithms for Graph Coloring
Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex
2012-10-21
Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access patterns, and a high ratio of data access to computation are among the chief reasons for the challenge. The performance implication of these features is exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. We present in particular two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
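The serial greedy baseline both parallel algorithms are measured against is simple to state (a sketch; the speculative parallel versions color vertices concurrently with this same rule, then detect and re-color conflicting neighbors in a follow-up iteration):

```python
# Serial greedy graph coloring: visit vertices in some order and give
# each the smallest color not already used by a colored neighbor.

def greedy_color(adj):
    """adj: dict node -> set of neighbors; returns node -> color (0-based)."""
    color = {}
    for v in adj:                        # any vertex ordering works
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                 # smallest color absent among neighbors
            c += 1
        color[v] = c
    return color

# 4-cycle: two colors suffice
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
col = greedy_color(adj)
```

The greedy rule never needs more than one color beyond the maximum degree, and the parallel versions aim to match its color count while scaling the vertex loop across threads.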
Myers, Timothy
2006-09-01
The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principle that a starting configuration is randomly selected from within the parameter space and the algorithm tests other configurations with the goal of finding the globally optimal configuration.
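The conventional SA loop described above can be sketched as follows (illustrative objective and parameter values; the recursive-branching parallelization that constitutes the innovation is not shown):

```python
import math, random

# Conventional simulated annealing: always accept improving moves, accept
# worsening moves with Boltzmann probability exp(-delta/T), and cool T
# geometrically so the search gradually becomes greedy.

def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.995,
                        iters=5000, seed=7):
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = objective(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx      # track best-ever configuration
        t *= cooling                     # geometric cooling schedule
    return best, fbest

# toy multimodal 1-D objective: quadratic bowl plus a ripple
f = lambda x: (x - 2.0) ** 2 + math.sin(5 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, fbest = simulated_annealing(f, step, x0=-5.0)
```

Early on, the high temperature lets the walk escape the ripple's local minima; as T decays, the chain settles into the deepest basin it has found.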
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is suggesting possible ways to attack the problem.
Chang, C.Y.
1986-01-01
New results on efficient forms of decoding convolutional codes based on the Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi'an Science and Technology Plan, China (Grant No. CXY1350(4)).
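A minimal sketch of the rotation-invariance idea behind the feature vector: the logarithms of the plane distances from a navigation star to its neighbors do not change when the whole image rotates. The sorting step and the toy coordinates are illustrative assumptions, not the paper's exact construction.

```python
import math

def log_distance_features(nav_star, neighbors):
    """Feature vector for a navigation star: sorted logarithms of the plane
    distances to its neighbor stars.  Rotating the image leaves pairwise
    distances (hence the vector) unchanged -- the rotation invariance
    exploited above."""
    feats = [math.log(math.hypot(x - nav_star[0], y - nav_star[1]))
             for x, y in neighbors]
    return sorted(feats)

def rotate(p, theta):
    """Rotate a point about the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

star = (0.0, 0.0)
neigh = [(3.0, 4.0), (6.0, 8.0), (0.0, 2.0)]   # distances 5, 10, 2
v = log_distance_features(star, neigh)

# Rotating every star by the same angle leaves the feature vector intact.
v_rot = log_distance_features(rotate(star, 0.7),
                              [rotate(p, 0.7) for p in neigh])
```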
GPU Accelerated Event Detection Algorithm
Energy Science and Technology Software Center (ESTSC)
2011-05-25
Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve a lot of numerical operations and are highly data-parallelizable.
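Step (a) of the approach, collapsing a multi-dimensional sequence into a univariate change series by comparing the SVD of successive windows, can be sketched as follows. The window length and the particular subspace dissimilarity (one minus the cosine between leading right singular vectors) are illustrative assumptions, not the GAEDA definitions.

```python
import numpy as np

def svd_change_series(X, w=20):
    """Collapse a (time x features) sequence into a univariate change series.

    Each score compares the dominant SVD direction of one window with that
    of the next window; large values suggest a change between successive
    windows, which a univariate anomaly detector can then flag."""
    scores = []
    for start in range(0, X.shape[0] - 2 * w, w):
        a = X[start:start + w]
        b = X[start + w:start + 2 * w]
        _, _, vt_a = np.linalg.svd(a - a.mean(0), full_matrices=False)
        _, _, vt_b = np.linalg.svd(b - b.mean(0), full_matrices=False)
        scores.append(1.0 - abs(float(vt_a[0] @ vt_b[0])))
    return np.array(scores)

# Toy stream: 3-D noise whose dominant signal axis switches halfway through.
rng = np.random.default_rng(0)
t = 400
base = rng.normal(size=(t, 3)) * 0.05
base[:200, 0] += np.sin(np.linspace(0, 20, 200))   # axis 0 active first
base[200:, 2] += np.sin(np.linspace(0, 20, 200))   # then axis 2 takes over
series = svd_change_series(base)
```

The score for the window pair straddling the switch (index 9, starting at row 180) stands out sharply against the near-zero scores elsewhere.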
SDR input power estimation algorithms
NASA Astrophysics Data System (ADS)
Briones, J. C.; Nappier, J. M.
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and a neural network algorithm designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
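Two of the intuitively designed combination rules can be sketched directly: majority voting over the greedy actions of the individual algorithms, and Boltzmann multiplication over their action probabilities. The toy Q-tables and probabilities below are invented for illustration and are not from the paper's maze experiments.

```python
import numpy as np

def majority_vote_action(q_tables, state):
    """Majority voting (MV): each RL algorithm votes for its greedy action;
    the ensemble takes the action with the most votes (ties -> lowest index)."""
    n_actions = q_tables[0].shape[1]
    votes = np.zeros(n_actions)
    for q in q_tables:
        votes[int(np.argmax(q[state]))] += 1
    return int(np.argmax(votes))

def boltzmann_multiplication_action(policies, state):
    """Boltzmann multiplication (BM): multiply the algorithms' action
    probabilities elementwise and take the most preferred action."""
    p = np.ones_like(policies[0][state])
    for pol in policies:
        p = p * pol[state]
    return int(np.argmax(p))

# Toy setting: three "algorithms", one state, three actions.
q = [np.array([[1.0, 0.2, 0.1]]),
     np.array([[0.9, 1.1, 0.0]]),
     np.array([[2.0, 0.5, 0.3]])]
a_mv = majority_vote_action(q, 0)        # two of three prefer action 0

pols = [np.array([[0.6, 0.3, 0.1]]),
        np.array([[0.2, 0.7, 0.1]]),
        np.array([[0.5, 0.4, 0.1]])]
a_bm = boltzmann_multiplication_action(pols, 0)  # product favors action 1
```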
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms, which assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
Fourier Lucas-Kanade algorithm.
Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha
2013-06-01
In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and a neural network algorithm designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
Benchmarking image fusion algorithm performance
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2012-06-01
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
Algorithms, complexity, and the sciences.
Papadimitriou, Christos
2014-11-11
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
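A minimal sketch of the multiplicative weights update rule mentioned above: each expert (or allele, in the genetics reading) has its weight scaled down according to the loss it incurs each round. The (1 - eta * loss) update form and the toy loss sequence are illustrative assumptions.

```python
def multiplicative_weights(losses, eta=0.5):
    """Multiplicative weights update: each round, scale every expert's
    weight by (1 - eta * loss); the normalized weights form the
    algorithm's distribution over experts.

    `losses` is a list of rounds, each a list of per-expert losses in [0, 1].
    """
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        w = [wi * (1.0 - eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 keeps incurring loss while expert 1 does not:
# the distribution shifts almost entirely onto expert 1.
dist = multiplicative_weights([[1.0, 0.0]] * 5)
```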
Projection Classification Based Iterative Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Ruiqiu; Li, Chen; Gao, Wenhua
2015-05-01
Iterative algorithms perform well in 3D image reconstruction because they do not need complete projection data. They can be applied to BGA solder joint inspection, but their convergence speed is low, particularly with x-ray laminography, which yields worse reconstructed images than complete-projection imaging. This paper explores a projection-classification-based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstruction value decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method. The fewer the projection images, the greater the advantage.
Firefly Algorithm for Structural Search.
Avendaño-Franco, Guillermo; Romero, Aldo H
2016-07-12
The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in the software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open-source Python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694
Some nonlinear space decomposition algorithms
Tai, Xue-Cheng; Espedal, M.
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
Seamless Merging of Hypertext and Algorithm Animation
ERIC Educational Resources Information Center
Karavirta, Ville
2009-01-01
Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.
Decryption of pure-position permutation algorithms.
Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang
2004-07-01
Pure position permutation image encryption algorithms, commonly used for image encryption and investigated in this work, are unfortunately fragile under known-plaintext attack. In view of this weakness of pure position permutation algorithms, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of the pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, by using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; and then, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown. PMID:15495308
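The fragility under known-plaintext attack is easy to demonstrate in the simplest case: when the pixel values happen to be distinct, a single known plaintext/ciphertext pair reveals the entire permutation. This is a toy illustration of the weakness, not the paper's fuzzy-ergodic-matrix method.

```python
import random

def permute_encrypt(pixels, perm):
    """Pure position permutation: pixel values are unchanged, only moved.
    Output position i receives the pixel from input position perm[i]."""
    return [pixels[p] for p in perm]

def known_plaintext_attack(plain, cipher):
    """With one known plain/cipher pair of distinct pixel values, the
    permutation is read off directly -- the weakness noted above."""
    pos = {v: i for i, v in enumerate(plain)}
    return [pos[v] for v in cipher]

rng = random.Random(1)
plain = list(range(16))        # distinct "pixel" values
perm = list(range(16))
rng.shuffle(perm)              # the secret key
cipher = permute_encrypt(plain, perm)
recovered = known_plaintext_attack(plain, cipher)
```

Real images have repeated pixel values, so recovery is probabilistic rather than exact; the paper's analysis quantifies that decryption probability.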
Old And New Algorithms For Toeplitz Systems
NASA Astrophysics Data System (ADS)
Brent, Richard P.
1988-02-01
Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n (log n)^2) algorithms and compare them with O(n^2) algorithms.
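As a concrete example of a "well known" O(n^2) Toeplitz algorithm of the kind surveyed, the sketch below implements Durbin's recursion for the symmetric Yule-Walker special case (this particular choice of algorithm and test data is ours, not the paper's) and cross-checks it against a dense solve.

```python
import numpy as np

def durbin(r):
    """Durbin's O(n^2) recursion for the Yule-Walker system T y = -r,
    where T is the symmetric Toeplitz matrix with first column
    (1, r[0], ..., r[n-2]).  Note it uses inner products each step,
    the property the survey above singles out for parallel machines."""
    n = len(r)
    y = [-r[0]]
    beta, alpha = 1.0, -r[0]
    for k in range(1, n):
        beta *= (1.0 - alpha * alpha)
        alpha = -(r[k] + sum(r[k - 1 - i] * y[i] for i in range(k))) / beta
        y = [y[i] + alpha * y[k - 1 - i] for i in range(k)] + [alpha]
    return np.array(y)

# Cross-check against a dense O(n^3) solve of the same Toeplitz system.
r = np.array([0.5, 0.2, 0.1, 0.05])
n = len(r)
T = np.array([[1.0 if i == j else r[abs(i - j) - 1] for j in range(n)]
              for i in range(n)])
y_fast = durbin(r)
y_dense = np.linalg.solve(T, -r)
```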
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
Squint mode SAR processing algorithms
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Jin, M.; Curlander, J. C.
1989-01-01
The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.
Fast algorithms for transport models
Manteuffel, T.A.
1992-12-01
The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).
ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.
Claire, Robert W.
1984-01-01
An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
Born approximation, scattering, and algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun
2015-05-01
In the past few decades, many imaging algorithms have been designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna, and remove it from the collected data.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
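Activity selection, one of the examples named above, shows the pattern concretely: sorting by finish time and greedily keeping compatible activities is justified by a dominance relation (a first activity that finishes earliest dominates any other choice of first activity, since it leaves at least as much room for the rest). The interval data are illustrative.

```python
def select_activities(intervals):
    """Classic greedy for activity selection: repeatedly take the activity
    with the earliest finish time among those compatible with the
    activities already chosen.  Runs in O(n log n) for the sort plus a
    linear scan."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda ab: ab[1]):
        if start >= last_finish:        # compatible with everything chosen
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
picked = select_activities(acts)
```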
Two Algorithms for Processing Electronic Nose Data
NASA Technical Reports Server (NTRS)
Young, Rebecca; Linnell, Bruce
2007-01-01
Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.
Blind Alley Aware ACO Routing Algorithm
NASA Astrophysics Data System (ADS)
Yoshikawa, Masaya; Otani, Kazuo
2010-10-01
The routing problem arises in various engineering fields and has been studied by many researchers. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys, and can therefore find the shortest route even if the map data contain blind alleys. Experiments using map data prove its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than without splitting.