Sample records for k-space computational method

  1. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
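
    The core of the scheme is easy to sketch: spatial derivatives come from FFTs, and a sinc-squared correction in k-space makes the second-order time stepping exact when the medium is homogeneous. Below is a minimal 1-D sketch with assumed grid and medium parameters, not the paper's 2-D/3-D implementation.

    ```python
    import numpy as np

    # k-t space leapfrog step: spectral Laplacian with a sinc^2 temporal
    # correction, exact for a homogeneous medium with sound speed c0.
    N, dx, c0 = 256, 1e-4, 1500.0               # grid points, spacing [m], speed [m/s]
    dt = 0.3 * dx / c0                           # time step [s]
    k = 2 * np.pi * np.fft.fftfreq(N, dx)        # angular wavenumbers [rad/m]
    kappa = np.sinc(c0 * k * dt / (2 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)

    x = (np.arange(N) - N // 2) * dx
    p = np.exp(-(x / (10 * dx)) ** 2)            # initial Gaussian pressure pulse
    p_prev = p.copy()

    for _ in range(200):
        lap = np.real(np.fft.ifft(-(kappa * k) ** 2 * np.fft.fft(p)))
        p, p_prev = 2.0 * p - p_prev + (c0 * dt) ** 2 * lap, p
    ```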

  2. A k-Space Method for Moderately Nonlinear Wave Propagation

    PubMed Central

    Jing, Yun; Wang, Tianren; Clement, Greg T.

    2013-01-01

    A k-space method for moderately nonlinear wave propagation in absorptive media is presented. The Westervelt equation is first transferred into k-space via Fourier transformation, and is solved by a modified wave-vector time-domain scheme. The present approach is not limited to forward propagation or parabolic approximation. One- and two-dimensional problems are investigated to verify the method by comparing results to analytic solutions and the finite-difference time-domain (FDTD) method. It is found that, to obtain accurate results in homogeneous media, the spatial sampling can be as coarse as two grid points per wavelength, and for a moderately nonlinear problem, the Courant–Friedrichs–Lewy number can be as large as 0.4. Through comparisons with the conventional FDTD method, the k-space method for nonlinear wave propagation is shown here to be computationally more efficient and accurate. The k-space method is then employed to study three-dimensional nonlinear wave propagation through the skull, which shows that relatively accurate focusing can be achieved in the brain at a high frequency by sending a low frequency from the transducer. Finally, an implementation of the k-space method on a single graphics processing unit required about one-seventh the computation time of a single-core CPU calculation. PMID:22899114

  3. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speedup in computation time on the order of 3–16× for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
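
    The kernel operation itself is an ordinary k-space convolution. As a hedged sketch (array shapes and kernel values are illustrative assumptions, not the paper's calibration procedure), combining channels looks like:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Collapse multicoil k-space to a single channel: convolve each coil's
    # k-space with its small combination kernel and sum the results.
    def combine_channels(kspace, kernels):
        # kspace: (n_coils, Ny, Nx) complex; kernels: (n_coils, ky, kx) complex
        return sum(fftconvolve(kspace[c], kernels[c], mode="same")
                   for c in range(kspace.shape[0]))
    ```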

  4. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg. T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contains the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, by assuming quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it will be shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
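
    The phase-conjugation step itself is simple to illustrate. In the sketch below (element count and sampling are assumptions), conjugating the spectrum of each recorded signal is equivalent to a circular time reversal, so re-emitting the result retraces the propagation back toward the original source point.

    ```python
    import numpy as np

    # Time reversal by phase conjugation: for real signals, conjugating the
    # FFT and inverting it returns the (circularly) time-reversed waveforms.
    def time_reverse(received):                  # received: (n_elements, n_samples)
        spectra = np.fft.fft(received, axis=1)
        return np.real(np.fft.ifft(np.conj(spectra), axis=1))
    ```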

  5. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to accurately model the dispersion relationships of the medium, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
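
    One of the listed ingredients, the staggered spatial grid, combines naturally with spectral differentiation: a half-cell shift is just a phase ramp in k-space. A minimal 1-D sketch (periodic grid assumed):

    ```python
    import numpy as np

    # Spectral x-derivative evaluated half a cell to the right of the input
    # samples, via the shift theorem: multiply by exp(+i k dx / 2).
    def ddx_staggered(u, dx):
        k = 2 * np.pi * np.fft.fftfreq(u.size, dx)
        return np.real(np.fft.ifft(1j * k * np.exp(1j * k * dx / 2) * np.fft.fft(u)))
    ```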

  6. Analytical Bistatic k Space Images Compared to Experimental Swept Frequency ISAR Images

    NASA Technical Reports Server (NTRS)

    Shaeffer, John; Cooper, Brett; Hom, Kam

    2004-01-01

    A case study of flat plate scattering images obtained by the analytical bistatic k space and experimental swept frequency ISAR methods is presented. The key advantage of the bistatic k space image is that a single excitation is required, i.e., one frequency / one angle. This means that prediction approaches such as MOM only need to compute one solution at a single frequency. Bistatic image Fourier transform data are obtained by computing the scattered field at various bistatic positions about the body in k space. Experimental image Fourier transform data are obtained from the measured response to a bandwidth of frequencies over a target rotation range.

  7. Utilization of the k-space Computational Method to Design an Intracavitary Transrectal Ultrasound Phased Array Applicator for Hyperthermia Treatment of Prostate Cancer

    NASA Astrophysics Data System (ADS)

    Al-Bataineh, Osama M.; Collins, Christopher M.; Sparrow, Victor W.; Keolian, Robert M.; Smith, Nadine Barrie

    2006-05-01

    This research utilizes the k-space computational method to design an intracavitary probe for hyperthermia treatment of prostate cancer. A three-dimensional (3D) photographic prostate model, utilizing imaging data from the Visible Human Project®, was the basis for inhomogeneous acoustical model development. The acoustical model accounted for sound speed, density, and absorption variations. The k-space computational method was used to simulate ultrasound wave propagation of the designed phased array through the acoustical model. To ensure the uniformity and spread of the pressure in the length of the array, and the steering and focusing capability in the width of the array, the equal-sized elements of the phased array were 1 × 14 mm. The anatomical measurements of the prostate were used to predict the final phased array specifications (4 × 20 planar array, 1.2 MHz, element size = 1 × 14 mm, array size = 56 × 20 mm). Good agreement between the exposimetry and the k-space results was achieved. As an example, the -3 dB distances of the focal volume differed by 9.1% in the propagation direction between the k-space prostate simulation and the exposimetry results. Temperature simulations indicated that the rectal wall temperature was elevated less than 2°C during hyperthermia treatment. The steering and focusing ability of the designed probe, in both the azimuth and propagation directions, was found to span the entire prostate volume with minimal grating lobes (-10 dB reduction from the main lobe) and minimal heat damage to the rectal wall. Evaluations of the probe included ex vivo and in vivo controlled experiments to deliver the required thermal dose to the targeted tissue. With a desired temperature plateau of 43.0°C, the MRI temperature results at the steady state were 42.9 ± 0.38°C and 43.1 ± 0.80°C for ex vivo and in vivo experiments, respectively. Unlike conventional computational methods, the k-space method provides a powerful tool to predict the pressure wavefield and temperature rise in sophisticated, large scale, 3D, inhomogeneous and coarse grid models.

  8. MR thermometry characterization of a hyperthermia ultrasound array designed using the k-space computational method

    PubMed Central

    Al-Bataineh, Osama M; Collins, Christopher M; Park, Eun-Joo; Lee, Hotaik; Smith, Nadine Barrie

    2006-01-01

    Background Ultrasound induced hyperthermia is a useful adjuvant to radiation therapy in the treatment of prostate cancer. A uniform thermal dose (43°C for 30 minutes) is required within the targeted cancerous volume for effective therapy. This requires specific ultrasound phased array design and an appropriate thermometry method. Inhomogeneous, acoustical, three-dimensional (3D) prostate models and economical computational methods provide necessary tools to predict the appropriate shape of hyperthermia phased arrays for better focusing. This research utilizes the k-space computational method and a 3D human prostate model to design an intracavitary ultrasound probe for hyperthermia treatment of prostate cancer. Evaluation of the probe includes ex vivo and in vivo controlled hyperthermia experiments using noninvasive magnetic resonance imaging (MRI) thermometry. Methods A 3D acoustical prostate model was created using photographic data from the Visible Human Project®. The k-space computational method was used on this coarse grid and inhomogeneous tissue model to simulate the steady state pressure wavefield of the designed phased array using the linear acoustic wave equation. To ensure the uniformity and spread of the pressure in the length of the array, and the focusing capability in the width of the array, the equally sized elements of the 4 × 20 element phased array were 1 × 14 mm. A probe was constructed according to the design in simulation using lead zirconate titanate (PZT-8) ceramic and a Delrin® plastic housing. Noninvasive MRI thermometry and a switching feedback controller were used to accomplish ex vivo and in vivo hyperthermia evaluations of the probe. Results Both exposimetry and k-space simulation results demonstrated acceptable agreement within 9%. With a desired temperature plateau of 43.0°C, ex vivo and in vivo controlled hyperthermia experiments showed that the MRI temperature at the steady state was 42.9 ± 0.38°C and 43.1 ± 0.80°C, respectively, for 20 minutes of heating. Conclusion Unlike conventional computational methods, the k-space method provides a powerful tool to predict the pressure wavefield in large scale, 3D, inhomogeneous and coarse grid tissue models. Noninvasive MRI thermometry supports the efficacy of this probe and the feedback controller in an in vivo hyperthermia treatment of canine prostate. PMID:17064421

  9. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue-realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients, making them efficient to encode numerically. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model are demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
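
    A key reason power-law absorption is convenient in this framework is that fractional powers of the Laplacian are trivial to apply spectrally: (-∇²)^(y/2) has the Fourier symbol |k|^y. A 1-D sketch of that building block follows; the exponent y and grid are assumptions, and the paper's full absorption operator combines such terms with time derivatives.

    ```python
    import numpy as np

    # Fractional Laplacian by Fourier collocation: multiply by |k|^y in k-space.
    def frac_laplacian(p, dx, y):
        k = 2 * np.pi * np.fft.fftfreq(p.size, dx)
        return np.real(np.fft.ifft(np.abs(k) ** y * np.fft.fft(p)))
    ```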

  10. MR thermometry characterization of a hyperthermia ultrasound array designed using the k-space computational method.

    PubMed

    Al-Bataineh, Osama M; Collins, Christopher M; Park, Eun-Joo; Lee, Hotaik; Smith, Nadine Barrie

    2006-10-25

    Ultrasound induced hyperthermia is a useful adjuvant to radiation therapy in the treatment of prostate cancer. A uniform thermal dose (43 degrees C for 30 minutes) is required within the targeted cancerous volume for effective therapy. This requires specific ultrasound phased array design and an appropriate thermometry method. Inhomogeneous, acoustical, three-dimensional (3D) prostate models and economical computational methods provide necessary tools to predict the appropriate shape of hyperthermia phased arrays for better focusing. This research utilizes the k-space computational method and a 3D human prostate model to design an intracavitary ultrasound probe for hyperthermia treatment of prostate cancer. Evaluation of the probe includes ex vivo and in vivo controlled hyperthermia experiments using noninvasive magnetic resonance imaging (MRI) thermometry. A 3D acoustical prostate model was created using photographic data from the Visible Human Project. The k-space computational method was used on this coarse grid and inhomogeneous tissue model to simulate the steady state pressure wavefield of the designed phased array using the linear acoustic wave equation. To ensure the uniformity and spread of the pressure in the length of the array, and the focusing capability in the width of the array, the equally sized elements of the 4 x 20 element phased array were 1 x 14 mm. A probe was constructed according to the design in simulation using lead zirconate titanate (PZT-8) ceramic and a Delrin plastic housing. Noninvasive MRI thermometry and a switching feedback controller were used to accomplish ex vivo and in vivo hyperthermia evaluations of the probe. Both exposimetry and k-space simulation results demonstrated acceptable agreement within 9%. With a desired temperature plateau of 43.0 degrees C, ex vivo and in vivo controlled hyperthermia experiments showed that the MRI temperature at the steady state was 42.9 +/- 0.38 degrees C and 43.1 +/- 0.80 degrees C, respectively, for 20 minutes of heating. Unlike conventional computational methods, the k-space method provides a powerful tool to predict the pressure wavefield in large scale, 3D, inhomogeneous and coarse grid tissue models. Noninvasive MRI thermometry supports the efficacy of this probe and the feedback controller in an in vivo hyperthermia treatment of canine prostate.

  11. FSH: fast spaced seed hashing exploiting adjacent hashes.

    PubMed

    Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia

    2018-01-01

    Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be rapidly computed by exploiting the large overlap between consecutive k-mers, spaced seed hashing is usually computed from scratch for each position in the input sequence, thus resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomics reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced seed hashes. In the experiments, our algorithm computes the hash values of spaced seeds with a speedup over the traditional approach of between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced seed hashing is a routine task for several bioinformatics applications. FSH performs this task efficiently and raises the question of whether other hashing schemes can be exploited to further improve the speedup. This has the potential of major impact in the field, making spaced seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
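
    For orientation, here is the naive spaced-seed hash that FSH accelerates, sketched with an assumed 2-bit base encoding. FSH's speedup comes from reusing the bits shared between hashes at adjacent positions, which this baseline deliberately recomputes from scratch.

    ```python
    # Naive spaced-seed hashing: pack the bases at the '1' positions of the
    # seed into a 2-bit-per-base integer hash.
    ENC = {"A": 0, "C": 1, "G": 2, "T": 3}

    def spaced_hash(read, seed, pos):
        h = 0
        for j, care in enumerate(seed):
            if care == "1":
                h = (h << 2) | ENC[read[pos + j]]
        return h

    read, seed = "ACGTTGCA", "1011"   # '1' = match position, '0' = wildcard
    hashes = [spaced_hash(read, seed, i)
              for i in range(len(read) - len(seed) + 1)]
    ```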

  12. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    PubMed

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
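
    The central operation, projecting each k-space time series onto the estimated subspace with a plain matrix multiply, is easy to sketch; all shapes and the random stand-in data below are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_t, r = 500, 5
    calib = rng.standard_normal((n_t, 64))    # fully sampled k-space locations
    U, _, _ = np.linalg.svd(calib, full_matrices=False)
    Ur = U[:, :r]                             # low-dimensional temporal subspace
    X = rng.standard_normal((n_t, 4096))      # remaining k-space locations
    X_proj = Ur @ (Ur.conj().T @ X)           # projection: one matrix multiply,
                                              # no per-iteration FFTs required
    ```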

  13. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  14. Hierarchical Kohonen net for anomaly detection in network security.

    PubMed

    Sarasamma, Suseela T; Zhu, Qiuming A; Huff, Julie

    2005-04-01

    A novel multilevel hierarchical Kohonen Net (K-Map) for an intrusion detection system is presented. Each level of the hierarchical map is modeled as a simple winner-take-all K-Map. One significant advantage of this multilevel hierarchical K-Map is its computational efficiency. Unlike other statistical anomaly detection methods, such as the nearest neighbor approach, K-means clustering, or probabilistic analysis, that employ distance computation in the feature space to identify the outliers, our approach does not involve costly point-to-point computation in organizing the data into clusters. Another advantage is the reduced network size. We use the classification capability of the K-Map on selected dimensions of the data set in detecting anomalies. Randomly selected subsets that contain both attacks and normal records from the KDD Cup 1999 benchmark data are used to train the hierarchical net. We use a confidence measure to label the clusters. Then we use the test set from the same KDD Cup 1999 benchmark to test the hierarchical net. We show that a hierarchical K-Map in which each layer operates on a small subset of the feature space is superior to a single-layer K-Map operating on the whole feature space in detecting a variety of attacks, in terms of detection rate as well as false positive rate.
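
    The winner-take-all rule at the heart of each K-Map level is a one-liner; a minimal sketch with assumed feature dimensions:

    ```python
    import numpy as np

    # Winner-take-all: return the index of the unit whose weight vector is
    # closest (Euclidean) to the input record x.
    def winner(weights, x):                   # weights: (n_units, n_features)
        return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    ```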

  15. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    PubMed

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.

  16. Computational studies of Ras and PI3K

    NASA Technical Reports Server (NTRS)

    Ren, Lei; Cucinotta, Francis A.

    2004-01-01

    Until recently, experimental techniques in molecular cell biology have been the primary means to investigate biological risk from space radiation. However, computational modeling provides an alternative theoretical approach, which utilizes various computational tools to simulate proteins, nucleotides, and their interactions. In this study, we focus on using molecular mechanics (MM) and molecular dynamics (MD) to study the mechanism of protein-protein binding and to estimate the binding free energy between proteins. Ras is a key element in a variety of cell processes, and its activation of phosphoinositide 3-kinase (PI3K) is important for survival of transformed cells. Different computational approaches for this particular study are presented to calculate the solvation energies and binding free energies of H-Ras and PI3K. The goal of this study is to establish computational methods to investigate the roles different proteins play in the cellular responses to space radiation, including modification of protein function through gene mutation, and to support studies in molecular cell biology and theoretical kinetics models for our risk assessment project.

  17. Two modeling strategies for empirical Bayes estimation

    PubMed Central

    Efron, Bradley

    2014-01-01

    Empirical Bayes methods use the data from parallel experiments, for instance observations Xk ~ 𝒩 (Θk, 1) for k = 1, 2, …, N, to estimate the conditional distributions Θk|Xk. There are two main estimation strategies: modeling on the θ space, called "g-modeling" here, and modeling on the x space, called "f-modeling." The two approaches are described and compared. A series of computational formulas are developed to assess their frequentist accuracy. Several examples, both contrived and genuine, show the strengths and limitations of the two strategies. PMID:25324592

  18. Image Reconstruction from Highly Undersampled (k, t)-Space Data with Joint Partial Separability and Sparsity Constraints

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Christodoulou, Anthony G.; Liang, Zhi-Pei

    2012-01-01

    Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. PMID:22695345

  19. Comparison of Spatiotemporal Mapping Techniques for Enormous ETL and Exploitation Patterns

    NASA Astrophysics Data System (ADS)

    Deiotte, R.; La Valley, R.

    2017-10-01

    The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space filling curve, a relative of Peano's 1890 space filling curve, for spatial hashing, and interleave temporal hashes to generate a spatiotemporal encoding. However, other space-filling curves exist that provide different manifold coverings; these could promote better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index. However, other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores several methods using data sets ranging in size from 1K to 10M observations and provides a comparison of the methods.
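
    The Z-order hashing that the Niemeyer and Usher encodings build on is compact enough to sketch directly (2-D, fixed-precision indices assumed):

    ```python
    # Morton/Z-order code: interleave the bits of two coordinate indices so
    # that nearby points tend to get nearby one-dimensional hashes.
    def morton2d(x, y, bits=16):
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
        return z
    ```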

  20. Radiofrequency pulse design using nonlinear gradient magnetic fields.

    PubMed

    Kopanoglu, Emre; Constable, R Todd

    2015-09-01

    An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. © 2014 Wiley Periodicals, Inc.
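
    The SEF selection step can be pictured with the textbook matching-pursuit loop below, sketched against a generic dictionary; this is not the authors' exact variant, and the dictionary D (candidate SEFs as columns) and target profile y are assumptions.

    ```python
    import numpy as np

    # Greedy matching pursuit: repeatedly pick the dictionary column most
    # correlated with the residual, then deflate the residual.
    def matching_pursuit(D, y, n_atoms):
        r, picked = y.astype(complex), []
        for _ in range(n_atoms):
            j = int(np.argmax(np.abs(D.conj().T @ r)))
            picked.append(j)
            a = D[:, j]
            r = r - a * (a.conj() @ r) / (a.conj() @ a)
        return picked
    ```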

  1. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This document contains presentations given at the Workshop on Computational Turbulence Modeling, held 15-16 Sep. 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.

  2. Crossing symmetry in alpha space

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; van Rees, Balt C.

    2017-11-01

    We initiate the study of the conformal bootstrap using Sturm-Liouville theory, specializing to four-point functions in one-dimensional CFTs. We do so by decomposing conformal correlators using a basis of eigenfunctions of the Casimir which are labeled by a complex number α. This leads to a systematic method for computing conformal block decompositions. Analyzing bootstrap equations in alpha space turns crossing symmetry into an eigenvalue problem for an integral operator K. The operator K is closely related to the Wilson transform, and some of its eigenfunctions can be found in closed form.

  3. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

    NASA Astrophysics Data System (ADS)

    Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

    2018-04-01

    We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time harmonic wave electric field. This code addresses the need for a configuration space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.

  4. The k-space origins of scattering in Bi2Sr2CaCu2O8+x

    NASA Astrophysics Data System (ADS)

    Alldredge, Jacob W.; Calleja, Eduardo M.; Dai, Jixia; Eisaki, H.; Uchida, S.; McElroy, Kyle

    2013-08-01

    We demonstrate a general, computer automated procedure that inverts the reciprocal space scattering data (q-space) that are measured by spectroscopic imaging scanning tunnelling microscopy (SI-STM) in order to determine the momentum space (k-space) scattering structure. This allows a detailed examination of the k-space origins of the quasiparticle interference (QPI) pattern in Bi2Sr2CaCu2O8+x within the theoretical constraints of the joint density of states (JDOS). Our new method allows measurement of the differences between the positive and negative energy dispersions, the gap structure and an energy dependent scattering length scale. Furthermore, it resolves the transition between the dispersive QPI and the checkerboard (q1* excitation). We have measured the k-space scattering structure over a wide range of doping (p ≈ 0.22-0.08), including regions where the octet model is not applicable. Our technique allows the complete mapping of the k-space scattering origins of the spatial excitations in Bi2Sr2CaCu2O8+x, which allows for better comparisons between SI-STM and other experimental probes of the band structure. By applying our new technique to such a heavily studied compound, we can validate our new general approach for determining the k-space scattering origins from SI-STM data.

  5. The k-space origins of scattering in Bi2Sr2CaCu2O8+x.

    PubMed

    Alldredge, Jacob W; Calleja, Eduardo M; Dai, Jixia; Eisaki, H; Uchida, S; McElroy, Kyle

    2013-08-21

    We demonstrate a general, computer automated procedure that inverts the reciprocal space scattering data (q-space) that are measured by spectroscopic imaging scanning tunnelling microscopy (SI-STM) in order to determine the momentum space (k-space) scattering structure. This allows a detailed examination of the k-space origins of the quasiparticle interference (QPI) pattern in Bi2Sr2CaCu2O8+x within the theoretical constraints of the joint density of states (JDOS). Our new method allows measurement of the differences between the positive and negative energy dispersions, the gap structure and an energy dependent scattering length scale. Furthermore, it resolves the transition between the dispersive QPI and the checkerboard (q1* excitation). We have measured the k-space scattering structure over a wide range of doping (p ∼ 0.22-0.08), including regions where the octet model is not applicable. Our technique allows the complete mapping of the k-space scattering origins of the spatial excitations in Bi2Sr2CaCu2O8+x, which allows for better comparisons between SI-STM and other experimental probes of the band structure. By applying our new technique to such a heavily studied compound, we can validate our new general approach for determining the k-space scattering origins from SI-STM data.

  6. Geometric shapes inversion method of space targets by ISAR image segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Chao-ying; Xing, Xiao-yu; Yin, Hong-cheng; Li, Chen-guang; Zeng, Xiang-yun; Xu, Gao-gui

    2017-11-01

    The geometric shape of a target is an effective characteristic in the process of space target recognition. This paper proposes a method for shape inversion of space targets based on component segmentation of ISAR images. The Radon transform, Hough transform, K-means clustering, and triangulation are introduced into ISAR image processing. First, we use the Radon transform and edge detection to extract the target's main-body axis and solar-panel axis from the ISAR image. Then the target's main body, solar panels, and rectangular and circular antennas are segmented from the ISAR image based on image detection theory. Finally, the size of each structural component is computed. The effectiveness of this method is verified using simulation data for typical targets.

  7. RF Pulse Design using Nonlinear Gradient Magnetic Fields

    PubMed Central

    Kopanoglu, Emre; Constable, R. Todd

    2014-01-01

    Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (Matching-Pursuit and Conjugate-Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286

  8. Nontraditional method for determining unperturbed orbits of unknown space objects using incomplete optical observational data

    NASA Astrophysics Data System (ADS)

    Perov, N. I.

    1985-02-01

    A physical-geometrical method for computing the orbits of earth satellites on the basis of an inadequate number of angular observations (N ≤ 3) was developed. Specifically, a new method has been developed for calculating the elements of Keplerian orbits of unidentified artificial satellites using two angular observations (α_k, δ_k, k = 1, 2). The first section gives procedures for determining the topocentric distance to an AES on the basis of one optical observation. This is followed by description of a very simple method for determining unperturbed orbits using two satellite position vectors and a time interval, which is applicable even in the case of antiparallel AES position vectors, a method designated the R_2 iterations method.

  9. Automated mapping of the ocean floor using the theory of intrinsic random functions of order k

    USGS Publications Warehouse

    David, M.; Crozel, D.; Robb, James M.

    1986-01-01

    High-quality contour maps can be computer drawn from single track echo-sounding data by combining Universal Kriging and the theory of intrinsic random functions of order K (IRFK). These methods interpolate values among the closely spaced points that lie along relatively widely spaced lines. The technique provides a variance which can be contoured as a quantitative measure of map precision. The technique can be used to evaluate alternative survey trackline configurations and data collection intervals, and can be applied to other types of oceanographic data. © 1986 D. Reidel Publishing Company.

  10. Advances in locally constrained k-space-based parallel MRI.

    PubMed

    Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S

    2006-02-01

    In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radial and spirals. As a result, the time requirements are drastically reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories. Copyright 2006 Wiley-Liss, Inc.

  11. Deep learning with domain adaptation for accelerated projection-reconstruction MR.

    PubMed

    Han, Yoseob; Yoo, Jaejun; Kim, Hak Hee; Shin, Hee Jung; Sung, Kyunghyun; Ye, Jong Chul

    2018-09-01

    The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines causes longer acquisition time, making it more difficult for routine clinical use. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. The proposed deep network removes the streaking artifacts from the artifact-corrupted images. To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of X-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality for a different organ. We demonstrate the possibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of image quality and computation time. © 2018 International Society for Magnetic Resonance in Medicine.

  12. Fast implementation for compressive recovery of highly accelerated cardiac cine MRI using the balanced sparse model.

    PubMed

    Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P

    2017-04-01

    Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI), promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared to a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
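
    The soft-thresholding update at the core of such a solver is compact; a hedged sketch for complex coefficients, with the step size and regularization weight as assumptions:

    ```python
    import numpy as np

    # One iterative soft-thresholding (ISTA-style) update for l1-regularized
    # least squares: gradient step on the data term, then magnitude shrinkage.
    def ista_step(x, grad, lam, step):
        z = x - step * grad
        mag = np.abs(z)
        shrink = np.maximum(mag - step * lam, 0.0)
        return z * shrink / np.maximum(mag, 1e-12)
    ```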

  13. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    PubMed

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2018-07-01

    Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is proposed in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and the classical zero-filling (ZF) methods regardless of the undersampling rates, the noise levels, and the image structures. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their sparsified counterparts produced by the singularizing operator.

  14. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †

    PubMed Central

    Murdani, Muhammad Harist; Hong, Bonghee

    2018-01-01

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366

  15. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.

    PubMed

    Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee

    2018-03-24

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
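
    The weighted-sum metric can be pictured as below; the weight, scaling, and the exact road-network overlap measure are illustrative assumptions, not values from the paper.

    ```python
    # Hypothetical combined proximity: smaller is closer. road_overlap in
    # [0, 1] stands in for the intersecting-road-network term.
    def proximity(centroid_dist_km, road_overlap, w=0.7):
        return w * centroid_dist_km + (1 - w) * (1 - road_overlap)
    ```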

  16. Bennett's acceptance ratio and histogram analysis methods enhanced by umbrella sampling along a reaction coordinate in configurational space.

    PubMed

    Kim, Ilsoo; Allen, Toby W

    2012-04-28

    Free energy perturbation, a method for computing the free energy difference between two states, is often combined with non-Boltzmann biased sampling techniques in order to accelerate the convergence of free energy calculations. Here we present a new extension of the Bennett acceptance ratio (BAR) method by combining it with umbrella sampling (US) along a reaction coordinate in configurational space. In this approach, which we call Bennett acceptance ratio with umbrella sampling (BAR-US), the conditional histogram of energy difference (a mapping of the 3N-dimensional configurational space via a reaction coordinate onto 1D energy difference space) is weighted for marginalization with the associated population density along a reaction coordinate computed by US. This procedure produces marginal histograms of energy difference, from forward and backward simulations, with higher overlap in energy difference space, rendering free energy difference estimations using BAR statistically more reliable. In addition to BAR-US, two histogram analysis methods, termed Bennett overlapping histograms with US (BOH-US) and Bennett-Hummer (linear) least square with US (BHLS-US), are employed as consistency and convergence checks for free energy difference estimation by BAR-US. The proposed methods (BAR-US, BOH-US, and BHLS-US) are applied to a 1-dimensional asymmetric model potential, as has been used previously to test free energy calculations from non-equilibrium processes. We then consider the more stringent test of a 1-dimensional strongly (but linearly) shifted harmonic oscillator, which exhibits no overlap between two states when sampled using unbiased Brownian dynamics. We find that the efficiency of the proposed methods is enhanced over the original Bennett's methods (BAR, BOH, and BHLS) through fast uniform sampling of energy difference space via US in configurational space. We apply the proposed methods to the calculation of the electrostatic contribution to the absolute solvation free energy (excess chemical potential) of water. We then address the controversial issue of ion selectivity in the K(+) ion channel, KcsA. We have calculated the relative binding affinity of K(+) over Na(+) within a binding site of the KcsA channel for which different, though adjacent, K(+) and Na(+) configurations exist, ideally suited to these US-enhanced methods. Our studies demonstrate that the significant improvements in free energy calculations obtained using the proposed methods can have serious consequences for elucidating biological mechanisms and for the interpretation of experimental data.
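
    For reference, the BAR estimate itself reduces to a one-dimensional root find; a minimal sketch for equal forward and reverse sample counts, with work values in units of kT (sign conventions here are an assumption):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Bennett acceptance ratio: dF solves
    #   sum_F fermi(w_F - dF) = sum_R fermi(w_R + dF),
    # where fermi(x) = 1 / (1 + exp(x)).
    def bar_delta_f(w_forward, w_reverse):
        fermi = lambda x: 1.0 / (1.0 + np.exp(x))
        g = lambda dF: fermi(w_forward - dF).sum() - fermi(w_reverse + dF).sum()
        return brentq(g, -50.0, 50.0)
    ```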

  17. Computer study of emergency shutdowns of a 60-kilowatt reactor Brayton space power system

    NASA Technical Reports Server (NTRS)

    Tew, R. C.; Jefferies, K. S.

    1974-01-01

    A digital computer study of emergency shutdowns of a 60-kWe reactor Brayton power system was conducted. Malfunctions considered were (1) loss of reactor coolant flow, (2) loss of Brayton system gas flow, (3) turbine overspeed, and (4) a reactivity insertion error. Loss of reactor coolant flow was the most serious malfunction for the reactor. Methods for moderating the reactor transients due to this malfunction are considered.

  18. Does Prop-2-ynylideneamine, HC≡CCH=NH, Exist in Space? A Theoretical and Computational Investigation

    PubMed Central

    Osman, Osman I.; Elroby, Shaaban A.; Aziz, Saadullah G.; Hilal, Rifaat H.

    2014-01-01

    MP2, DFT and CCSD methods with 6-311++G** and aug-cc-pvdz basis sets have been used to probe the structural changes and relative energies of E-prop-2-ynylideneamine (I), Z-prop-2-ynylideneamine (II), prop-1,2-diene-1-imine (III) and vinyl cyanide (IV). The near-equivalence of the energies and the origin of the preference among isomers and tautomers were investigated by NBO calculations using HF and B3LYP methods with 6-311++G** and aug-cc-pvdz basis sets. All substrates have Cs symmetry. The optimized geometries were found to be mainly dependent on the theoretical method. All selected levels of theory computed an I/II total isomerization energy (ΔE) of 1.707 to 3.707 kJ/mol in favour of II at 298.15 K. MP2 and CCSD methods clearly indicated the preference of II over III, while the B3LYP functional predicted nearly equal total energies. All tested levels of theory yielded a global II/IV tautomerization total energy (ΔE) of 137.3–148.4 kJ/mol in favour of IV at 298.15 K. The negative values of ΔS indicated that IV is favoured at low temperature. At high temperature, a reverse tautomerization becomes spontaneous and II is preferred. The existence of II in space is discussed through the interpretation and analysis of the thermodynamic and kinetic studies of this tautomerization reaction and the presence of similar compounds in the Interstellar Medium (ISM). PMID:24950178

  19. Modeling open nanophotonic systems using the Fourier modal method: generalization to 3D Cartesian coordinates.

    PubMed

    Häyrynen, Teppo; Osterkryger, Andreas Dyhl; de Lasson, Jakob Rosenkrantz; Gregersen, Niels

    2017-09-01

    Recently, an open geometry Fourier modal method based on a new combination of an open boundary condition and a non-uniform k-space discretization was introduced for rotationally symmetric structures, providing a more efficient approach for modeling nanowires and micropillar cavities [J. Opt. Soc. Am. A 33, 1298 (2016)]. Here, we generalize the approach to three-dimensional (3D) Cartesian coordinates, allowing for the modeling of rectangular geometries in open space. The open boundary condition is a consequence of having an infinite computational domain described using basis functions that expand the whole space. The strength of the method lies in discretizing the Fourier integrals using a non-uniform circular "dartboard" sampling of the Fourier k space. We show that our sampling technique leads to a more accurate description of the continuum of the radiation modes that leak out from the structure. We also compare our approach to conventional discretization with direct and inverse factorization rules commonly used in established Fourier modal methods. We apply our method to a variety of optical waveguide structures and demonstrate that the method leads to a significantly improved convergence, enabling more accurate and efficient modeling of open 3D nanophotonic structures.

  20. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring in high-resolution diffusion-weighted magnetic resonance imaging experiments. However, object motion that differs from shot to shot causes phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and performs comparably to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo results demonstrate the performance of the method.
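
    As a rough illustration of the kind of correction involved, the sketch below demodulates an assumed constant-plus-linear motion-induced phase from a single shot. The phase model, its parameters, and the function name are hypothetical simplifications; the paper's maximum likelihood estimator for the phase parameters is not reproduced here.

        import numpy as np

        # Assumed model: rigid-body motion during the diffusion gradients imposes
        # a phase phi(x) = phi0 + g . x on each shot's image. Correcting a shot
        # then amounts to demodulating that phase before combining shots.
        def correct_shot(kspace_shot, phi0, g, fov=(0.24, 0.24)):
            ny, nx = kspace_shot.shape
            img = np.fft.ifft2(np.fft.ifftshift(kspace_shot))
            y = np.linspace(-fov[0] / 2, fov[0] / 2, ny, endpoint=False)
            x = np.linspace(-fov[1] / 2, fov[1] / 2, nx, endpoint=False)
            Y, X = np.meshgrid(y, x, indexing="ij")
            phase = phi0 + g[0] * Y + g[1] * X     # estimated motion-induced phase
            img_corr = img * np.exp(-1j * phase)   # demodulate the phase error
            return np.fft.fftshift(np.fft.fft2(img_corr))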

  1. Radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper we analyze the accuracy and efficiency of several radiative transfer models for inferring cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR). The radiative transfer models are the exact discrete ordinate and matrix operator methods with matrix exponential, and the approximate asymptotic and equivalent Lambertian cloud models. To deal with the computationally expensive radiative transfer calculations, several acceleration techniques, such as the telescoping technique, the method of false discrete ordinates, the correlated k-distribution method, and principal component analysis (PCA), are used. We found that, for the EPIC oxygen A-band absorption channel at 764 nm, the exact models using the correlated k-distribution in conjunction with PCA yield an accuracy better than 1.5% and a computation time of 18 s for radiance calculations at 5 viewing zenith angles.

  2. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported may be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm by Kanungo et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O(kn) time, where k denotes the current number of centers. Traditional techniques for accelerating nearest neighbor searching involve storing the k centers in a data structure. However, because of the iterative nature of the algorithm, this data structure would need to be rebuilt with each new iteration. Our approach is instead to store the data points in a kd-tree data structure. The assignment of points to nearest centers is carried out by a filtering process, which successively eliminates centers that cannot possibly be the nearest neighbor for a given region of space. This algorithm is significantly faster because large groups of data points can be assigned to their nearest center in a single operation. Preliminary results on a number of real Landsat datasets show that our revised ISOCLUS-like scheme runs about twice as fast.
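
    For intuition about why a spatial data structure helps, here is a minimal Python sketch of the assignment-update loop accelerated with a kd-tree. Note the simplification: Kanungo et al.'s filtering algorithm stores the data points in the tree and prunes candidate centers per node, whereas this sketch builds the tree on the centers instead; the data are synthetic.

        import numpy as np
        from scipy.spatial import cKDTree

        # Sketch of the assignment step that dominates each ISOCLUS/k-means
        # iteration, using a kd-tree query instead of a naive O(kn) scan.
        def assign_and_update(points, centers):
            tree = cKDTree(centers)
            _, nearest = tree.query(points)            # index of nearest center
            new_centers = np.array([
                points[nearest == j].mean(axis=0) if np.any(nearest == j)
                else centers[j]                        # keep empty clusters fixed
                for j in range(len(centers))
            ])
            return nearest, new_centers

        rng = np.random.default_rng(0)
        pts = rng.normal(size=(10000, 4))              # n points in d = 4
        ctr = pts[rng.choice(len(pts), size=8, replace=False)]
        for _ in range(10):                            # iterate to convergence
            labels, ctr = assign_and_update(pts, ctr)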

  3. The graphene oxide membrane immersing in the aqueous solution studied by electrochemical impedance spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjing; Chen, Zhe; Yao, Lei; Wang, Xiao; Fu, Ping; Lin, Zhidong

    2018-04-01

    The interlayer spacing of graphene oxide (GO) is a key property of a GO membrane. To probe the variation of the interlayer spacing of a GO membrane immersed in KCl aqueous solution, electrochemical impedance spectroscopy (EIS), X-ray diffraction (XRD) and computational calculations were used in this study. The XRD patterns show that soaking in KCl aqueous solution leads to an increase of the interlayer spacing of the GO membrane. The EIS results indicate that during the immersion process, the charge transfer resistance of the GO membrane decreases first and then increases. The computational calculations confirm that intercalated water molecules can increase the interlayer spacing of the GO membrane, while the permeation of K+ ions would decrease it. All results are consistent with each other, suggesting that during immersion the interlayer spacing of GO first enlarges and then decreases. EIS can thus serve as a promising online method for examining the interlayer spacing of GO in aqueous solution.

  4. Integrable deformations of the Gk1 ×Gk2 /Gk1+k2 coset CFTs

    NASA Astrophysics Data System (ADS)

    Sfetsos, Konstantinos; Siampos, Konstantinos

    2018-02-01

    We study the effective action for the integrable λ-deformation of the Gk1 ×Gk2 /Gk1+k2 coset CFTs. For unequal levels these models do not fall into the general discussion of λ-deformations of CFTs corresponding to symmetric spaces and have many attractive features. We show that the perturbation is driven by parafermion bilinears and we revisit the derivation of their algebra. We uncover a non-trivial symmetry of these models' parameter space, which has not been encountered before in the literature. Using field-theoretical methods and the effective action we compute the β-function, exact in the deformation parameter, and explicitly demonstrate the existence of a fixed point in the IR corresponding to the Gk1-k2 ×Gk2 /Gk1 coset CFTs. The same result is verified using gravitational methods for G = SU (2). We examine various limiting cases previously considered in the literature and find agreement.

  5. Large-scale kinetic energy spectra from Eulerian analysis of EOLE wind data

    NASA Technical Reports Server (NTRS)

    Desbois, M.

    1975-01-01

    A data set of 56,000 winds determined from the horizontal displacements of EOLE balloons at the 200 mb level in the Southern Hemisphere during the period October 1971-February 1972 is utilized for the computation of planetary- and synoptic-scale kinetic energy space spectra. However, the random distribution of measurements in space and time presents some problems for the spectral analysis. Two different approaches are used, i.e., a harmonic analysis of daily wind values at equidistant points obtained by space-time interpolation of the data, and a correlation method using the direct measurements. Both methods give similar results for small wavenumbers, but the second is more accurate for higher wavenumbers (k ≥ 10). The spectra show a maximum at wavenumbers 5 and 6 due to baroclinic instability and then decrease for high wavenumbers up to wavenumber 35 (the limit of the analysis), according to an inverse power law k^(-p), with p close to 3.
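
    As a small worked example of the final claim, the sketch below fits the exponent p in E(k) ∝ k^(-p) by least squares in log-log space. The spectrum here is synthetic, standing in for the EOLE-derived spectra.

        import numpy as np

        # Estimate the spectral slope p in E(k) ~ k**(-p) over the synoptic
        # range by a straight-line fit in log-log space (synthetic data).
        k = np.arange(10, 36)                       # wavenumbers 10..35
        rng = np.random.default_rng(1)
        E = 1e3 * k ** (-3.0) * np.exp(0.05 * rng.normal(size=k.size))
        slope, intercept = np.polyfit(np.log(k), np.log(E), 1)
        print(f"estimated p = {-slope:.2f}")        # close to 3 by construction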

  6. Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.

    PubMed

    Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole

    2015-05-12

    Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post-Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1% and 10% of the 134k organic molecules, by reproducing enthalpies of all remaining molecules at the density functional theory level of accuracy.
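
    A minimal sketch of the composite Δ-learning strategy follows, under the assumption that a kernel ridge model learns the baseline-to-target difference. The data are synthetic stand-ins, not the 134k-molecule set.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        # Delta-ML idea: learn the *difference* between a cheap baseline method
        # and an expensive target method, then predict
        #   target ~= baseline + ML-correction.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 30))             # molecular descriptors (synthetic)
        E_base = X @ rng.normal(size=30)            # cheap method (e.g. semiempirical)
        E_true = E_base + np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2   # accurate method

        train = slice(0, 200)                       # train on a small subset
        model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
        model.fit(X[train], E_true[train] - E_base[train])       # learn the delta

        E_pred = E_base + model.predict(X)          # baseline + learned correction
        rmse = np.sqrt(np.mean((E_pred[200:] - E_true[200:]) ** 2))
        print(f"out-of-sample RMSE: {rmse:.3f}")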

  7. Advanced nodal neutron diffusion method with space-dependent cross sections: ILLICO-VX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajic, H.L.; Ougouag, A.M.

    1987-01-01

    Advanced transverse integrated nodal methods for neutron diffusion developed since the 1970s require that node- or assembly-homogenized cross sections be known. The underlying structural heterogeneity can be accurately accounted for in homogenization procedures by the use of heterogeneity or discontinuity factors. Other (milder) types of heterogeneity, burnup-induced or due to thermal-hydraulic feedback, can be resolved by explicitly accounting for the spatial variations of material properties. This can be done during the nodal computations via nonlinear iterations. The new method has been implemented in the code ILLICO-VX (ILLICO variable cross-section method). Numerous numerical tests were performed. As expected, the convergence rate of ILLICO-VX is lower than that of ILLICO, requiring approximately 30% more outer iterations per k_eff computation. The methodology has also been implemented as the NOMAD-VX option of the NOMAD multicycle, multigroup, two- and three-dimensional nodal diffusion depletion code. The burnup-induced heterogeneities (space dependence of cross sections) are calculated during the burnup steps.

  8. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    PubMed

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
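
    A minimal sketch of the extension approach described above: time is appended as a third, scaled coordinate and ordinary IDW is run in the extended space. The scaling factor, neighbor count, and data below are illustrative assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        # "Extension approach": treat time as one more coordinate, scaled by a
        # factor c that balances spatial and temporal distances, then run IDW
        # in the extended (x, y, c*t) space with a k-d tree for neighbors.
        def st_idw(obs_xyt, obs_val, query_xyt, c=1.0, k=8, power=2.0):
            scale = np.array([1.0, 1.0, c])
            tree = cKDTree(obs_xyt * scale)
            dist, idx = tree.query(query_xyt * scale, k=k)
            dist = np.maximum(dist, 1e-12)          # avoid division by zero
            w = dist ** (-power)
            return (w * obs_val[idx]).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(0)
        obs = rng.uniform(size=(5000, 3))           # (x, y, t) of monitor readings
        val = np.sin(6 * obs[:, 0]) + obs[:, 2]     # synthetic PM2.5-like field
        q = rng.uniform(size=(10, 3))
        print(st_idw(obs, val, q))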

  9. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    PubMed Central

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results. PMID:25192146

  10. Fast Time-Dependent Density Functional Theory Calculations of the X-ray Absorption Spectroscopy of Large Systems.

    PubMed

    Besley, Nicholas A

    2016-10-11

    The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals involving the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the excitation space of the absorbing core orbitals and by exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60 and C70.

  11. The computational complexity of elliptic curve integer sub-decomposition (ISD) method

    NASA Astrophysics Data System (ADS)

    Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza

    2014-07-01

    The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) serves as the foundation for a new procedure to compute the elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over a prime field Fp. Sub-decomposition of the values k1 and k2 that are not bounded by ±C√n yields new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases with the ISD method, which improves computational efficiency in comparison with the general method for computing scalar multiplications on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, which is determined by computing the cost of the operations involved, namely elliptic curve operations and finite field operations.
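
    As a rough sketch of the decomposition step that ISD builds on, the fragment below implements classical GLV rounding: given a short basis (a1, b1), (a2, b2) of the lattice of vectors (x, y) with x + yλ ≡ 0 (mod n), it writes k ≡ k1 + k2λ (mod n). The basis values are assumed inputs (finding them, e.g. via the extended Euclidean algorithm on (n, λ), is omitted), and the paper's further sub-decomposition of out-of-range k1, k2 is not reproduced.

        # GLV-style scalar decomposition by Babai rounding. Assumes the basis
        # satisfies a_i + b_i*lam = 0 (mod n) and a1*b2 - a2*b1 = n, so that
        # k1, k2 come out short; the congruence below holds for any such basis.
        def glv_decompose(k, n, lam, a1, b1, a2, b2):
            # round(b2*k/n) and round(-b1*k/n), in exact integer arithmetic
            c1 = (b2 * k + n // 2) // n
            c2 = (-b1 * k + n // 2) // n
            k1 = k - c1 * a1 - c2 * a2
            k2 = -c1 * b1 - c2 * b2
            assert (k1 + k2 * lam - k) % n == 0
            return k1, k2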

  12. Comparison Analysis of Recognition Algorithms of Forest-Cover Objects on Hyperspectral Air-Borne and Space-Borne Images

    NASA Astrophysics Data System (ADS)

    Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.

    2017-12-01

    The basic model for recognizing natural and anthropogenic objects from their spectral and textural features is described for the problem of processing hyperspectral airborne and spaceborne imagery. The model is based on improvements of the Bayesian classifier, a computational procedure for statistical decision making in machine-learning methods of pattern recognition. The principal component method is used to decompose the hyperspectral measurements on a basis of empirical orthogonal functions. Application examples of various modifications of the Bayesian classifier and of the Support Vector Machine method are shown. Examples are provided comparing these classifiers with a metric classifier that finds the minimal Euclidean distance between points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method, which is close to the nonparametric Bayesian classifier.
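
    The processing chain above can be sketched generically as PCA reduction followed by competing classifiers. This is only a schematic stand-in (synthetic data, scikit-learn models); it does not reproduce the paper's Bayesian-classifier modifications.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split

        # Reduce the spectral bands with PCA (empirical orthogonal functions),
        # then compare classifiers in the reduced feature space.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1200, 120))            # 120 spectral bands (synthetic)
        y = (X[:, :10].sum(axis=1) > 0).astype(int) # two forest-cover classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        pca = PCA(n_components=10).fit(X_tr)
        Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

        for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
            score = clf.fit(Z_tr, y_tr).score(Z_te, y_te)
            print(type(clf).__name__, f"accuracy = {score:.3f}")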

  13. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    PubMed

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and can thus better sparsify images than fixed transforms (e.g. wavelets and total variation). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm that alternately updates the representation coefficients, the orthogonal dictionary, and the missing k-space data. Moreover, both the sparsity level and the contribution of the sparse representation with the updated dictionary are gradually increased during the iterations to recover more detail, exploiting the progressively improving quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method while simultaneously improving reconstruction accuracy.
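
    The computational advantage of orthogonality can be seen in a toy alternating scheme: with an orthogonal dictionary, sparse coding reduces to transform-plus-threshold, and the dictionary update is an orthogonal Procrustes problem solved by one SVD. The sketch below uses synthetic patches and omits the k-space data-consistency step of the full reconstruction.

        import numpy as np

        # Sparse coding with an orthogonal dictionary D has the closed form
        # "analyze and threshold"; the dictionary update maximizes tr(D^T X C^T)
        # over orthogonal D, solved by the SVD of X C^T (Procrustes).
        def sparse_code(D, X, thresh):
            C = D.T @ X                              # analysis coefficients
            C[np.abs(C) < thresh] = 0.0              # hard thresholding
            return C

        def update_dictionary(X, C):
            U, _, Vt = np.linalg.svd(X @ C.T)        # orthogonal Procrustes
            return U @ Vt                            # nearest orthogonal matrix

        rng = np.random.default_rng(0)
        X = rng.normal(size=(64, 5000))              # 8x8 patches as columns
        D = np.linalg.qr(rng.normal(size=(64, 64)))[0]
        for _ in range(10):                          # alternate the two updates
            C = sparse_code(D, X, thresh=0.5)
            D = update_dictionary(X, C)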

  14. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental tool in numerous image processing and remote sensing applications. For example, unsupervised clustering is often used to obtain vegetation maps of an area of interest. This approach is useful when reliable training data are either scarce or expensive, and when relatively little a priori information about the data is available. Unsupervised clustering methods play a significant role in the pursuit of unsupervised classification. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points (or samples) in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute a set of cluster centers in d-space. Although there is no specific optimization criterion, the algorithm is similar in spirit to the well-known k-means clustering method, in which the objective is to minimize the average squared distance of each point to its nearest center, called the average distortion. One significant feature of ISOCLUS over k-means is that clusters may be merged or split, and so the final number of clusters may differ from the number k supplied as part of the input. This algorithm is described later in this paper. The ISOCLUS algorithm can run very slowly, particularly on large data sets. Given its wide use in remote sensing, its efficient computation is an important goal. We have developed a fast implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm, the filtering algorithm, by Kanungo et al. They showed that, by storing the data in a kd-tree, it was possible to significantly reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm. For technical reasons, which are explained later, it is necessary to make a minor modification to the ISOCLUS specification. We provide empirical evidence, on both synthetic and Landsat image data sets, that our algorithm's performance is essentially the same as that of ISOCLUS, but with significantly lower running times. We show that our algorithm runs from 3 to 30 times faster than a straightforward implementation of ISOCLUS. Our adaptation of the filtering algorithm involves the efficient computation of a number of cluster statistics that are needed for ISOCLUS, but not for k-means.

  15. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using the relative k-space distribution obtained with a low coherence interferometer. The proposed method utilizes an interferogram with a perfectly sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, a wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. The wavelength calibration is then completed by inverse conversion of the k-space into the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, using four and eight characteristic spectral peaks. The proposed method yielded reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method improved axial resolution owing to stronger suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
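
    The core idea, that zero crossings of a k-linear interferogram sample the pixel-to-k map on a uniform k grid, can be sketched as follows. The spectrometer mapping and the signal below are synthetic.

        import numpy as np

        # An interferogram that is perfectly sinusoidal in k crosses zero at
        # equally spaced k values, so the pixel positions of its zero crossings
        # sample the (unknown) pixel-to-k mapping on a uniform k grid.
        pix = np.arange(2048)
        k_true = 6.0 + 2.0 * (pix / 2048) + 0.3 * (pix / 2048) ** 2  # nonlinear map
        signal = np.cos(2 * np.pi * 40.0 * k_true)   # interferogram, sinusoidal in k

        s = np.sign(signal)
        zc = np.where(np.diff(s) != 0)[0]            # pixels of the zero crossings
        zc = zc + signal[zc] / (signal[zc] - signal[zc + 1])  # sub-pixel refinement

        # Crossings are uniform in k: assign them indices 0, 1, 2, ... (relative k)
        rel_k = np.arange(zc.size)
        pixel_to_relk = np.interp(pix, zc, rel_k)    # relative k at every pixel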

  16. Light scattering and absorption by space weathered planetary bodies: Novel numerical solution

    NASA Astrophysics Data System (ADS)

    Markkanen, Johannes; Väisänen, Timo; Penttilä, Antti; Muinonen, Karri

    2017-10-01

    Airless planetary bodies are exposed to space weathering, i.e., energetic electromagnetic and particle radiation, implantation and sputtering by solar wind particles, and micrometeorite bombardment. Space weathering is known to alter the physical and chemical composition of the surface of an airless body (C. Pieters et al., J. Geophys. Res. Planets, 121, 2016). From the light-scattering perspective, one of the key effects is the production of nanophase iron (npFe0) near the exposed surfaces (B. Hapke, J. Geophys. Res., 106, E5, 2001). At visible and ultraviolet wavelengths these particles have a strong electromagnetic response, which has a major impact on scattering and absorption features. Thus, to interpret spectroscopic observations of space-weathered asteroids, a model should treat the contributions of the npFe0 particles rigorously. Our numerical approach is based on hierarchical geometric optics (GO) and radiative transfer (RT). The modelled asteroid is assumed to consist of densely packed silicate grains with npFe0 inclusions. We employ our recently developed RT method for dense random media (K. Muinonen et al., Radio Science, submitted, 2017) to compute the contributions of the npFe0 particles embedded in silicate grains. The dense-media RT method requires computing interactions of the npFe0 particles in the volume element, for which we use the exact fast superposition T-matrix method (J. Markkanen and A.J. Yuffa, JQSRT 189, 2017). Reflections and refractions on the grain surface and propagation in the grain are addressed by the GO. Finally, the standard RT is applied to compute scattering by the entire asteroid. Our numerical method allows for a quantitative interpretation of spectroscopic observations of space-weathered asteroids. In addition, it may be an important step towards a more rigorous thermophysical model of asteroids when coupled with radiative and conductive heat transfer techniques. Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.

  17. Consequences of using nonlinear particle trajectories to compute spatial diffusion coefficients. [for charged particles in interplanetary space

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.

    1976-01-01

    The propagation of charged particles through interstellar and interplanetary space has often been described as a random process in which the particles are scattered by ambient electromagnetic turbulence. In general, this changes both the magnitude and direction of the particles' momentum. Some situations for which scattering in direction (pitch angle) is of primary interest were studied. A perturbed-orbit, resonant-scattering theory for pitch-angle diffusion in magnetostatic turbulence was slightly generalized and then used to compute the diffusion coefficient kappa for spatial propagation parallel to the mean magnetic field. All divergences inherent in the quasilinear formalism when the power spectrum of the fluctuating field falls off as k^(-Q) (Q < 2) were removed. Various methods of computing kappa were compared and limits on the validity of the theory discussed. For Q < 1 or 2, the various methods give roughly comparable values of kappa, but the use of perturbed orbits systematically results in a somewhat smaller kappa than is obtained from quasilinear theory.

  18. Improved Plane-Wave Ultrasound Beamforming by Incorporating Angular Weighting and Coherent Compounding in Fourier Domain.

    PubMed

    Chen, Chuan; Hendriks, Gijs A G M; van Sloun, Ruud J G; Hansen, Hendrik H G; de Korte, Chris L

    2018-05-01

    In this paper, a novel processing framework is introduced for Fourier-domain beamforming of plane-wave ultrasound data, which incorporates coherent compounding and angular weighting in the Fourier domain. Angular weighting implies spectral weighting by a 2-D steering-angle-dependent filtering template; the design of this filter is also optimized in this work. Two widely used Fourier-domain plane-wave ultrasound beamforming methods, i.e., Lu's f-k and Stolt's f-k methods, were integrated in the framework. To enable coherent compounding in the Fourier domain for the Stolt's f-k method, the original method was modified to align the spectra for different steering angles in k-space. The performance of the framework was compared for both methods, with and without angular weighting, using experimentally obtained data sets (phantom and in vivo) and data sets (phantom) provided by the IEEE IUS 2016 plane-wave beamforming challenge. The addition of angular weighting enhanced image contrast while preserving image resolution. This resulted in images of equal quality to those obtained by conventionally used delay-and-sum (DAS) beamforming with apodization and coherent compounding. Given the lower computational load of the proposed framework compared to DAS, it can therefore be concluded that, to our knowledge, it outperforms commonly used beamforming methods such as Stolt's f-k, Lu's f-k, and DAS.
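
    A generic sketch of Fourier-domain compounding with angular weighting is given below. The Gaussian angular template is an illustrative stand-in for the optimized filter designed in the paper, and the input spectra are placeholders.

        import numpy as np

        # Each steered acquisition's k-space spectrum is multiplied by a
        # steering-angle-dependent weighting template, the weighted spectra are
        # summed coherently, and one inverse FFT yields the compounded image.
        def compound(spectra, angles, kx, kz, sigma=0.15):
            acc = np.zeros_like(spectra[0])
            for S, th in zip(spectra, angles):
                # weight spectral energy near the expected steering direction
                W = np.exp(-((np.arctan2(kx, kz) - th) ** 2) / (2 * sigma ** 2))
                acc += W * S
            return np.fft.ifft2(np.fft.ifftshift(acc))

        nx = nz = 256
        kx, kz = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(nx)),
                             np.fft.fftshift(np.fft.fftfreq(nz)), indexing="xy")
        angles = np.deg2rad([-8, -4, 0, 4, 8])
        spectra = [np.ones((nz, nx), complex) for _ in angles]  # placeholder data
        img = compound(spectra, angles, kx, kz)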

  19. Analysis of genetic population structure in Acacia caven (Leguminosae, Mimosoideae), comparing one exploratory and two Bayesian-model-based methods.

    PubMed

    Pometti, Carolina L; Bessega, Cecilia F; Saidman, Beatriz O; Vilardi, Juan C

    2014-03-01

    Bayesian clustering as implemented in the STRUCTURE or GENELAND software is widely used to form genetic groups of populations or individuals. On the other hand, to satisfy the need for less computer-intensive approaches, multivariate analyses are specifically devoted to extracting information from large datasets. In this paper, we report the use of a dataset of AFLP markers from 15 sampling sites of Acacia caven to study the genetic structure and compare the consistency of three methods: STRUCTURE, GENELAND and DAPC. Of these methods, DAPC was the fastest and accurately inferred the number of populations K (K = 12 using the find.clusters option and K = 15 with a priori information on populations). GENELAND, in turn, provides information on the spatial membership probabilities of individuals or populations when coordinates are specified (K = 12). STRUCTURE also inferred the number of populations K and the membership probabilities of individuals based on ancestry, giving K = 11 without prior population information and K = 15 using the LOCPRIOR option. Finally, all three methods showed high consistency in estimating the population structure, inferring similar numbers of populations and similar membership probabilities of individuals to each group, with high correlation among methods.

  20. The use of computer models to predict temperature and smoke movement in high bay spaces

    NASA Technical Reports Server (NTRS)

    Notarianni, Kathy A.; Davis, William D.

    1993-01-01

    The Building and Fire Research Laboratory (BFRL) was given the opportunity to make measurements during fire calibration tests of the heat detection system in an aircraft hangar with a nominal 30.4 m (100 ft) ceiling height near Dallas, TX. Fire gas temperatures resulting from an approximately 8250 kW isopropyl alcohol pool fire were measured above the fire and along the ceiling. The results of the experiments were then compared to predictions from the computer fire models DETACT-QS, FPETOOL, and LAVENT. In section A of the analysis, DETACT-QS and FPETOOL significantly underpredicted the gas temperature. LAVENT, at the position below the ceiling corresponding to maximum temperature and velocity, provided better agreement with the data. For large spaces, hot gas transport time and an improved fire plume dynamics model should be incorporated into the computer fire model activation routines. A computational fluid dynamics (CFD) model, HARWELL FLOW3D, was then used to model the hot gas movement in the space. Reasonable agreement was found between the temperatures predicted from the CFD calculations and the temperatures measured in the aircraft hangar. In section B, an existing NASA high bay space was modeled using the CFD model. The NASA space was a clean room, 27.4 m (90 ft) high, with forced horizontal laminar flow. The purpose of this analysis was to determine how the existing fire detection devices would respond to various fire sizes in the space. The analysis was conducted for 32 MW, 400 kW, and 40 kW fires.

  1. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low-Altitude VLF Transmitter

    DTIC Science & Technology

    2007-08-31

    [Fragment from the report's list of figures] Fields versus latitude for 3 different grid spacings; low- and high-altitude fields produced by 10-kHz and 20-kHz sources, computed using the FD and TD codes. The agreement is excellent, validating the new FD code.

  2. Experimental and Computational Investigations of Phase Change Thermal Energy Storage Canisters

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir; Kerslake, Thomas; Sokolov, Pavel; Tolbert, Carol

    1996-01-01

    Two sets of experimental data, from ground and space experiments, are examined in this paper for cylindrical canisters with thermal energy storage applications. A 2-D computational model was developed for unsteady heat transfer (conduction and radiation) with phase change. The radiation heat transfer employed a finite volume method. The following was found in this study: (1) Ground experiments: convection heat transfer is as important as radiation heat transfer; radiation heat transfer in the liquid is more significant than that in the void; including the radiation heat transfer in the liquid resulted in lower temperatures (about 15 K) and increased the melting time (about 10 min); generally, most of the heat flow takes place in the radial direction. (2) Space experiments: radiation heat transfer in the void is more significant than that in the liquid (exactly the opposite of the ground experiments); accordingly, the location and size of the void affect the performance considerably; including the radiation heat transfer in the void resulted in lower temperatures (about 40 K).

  3. HFEM3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Chester J

    Software solves the three-dimensional Poisson equation div(k grad(u)) = f by the finite element method for the case when material properties, k, are distributed over a hierarchy of edges, facets and tetrahedra in the finite element mesh. The method is described in Weiss, CJ, Finite element analysis for model parameters distributed on a hierarchy of geometric simplices, Geophysics, v82, E155-167, doi:10.1190/GEO2017-0058.1 (2017). A standard finite element method for solving Poisson's equation is augmented by including in the 3D stiffness matrix additional 2D and 1D stiffness matrices representing the contributions from material properties associated with mesh faces and edges, respectively. The resulting linear system is solved iteratively using the conjugate gradient method with Jacobi preconditioning. To minimize computer storage during program execution, the linear solver computes matrix-vector contractions element-by-element over the mesh, without explicit storage of the global stiffness matrix. Program output is VTK-compliant for visualization and rendering by third-party software. The program uses dynamic memory allocation, so there are no hard limits on problem size beyond those imposed by the operating system and configuration on which the software is run. The dimension, N, of the finite element solution vector is constrained by the addressable space of 32- versus 64-bit operating systems. The total working space required by the program is approximately 13N double-precision words.
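
    The solver strategy, matrix-free conjugate gradients with Jacobi preconditioning, can be sketched as follows. The 1D Laplacian callback is a stand-in for the element-by-element stiffness contraction; it is not HFEM3D's actual operator.

        import numpy as np

        # Preconditioned CG where the matrix-vector product is a callback, so
        # the global stiffness matrix is never stored explicitly.
        def pcg(apply_A, b, diag_A, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - apply_A(x)
            z = r / diag_A                      # Jacobi preconditioning
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = r / diag_A
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 1000
        apply_A = lambda v: np.concatenate((
            [2 * v[0] - v[1]],
            2 * v[1:-1] - v[:-2] - v[2:],
            [2 * v[-1] - v[-2]]))               # SPD tridiagonal stand-in
        x = pcg(apply_A, np.ones(n), diag_A=np.full(n, 2.0))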

  4. pK(A) in proteins solving the Poisson-Boltzmann equation with finite elements.

    PubMed

    Sakalli, Ilkay; Knapp, Ernst-Walter

    2015-11-05

    Knowledge of pK(A) values is essential for understanding the function of proteins in living systems. We present a novel approach demonstrating that the finite element (FE) method of solving the linearized Poisson-Boltzmann equation (lPBE) can successfully be used to compute pK(A) values in proteins with high accuracy, as a possible replacement for the finite difference (FD) method. For this purpose, we implemented the software molecular Finite Element Solver (mFES) in the framework of the Karlsberg+ program to compute pK(A) values. This work focuses on a comparison between pK(A) computations obtained with the well-established FD method and with the newly developed FE method mFES, solving the lPBE using protein crystal structures without conformational changes. Accurate and coarse model systems are set up with mFES using a number of unknowns similar to that of the FD method. Our FE method delivers results for computations of pK(A) values and interaction energies of titratable groups that are comparable in accuracy. We introduce different thermodynamic cycles to evaluate pK(A) values, and we show for the FE method how different parameters influence the accuracy of computed pK(A) values. © 2015 Wiley Periodicals, Inc.

  5. Spacecraft Charging Standard Report.

    DTIC Science & Technology

    1980-09-30

    Sample potentials measured on the SSPM (with respect to S/C ground) include: Aluminized Kapton, -2.0 kV; Silvered Teflon, -4.0 kV; Astroquartz, -3.7 kV. ... and potential gradients on the space vehicle (candidate spacecraft locations for ESD tests) (the NASCAP computer code, when validated, will be useful ...). The coupling analysis should then determine, as a minimum: (1) electromagnetic fields generated interior to the space vehicle due to ESD; (2) induced ...

  6. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Du; Yang, Weitao

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to a significantly smaller active-space ppRPA matrix while keeping the error within 0.05 eV of the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational cost, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations, where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to direct diagonalization are even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^4), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of the excited-state electronic structure of large systems.

  7. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE PAGES

    Zhang, Du; Yang, Weitao

    2016-10-13

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to a significantly smaller active-space ppRPA matrix while keeping the error within 0.05 eV of the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational cost, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations, where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to direct diagonalization are even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^4), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of the excited-state electronic structure of large systems.

  8. Refining, revising, augmenting, compiling and developing computer assisted instruction K-12 aerospace materials for implementation in NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1988-01-01

    The NASA Spacelink is an electronic information service operated by the Marshall Space Flight Center. The Spacelink contains extensive NASA news and educational resources that can be accessed by a computer and modem. Updates and information are provided on: current NASA news; aeronautics; space exploration: before the Shuttle; space exploration: the Shuttle and beyond; NASA installations; NASA educational services; materials for classroom use; and space program spinoffs.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodam-Mohammadi, A.; Monshizadeh, M.

    We give a review of the existence of Taub-NUT/bolt solutions in Einstein-Gauss-Bonnet gravity with parameter α in six dimensions. Although the spacetime with base space S^2 × S^2 has a curvature singularity at r = N, which does not admit NUT solutions, we may proceed with the same computations as in the CP^2 case. The investigation of the thermodynamics of NUT/bolt solutions in six dimensions is carried out. We compute the finite action, mass, entropy, and temperature of the black hole. Then the validity of the first law of thermodynamics is demonstrated. It is shown that in NUT solutions all thermodynamic quantities for both base spaces are related to each other by substituting α^(CP^k) = [(k+1)/k] α^(S^2×S^2×⋯×S^2) with k factors of S^2. So, no further information is given by investigating NUT solutions in the S^2 × S^2 case. This relation is not true for bolt solutions. A generalization of the thermodynamics of black holes to arbitrary even dimensions is made using a new method based on the Gibbs-Duhem relation and the Gibbs free energy for NUT solutions. According to this method, the finite action in Einstein-Gauss-Bonnet gravity is obtained by considering the generalized finite action in Einstein gravity with an additional term as a function of α. Stability analysis is performed by investigating the heat capacity and entropy in the allowed ranges of α, λ, and N. For NUT solutions in d dimensions, there exists a stable phase in a narrow range of α. In six-dimensional bolt solutions, the metric is completely stable for B = S^2 × S^2 and completely unstable for the B = CP^2 case.

  10. ICE-COLA: towards fast and accurate synthetic galaxy catalogues optimizing a quasi-N-body method

    NASA Astrophysics Data System (ADS)

    Izard, Albert; Crocce, Martin; Fosalba, Pablo

    2016-07-01

    Next-generation galaxy surveys demand the development of massive ensembles of galaxy mocks to model the observables and their covariances, which is computationally prohibitive using N-body simulations. COmoving Lagrangian Acceleration (COLA) is a novel method designed to make this feasible by following an approximate dynamics, with up to three orders of magnitude speed-up compared to an exact N-body run. In this paper, we investigate the optimization of the code parameters in the compromise between computational cost and recovered accuracy in observables such as two-point clustering and halo abundance. We benchmark those observables against a state-of-the-art N-body run, the MICE Grand Challenge simulation. We find that using 40 time steps linearly spaced since z_I ≈ 20, and a force mesh resolution three times finer than that of the number of particles, yields a matter power spectrum within 1 per cent for k ≲ 1 h Mpc^-1 and a halo mass function within 5 per cent of those in the N-body run. In turn, the halo bias is accurate within 2 per cent for k ≲ 0.7 h Mpc^-1 whereas, in redshift space, the halo monopole and quadrupole are within 4 per cent for k ≲ 0.4 h Mpc^-1. These results hold for a broad range in redshift (0 < z < 1) and for all halo mass bins investigated (M > 10^12.5 h^-1 M⊙). To bring the accuracy in clustering to the one per cent level we study various methods that re-calibrate halo masses and/or velocities. We thus propose an optimized choice of COLA code parameters as a powerful tool to optimally exploit future galaxy surveys.

  11. Journal of Naval Science. Volume 2, Number 2. April 1976

    DTIC Science & Technology

    1976-04-01

    [Fragmentary OCR excerpt from the journal's front matter and reference list; recoverable citation: F. Garcia, Fault Isolation Computer Methods, NASA Contractor Report CR-1758, February 1971; P. A. Payne, D. R. Towill and K. J. Baker.]

  12. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function

    PubMed Central

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2015-01-01

    Purpose MRI-guided interventions demand high frame-rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Methods Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real-time to interactively de-blur spiral images. Results Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 minutes of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. Conclusions This real-time distortion correction framework will enable the use of these high frame-rate imaging methods for MRI-guided interventions. PMID:26114951
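
    A sketch of GIRF-based trajectory prediction: filter the nominal gradient waveform by the impulse response in the frequency domain and integrate the result. The GIRF model below (low-pass plus delay) is a synthetic stand-in for a measured calibration file, and the waveform is illustrative.

        import numpy as np

        # The realized gradient is the nominal waveform filtered by the measured
        # GIRF (a convolution, i.e. multiplication in the frequency domain); the
        # k-space trajectory is its scaled time integral.
        gamma = 42.577e6 * 2 * np.pi          # rad/s/T for 1H
        dt = 4e-6                             # gradient raster time, s
        t = np.arange(4096) * dt
        g_nom = np.sin(2 * np.pi * 1.2e3 * t) * 10e-3   # nominal waveform, T/m

        f = np.fft.fftfreq(t.size, dt)
        girf = np.exp(-(f / 30e3) ** 2) * np.exp(-2j * np.pi * f * 2e-6)

        g_pred = np.real(np.fft.ifft(np.fft.fft(g_nom) * girf))  # realized gradient
        k_pred = gamma * np.cumsum(g_pred) * dt                  # rad/m, one axis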

  13. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    PubMed

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

    Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
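
    Conceptually, GROG shifts each non-Cartesian sample onto the grid by applying a fractional power of a calibrated unit-shift GRAPPA operator. The sketch below uses a random stand-in operator to show the mechanics only; in practice the operator is estimated from multicoil calibration data for each axis.

        import numpy as np
        from scipy.linalg import fractional_matrix_power

        # A unit-shift GRAPPA operator G (n_coils x n_coils) raised to a
        # fractional power delta shifts one multicoil k-space sample by a
        # fraction delta of the grid spacing along one axis.
        n_coils = 8
        rng = np.random.default_rng(0)
        G = np.eye(n_coils) + 0.05 * rng.normal(size=(n_coils, n_coils))

        def grog_shift(sample, delta):
            """Shift one multicoil k-space sample by delta grid units."""
            G_frac = fractional_matrix_power(G, delta)   # G**delta
            return G_frac @ sample

        s = rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils)
        s_on_grid = grog_shift(s, delta=0.37)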

  14. Efficient grid-based techniques for density functional theory

    NASA Astrophysics Data System (ADS)

    Rodriguez-Hernandez, Juan Ignacio

    Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models of these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry---and the topic of this dissertation---is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the promolecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground-state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1, 2, 3, 6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
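
    A one-dimensional illustration of the conditional-distribution (inverse-CDF) mapping follows: uniform points in the unit interval are pushed through the inverse CDF of a model density, concentrating grid points where the density is large. The exponential density is an illustrative stand-in for a promolecular density; the sparse-grid construction itself is omitted.

        import numpy as np

        # Map a uniform grid through the inverse CDF of a model atomic density
        # rho(r) ~ exp(-alpha*r), so grid points cluster near the nucleus.
        alpha = 2.0                                   # decay of the model density
        u = (np.arange(1, 33) - 0.5) / 32             # uniform points in (0, 1)
        r = -np.log(1.0 - u) / alpha                  # inverse CDF of the density
        # r is dense near r = 0 and sparse in the tail, as desired.
        print(np.round(r[:5], 3), "...", np.round(r[-2:], 3))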

  15. Least-squares Legendre spectral element solutions to sound propagation problems.

    PubMed

    Lin, W H

    2001-02-01

    This paper presents a novel algorithm and numerical results for sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, SIAM, 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flow, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was used to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions, and results for the first problem agree very well with those predicted by other schemes.

  16. Two-dimensional Euler and Navier-Stokes Time accurate simulations of fan rotor flows

    NASA Technical Reports Server (NTRS)

    Boretti, A. A.

    1990-01-01

    Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent Euler and the compressible, turbulent, time-dependent Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model that includes low-Reynolds-number and compressibility effects. The unsteady aerodynamic component is obtained by superimposing inflow or outflow unsteadiness on the steady conditions through time-dependent boundary conditions. The integration in space is performed using a finite volume scheme, and the integration in time is performed using k-stage Runge-Kutta schemes (k = 2, 5). The numerical integration algorithm reduces the computational cost of an unsteady simulation involving high-frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time are required to advance the Euler equations on a computational grid of about 2,000 grid points over 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.

  17. An efficient shooting algorithm for Evans function calculations in large systems

    NASA Astrophysics Data System (ADS)

    Humpherys, Jeffrey; Zumbrun, Kevin

    2006-08-01

    In Evans function computations of the spectra of asymptotically constant-coefficient linear operators, a basic issue is the efficient and numerically stable computation of subspaces evolving according to the associated eigenvalue ODE. For small systems, a fast shooting algorithm may be obtained by representing subspaces as single exterior products [J.C. Alexander, R. Sachs, Linear instability of solitary waves of a Boussinesq-type equation: A computer assisted computation, Nonlinear World 2 (4) (1995) 471-507; L.Q. Brin, Numerical testing of the stability of viscous shock waves, Ph.D. Thesis, Indiana University, Bloomington, 1998; L.Q. Brin, Numerical testing of the stability of viscous shock waves, Math. Comp. 70 (235) (2001) 1071-1088; L.Q. Brin, K. Zumbrun, Analytically varying eigenvectors and the stability of viscous shock waves, in: Seventh Workshop on Partial Differential Equations, Part I, 2001, Rio de Janeiro, Mat. Contemp. 22 (2002) 19-32; T.J. Bridges, G. Derks, G. Gottwald, Stability and instability of solitary waves of the fifth-order KdV equation: A numerical framework, Physica D 172 (1-4) (2002) 190-216]. For large systems, however, the dimension of the exterior-product space quickly becomes prohibitive, growing as the binomial coefficient (n choose k), where n is the dimension of the system written as a first-order ODE and k (typically ~n/2) is the dimension of the subspace. We resolve this difficulty by the introduction of a simple polar coordinate algorithm representing “pure” (monomial) products as scalar multiples of orthonormal bases, for which the angular equation is a numerically optimized version of the continuous orthogonalization method of Drury-Davey [A. Davey, An automatic orthonormalization method for solving stiff boundary value problems, J. Comput. Phys. 51 (2) (1983) 343-356; L.O. Drury, Numerical solution of Orr-Sommerfeld-type equations, J. Comput. Phys. 37 (1) (1980) 133-139] and the radial equation is evaluable by quadrature. Notably, the polar-coordinate method preserves the important property of analyticity with respect to parameters.

  18. On a class of Newton-like methods for solving nonlinear equations

    NASA Astrophysics Data System (ADS)

    Argyros, Ioannis K.

    2009-06-01

    We provide a semilocal convergence analysis for a certain class of Newton-like methods considered also in [I.K. Argyros, A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach space, J. Math. Anal. Appl. 298 (2004) 374-397; I.K. Argyros, Computational theory of iterative methods, in: C.K. Chui, L. Wuytack (Eds.), Series: Studies in Computational Mathematics, vol. 15, Elsevier Publ. Co, New York, USA, 2007; J.E. Dennis, Toward a unified convergence theory for Newton-like methods, in: L.B. Rall (Ed.), Nonlinear Functional Analysis and Applications, Academic Press, New York, 1971], in order to approximate a locally unique solution of an equation in a Banach space. Using a combination of Lipschitz and center-Lipschitz conditions, instead of only Lipschitz conditions [F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84], we provide an analysis with the following advantages over the work in [F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84], which improved the works in [W.E. Bosarge, P.L. Falb, A multipoint method of third order, J. Optimiz. Theory Appl. 4 (1969) 156-166; W.E. Bosarge, P.L. Falb, Infinite dimensional multipoint methods and the solution of two point boundary value problems, Numer. Math. 14 (1970) 264-286; J.E. Dennis, On the Kantorovich hypothesis for Newton's method, SIAM J. Numer. Anal. 6 (3) (1969) 493-507; J.E. Dennis, Toward a unified convergence theory for Newton-like methods, in: L.B. Rall (Ed.), Nonlinear Functional Analysis and Applications, Academic Press, New York, 1971; H.J. Kornstaedt, Ein allgemeiner Konvergenzsatz für verschärfte Newton-Verfahren, in: ISNM, vol. 28, Birkhäuser Verlag, Basel and Stuttgart, 1975, pp. 53-69; P. Laasonen, Ein überquadratisch konvergenter iterativer Algorithmus, Ann. Acad. Sci. Fenn. Ser I 450 (1969) 1-10; F.A. Potra, On a modified secant method, L'analyse numérique et la théorie de l'approximation 8 (2) (1979) 203-214; F.A. Potra, An application of the induction method of V. Pták to the study of Regula Falsi, Aplikace Matematiky 26 (1981) 111-120; F.A. Potra, On the convergence of a class of Newton-like methods, in: Iterative Solution of Nonlinear Systems of Equations, in: Lecture Notes in Mathematics, vol. 953, Springer-Verlag, New York, 1982; F.A. Potra, V. Pták, Nondiscrete induction and double step secant method, Math. Scand. 46 (1980) 236-250; F.A. Potra, V. Pták, On a class of modified Newton processes, Numer. Funct. Anal. Optim. 2 (1) (1980) 107-120; F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84; J.W. Schmidt, Untere Fehlerschranken für Regula-Falsi Verfahren, Period. Math. Hungar. 9 (3) (1978) 241-247; J.W. Schmidt, H. Schwetlick, Ableitungsfreie Verfahren mit höherer Konvergenzgeschwindigkeit, Computing 3 (1968) 215-226; J.F. Traub, Iterative Methods for the Solution of Equations, Prentice Hall, Englewood Cliffs, New Jersey, 1964; M.A. Wolfe, Extended iterative methods for the solution of operator equations, Numer. Math. 31 (1978) 153-174]: larger convergence domain and weaker sufficient convergence conditions. Numerical examples further validating the results are also provided.
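
    As a concrete toy instance of a Newton-like method of the kind analyzed above (each iterate solves A(x_k) step = F(x_k), with A an approximation to the Fréchet derivative), the sketch below freezes the Jacobian at the starting point; the test system is invented for illustration and is not from the paper:

        import numpy as np

        def F(x):
            return np.array([x[0] ** 2 + x[1] - 3.0,
                             x[0] + x[1] ** 2 - 5.0])

        def J(x):
            return np.array([[2.0 * x[0], 1.0],
                             [1.0, 2.0 * x[1]]])

        x = np.array([1.0, 1.0])
        A = J(x)              # frozen Jacobian: a simple Newton-like method
        for k in range(50):
            step = np.linalg.solve(A, F(x))
            x = x - step
            if np.linalg.norm(step) < 1e-12:
                break
        print(k, x)           # converges to the root (1, 2)

    Semilocal results of the type above guarantee, from Lipschitz and center-Lipschitz bounds checked at the starting point, that such an iteration converges to a locally unique solution.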

  19. Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT

    PubMed Central

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2014-01-01

    Rationale and Objectives: Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as a risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods: Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results: We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under the ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions: Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354

  20. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer and Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python code.

  1. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.

  2. Computational method for determining n and k for a thin film from the measured reflectance, transmittance, and film thickness.

    PubMed

    Bennett, J M; Booty, M J

    1966-01-01

    A computational method of determining n and k for an evaporated film from the measured reflectance, transmittance, and film thickness has been programmed for an IBM 7094 computer. The method consists of modifications to the NOTS multilayer film program. The basic program computes normal incidence reflectance, transmittance, phase change on reflection, and other parameters from the optical constants and thicknesses of all materials. In the modification, n and k for the film are varied in a prescribed manner, and the computer picks from among these values one n and one k which yield reflectance and transmittance values almost equalling the measured values. Results are given for films of silicon and aluminum.
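
    A hedged sketch of the search described above: R and T of a single absorbing film on a transparent substrate are computed with the standard characteristic-matrix method at normal incidence, and (n, k) are scanned for the pair that best reproduces the measured values. The measured R, T, thickness, wavelength, and substrate index below are hypothetical, and in practice several (n, k) pairs may fit almost equally well:

        import numpy as np

        def film_RT(n, k, d_nm, lam_nm, n_sub=1.52, n0=1.0):
            N1 = n - 1j * k                        # complex film index
            delta = 2.0 * np.pi * N1 * d_nm / lam_nm
            # Characteristic matrix of the layer applied to the substrate fields.
            B = np.cos(delta) + 1j * np.sin(delta) * n_sub / N1
            C = 1j * N1 * np.sin(delta) + np.cos(delta) * n_sub
            r = (n0 * B - C) / (n0 * B + C)
            t = 2.0 * n0 / (n0 * B + C)
            return abs(r) ** 2, (n_sub / n0) * abs(t) ** 2

        R_meas, T_meas, d_nm, lam_nm = 0.35, 0.40, 80.0, 550.0  # hypothetical

        def misfit(nk):
            R, T = film_RT(nk[0], nk[1], d_nm, lam_nm)
            return (R - R_meas) ** 2 + (T - T_meas) ** 2

        grid = ((n, k) for n in np.arange(0.5, 6.0, 0.01)
                       for k in np.arange(0.0, 4.0, 0.01))
        print("best-fit n, k:", min(grid, key=misfit))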

  3. Analysis of genetic population structure in Acacia caven (Leguminosae, Mimosoideae), comparing one exploratory and two Bayesian-model-based methods

    PubMed Central

    Pometti, Carolina L.; Bessega, Cecilia F.; Saidman, Beatriz O.; Vilardi, Juan C.

    2014-01-01

    Bayesian clustering as implemented in STRUCTURE or GENELAND software is widely used to form genetic groups of populations or individuals. On the other hand, in order to satisfy the need for less computer-intensive approaches, multivariate analyses are specifically devoted to extracting information from large datasets. In this paper, we report the use of a dataset of AFLP markers belonging to 15 sampling sites of Acacia caven for studying the genetic structure and comparing the consistency of three methods: STRUCTURE, GENELAND and DAPC. Of these methods, DAPC was the fastest one and showed accuracy in inferring the K number of populations (K = 12 using the find.clusters option and K = 15 with a priori information of populations). GENELAND in turn, provides information on the area of membership probabilities for individuals or populations in the space, when coordinates are specified (K = 12). STRUCTURE also inferred the number of K populations and the membership probabilities of individuals based on ancestry, presenting the result K = 11 without prior information of populations and K = 15 using the LOCPRIOR option. Finally, in this work all three methods showed high consistency in estimating the population structure, inferring similar numbers of populations and the membership probabilities of individuals to each group, with a high correlation between each other. PMID:24688293

  4. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes such as tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. To investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulations, so parallel computing is important for handling the huge computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures: Lagrangian methods inherently suffer a workload-imbalance problem when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for SPH and DEM utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration; to perform flexible domain decomposition in space, the slice-grid algorithm is used (a sketch follows below). Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
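
    A minimal sketch of the slice-grid rebalancing idea along one axis, assuming per-particle costs are measured directly (the real implementation works from MPI execution-time imbalance and a Newton-like iteration; the costs and geometry here are synthetic):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 100.0, 1_000_000)   # particle positions on one axis
        cost = np.where(x < 10.0, 3.0, 1.0)      # e.g. boundary particles cost more

        n_ranks = 8
        order = np.argsort(x)
        cum = np.cumsum(cost[order])             # cumulative cost along the axis
        targets = np.arange(1, n_ranks) * cum[-1] / n_ranks
        boundaries = x[order][np.searchsorted(cum, targets)]
        print("slice boundaries:", np.round(boundaries, 2))

    Each rank then owns one slice; because slice widths adapt to the measured cost, the expensive region ends up split across several ranks instead of overloading one.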

  5. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
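
    A toy sketch of the negative-to-positive vector (NPV) construction described above, assuming a simple k-mer count embedding of sites into a vector space (the encoding, sequences, and window scoring are illustrative, not the paper's exact formulation):

        import numpy as np
        from itertools import product

        K = 2
        KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

        def embed(seq):
            # Unit-normalized k-mer count vector of a sequence window.
            v = np.zeros(len(KMERS))
            for i in range(len(seq) - K + 1):
                v[KMERS[seq[i:i + K]]] += 1.0
            return v / max(np.linalg.norm(v), 1e-12)

        positives = ["TTGACA", "TTGACT", "TTTACA"]   # toy known binding sites
        negatives = ["GCGCGC", "ATATAT", "CCGGAA"]   # toy background windows
        npv = (np.mean([embed(s) for s in positives], axis=0)
               - np.mean([embed(s) for s in negatives], axis=0))

        # Candidate windows are ranked by similarity to the query vector.
        for window in ["TTGACG", "GGGCCC"]:
            print(window, float(embed(window) @ npv))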

  6. Scaled nonuniform Fourier transform for image reconstruction in swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-02-01

    Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT), which requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
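
    The NDFT mentioned above can be evaluated directly as a matrix product, with no regridding of the nonuniform wavenumber samples; the sketch below simulates a nonlinear-in-time sweep and a single reflector (all numbers are illustrative):

        import numpy as np

        M = 1024
        t = np.linspace(0.0, 1.0, M)
        # Wavenumber samples, nonuniform because the sweep is nonlinear in time.
        k = 2.0 * np.pi * 2000.0 * (1.0 + 0.15 * t ** 2) * t

        z0 = 0.012                        # reflector depth (arbitrary units)
        signal = np.cos(k * z0)           # measured axial interferogram

        z = np.linspace(0.0, 0.025, 2000)
        A = np.exp(-1j * np.outer(z, k)) @ signal   # direct NDFT
        print("recovered depth:", z[np.argmax(np.abs(A))])

    A direct evaluation like this costs O(MN) per A-scan rather than O(M log M), which is part of what more refined nonuniform-transform implementations address.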

  7. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos and Craig and Bampton CMS methods give similar results, and a substantial amount of computer time is saved using the Lanczos method over the Craig and Bampton method. However, when computing a large number of Lanczos vectors, input/output time increased, raising the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.

  8. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  9. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  10. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space; through numerical approaches, it was 134% in 2-dimensional space and 143% in 3-dimensional space.
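
    A toy version of the scheme, assuming reference tags on a square grid and a position estimate taken as the centroid of all tags within the detection range (the grid, reader position, and use of a plain centroid are illustrative):

        import numpy as np

        s = 1.0                                       # tag spacing
        xs, ys = np.meshgrid(np.arange(20) * s, np.arange(20) * s)
        tags = np.column_stack([xs.ravel(), ys.ravel()])

        reader = np.array([7.3, 11.6])                # true reader position
        r = 1.34 * s                                  # 2-D optimum quoted above
        detected = tags[np.linalg.norm(tags - reader, axis=1) <= r]
        estimate = detected.mean(axis=0)
        print("estimate:", estimate,
              "error:", np.linalg.norm(estimate - reader))

    Too small a range risks detecting no tags; too large a range averages over distant tags and blurs the estimate, which is why an optimum such as 125% (1-D), 134% (2-D), or 143% (3-D) of the tag spacing exists.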

  11. Thermionic noise measurements for on-line dispenser cathode diagnostics for linear beam microwave tubes

    NASA Technical Reports Server (NTRS)

    Holland, C.; Brodie, I.

    1985-01-01

    A test stand has been set up to measure the current-fluctuation noise properties of B- and M-type dispenser cathodes in a typical TWT gun structure. Noise techniques were used to determine the work function distribution on the cathode surfaces. Significant differences between the B and M types, and significant changes in the work function distribution during activation and life, are found. In turn, knowledge of the expected work function can be used to accurately determine the cathode operating temperatures in a TWT structure. Noise measurements also demonstrate more sensitivity to space-charge effects than the Miram method. Full automation of the measurements and computations is now required to speed up data acquisition and reduction. The complete set of equations for the space-charge-limited diode was programmed so that, given four of the five measurable variables (J, J0, T, D, and V), the fifth could be computed. Using this program, we estimated that an rms fluctuation in the diode spacing of only about 10^-5 A, in the frequency range of 145 Hz to about 20 kHz, would account for the observed noise in a space-charge-limited diode with 1 mm spacing.
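
    A hedged sketch of the diode computation: the full treatment couples emission (J0, T) to the space-charge-limited condition, but even the ideal Child-Langmuir law J = (4/9) eps0 sqrt(2e/m) V^(3/2) / D^2 can be rearranged for whichever of J, V, or D is missing, which conveys the "given four variables, compute the fifth" idea:

        import numpy as np
        from scipy.constants import epsilon_0, e, m_e

        C = (4.0 / 9.0) * epsilon_0 * np.sqrt(2.0 * e / m_e)

        def child_langmuir(J=None, V=None, D=None):
            # Solve the ideal space-charge-limited diode law for the one
            # argument left as None.
            if J is None:
                return C * V ** 1.5 / D ** 2
            if V is None:
                return (J * D ** 2 / C) ** (2.0 / 3.0)
            if D is None:
                return np.sqrt(C * V ** 1.5 / J)
            raise ValueError("leave exactly one argument as None")

        J = child_langmuir(V=100.0, D=1e-3)   # A/m^2 for a 1 mm gap at 100 V
        print("J =", J, "V recovered:", child_langmuir(J=J, D=1e-3))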

  12. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  13. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    PubMed

    Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.

  14. Border preserving skin lesion segmentation

    NASA Astrophysics Data System (ADS)

    Kamali, Mostafa; Samei, Golnoosh

    2008-03-01

    Melanoma is a fatal cancer with a growing incidence rate. However, it can be cured if diagnosed at an early stage. The first step in detecting melanoma is the separation of the skin lesion from healthy skin. There are particular features associated with a malignant lesion whose successful detection relies upon accurately extracted borders. We propose a two-step approach. First, we apply the K-means clustering method to the 3D RGB color space, which extracts relatively accurate borders. In the second step we perform an extra refining step for detecting the fading area around some lesions as accurately as possible. Our method has a number of novelties. Firstly, as the clustering method is applied directly to the 3D color space, we do not overlook the dependencies between the different color channels. In addition, it is capable of extracting fine lesion borders up to the pixel level in spite of the difficulties associated with fading areas around the lesion. Performing clustering in different color spaces reveals that the 3D RGB color space is preferred. The application of the proposed algorithm to an extensive database of skin lesions shows that its performance is superior to that of existing methods both in terms of accuracy and computational complexity.
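
    A sketch of the first (clustering) step on a synthetic image, assuming scikit-learn is available; K-means is applied to the raw 3-D RGB vectors so inter-channel dependencies are kept, and the darker cluster is taken as the lesion (the refinement step for fading borders is omitted):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        img = np.full((128, 128, 3), [200.0, 150.0, 130.0])       # "skin"
        yy, xx = np.mgrid[0:128, 0:128]
        lesion = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
        img[lesion] = [90.0, 60.0, 50.0]                          # "lesion"
        img += rng.normal(0.0, 10.0, img.shape)                   # camera noise

        pixels = img.reshape(-1, 3)
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=0).fit_predict(pixels)

        # The lesion cluster is the one with the lower mean intensity.
        dark = int(pixels[labels == 1].mean() < pixels[labels == 0].mean())
        mask = (labels == dark).reshape(128, 128)
        print("lesion pixels found:", int(mask.sum()),
              "true:", int(lesion.sum()))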

  15. Predict and Analyze Protein Glycation Sites with the mRMR and IFS Methods

    PubMed Central

    Gu, Wenxiang; Zhang, Wenyi; Wang, Jianan

    2015-01-01

    Glycation is a nonenzymatic process in which proteins react with reducing sugar molecules. The identification of glycation sites in proteins may provide guidelines to understand the biological function of protein glycation. In this study, we developed a computational method to predict protein glycation sites using a support vector machine classifier. The experimental results showed a prediction accuracy of 85.51% and an overall MCC of 0.70. Feature analysis indicated that the composition of k-spaced amino acid pairs feature contributed the most to glycation site prediction. PMID:25961025
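
    A sketch of that winning feature, the composition of k-spaced amino acid pairs (CKSAAP): for each gap k, the frequency of every ordered residue pair separated by k positions is counted over a window around the candidate site. The window, maximum gap, and normalization below are illustrative choices:

        import numpy as np
        from itertools import product

        AA = "ACDEFGHIKLMNPQRSTVWY"
        PAIRS = {p: i for i, p in enumerate(product(AA, AA))}

        def cksaap(window, k_max=3):
            feats = []
            for k in range(k_max + 1):
                v = np.zeros(len(PAIRS))        # one 400-dim block per gap k
                n_pairs = len(window) - k - 1
                for i in range(n_pairs):
                    v[PAIRS[(window[i], window[i + k + 1])]] += 1.0
                feats.append(v / max(n_pairs, 1))
            return np.concatenate(feats)

        x = cksaap("MKVLAAGKSTLEQ")   # toy window centred on a lysine
        print(x.shape, x.sum())       # (1600,), one unit of mass per gap

    Vectors like this one would then be fed to the support vector machine classifier.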

  16. Chirality measures of α-amino acids.

    PubMed

    Jamróz, Michał H; Rode, Joanna E; Ostrowski, Sławomir; Lipiński, Piotr F J; Dobrowolski, Jan Cz

    2012-06-25

    To measure molecular chirality, the molecule is treated as a finite set of points in the Euclidean space R^3, supplemented by k properties p_1^(i), p_2^(i), ..., p_k^(i) assigned to the i-th atom, which constitute a point in the property space P^k. Chirality measures are defined as the distance between a molecule and its mirror image, minimized over all orientation-preserving isometries, in the Cartesian product space R^3 × P^k. Following this formalism, different chirality measures can be estimated by taking into consideration different sets of atomic properties. Here, for α-amino acid zwitterionic structures taken from the Cambridge Structural Database, and for all 1684 neutral conformers of 19 biogenic α-amino acid molecules (except glycine and cystine) found at the B3LYP/6-31G** level, chirality measures have been calculated with the CHIMEA program written for this project. It is demonstrated that there is a significant correlation between the measures determined for the α-amino acid zwitterions in crystals and the neutral forms in the gas phase. The performance of the studied chirality measures under changes of basis set and computational method was also checked. An exemplary quantitative structure–activity relationship (QSAR) application of the chirality measures is presented via an introductory model for the benchmark Cramer data set of steroidal ligands of the sex-hormone-binding globulin.
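
    A purely geometric version of such a measure (k = 0, no extra properties, atom correspondence held fixed) is the RMSD between a structure and its mirror image minimized over proper rotations, which the Kabsch (SVD) algorithm gives in closed form; the four-point "molecule" below is a toy example:

        import numpy as np

        def chirality_measure(X):
            X = X - X.mean(axis=0)                    # quotient out translations
            Y = X * np.array([-1.0, 1.0, 1.0])        # mirror through the yz-plane
            U, S, Vt = np.linalg.svd(X.T @ Y)         # Kabsch: SVD of covariance
            S[-1] *= np.sign(np.linalg.det(U @ Vt))   # keep the rotation proper
            rmsd2 = (np.sum(X ** 2) + np.sum(Y ** 2) - 2.0 * np.sum(S)) / len(X)
            return np.sqrt(max(rmsd2, 0.0))

        X = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                      [0.0, 1.2, 0.0], [0.0, 0.0, 1.0]])
        print("chirality measure:", chirality_measure(X))   # > 0: chiral

    Extending each point with k atomic properties amounts to the same kind of minimization with points living in the R^3 x P^k product space.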

  17. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417
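
    The core metric is easy to state concretely: a minimal discrete Hausdorff distance between two trajectories (toy 2-D curves here, 3N-dimensional configuration frames in PSA) can be computed directly from the pairwise frame distances:

        import numpy as np
        from scipy.spatial.distance import cdist

        def hausdorff(P, Q):
            D = cdist(P, Q)                  # all pairwise frame distances
            return max(D.min(axis=1).max(),  # farthest P-frame from Q
                       D.min(axis=0).max())  # farthest Q-frame from P

        t = np.linspace(0.0, 1.0, 200)
        P = np.column_stack([t, t ** 2])
        Q = np.column_stack([t, t ** 2 + 0.05 * np.sin(6.0 * np.pi * t)])
        # Result is on the order of the 0.05 perturbation amplitude.
        print("Hausdorff distance:", hausdorff(P, Q))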

  18. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single-variable examples, including an extension to vector spaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.

  19. Quantifying the accuracy of the tumor motion and area as a function of acceleration factor for the simulation of the dynamic keyhole magnetic resonance imaging method.

    PubMed

    Lee, Danny; Greer, Peter B; Pollock, Sean; Kim, Taeho; Keall, Paul

    2016-05-01

    The dynamic keyhole is a new MR image reconstruction method for thoracic and abdominal MR imaging. To date, this method has not been investigated with cancer patient magnetic resonance imaging (MRI) data. The goal of this study was to assess the dynamic keyhole method for the task of lung tumor localization using cine-MR images reconstructed in the presence of respiratory motion. The dynamic keyhole method utilizes a previously acquired library of peripheral k-space datasets at similar displacement and phase (where phase is simply used to determine whether the breathing is inhale-to-exhale or exhale-to-inhale) respiratory bins, in conjunction with the acquired central k-space datasets (keyhole). External respiratory signals drive the process of sorting, matching, and combining the two k-space streams for each respiratory bin, thereby achieving faster image acquisition without substantial motion artifacts. This study was the first to investigate the impact of k-space undersampling on lung tumor motion and area assessment across clinically available techniques (zero-filling and conventional keyhole). In this study, the dynamic keyhole, conventional keyhole, and zero-filling methods were compared to full k-space dataset acquisition by quantifying (1) the keyhole size required for central k-space datasets for constant image quality across sixty-four cine-MRI datasets from nine lung cancer patients, (2) the intensity difference between the original and reconstructed images at a constant keyhole size, and (3) the accuracy of tumor motion and area directly measured by tumor autocontouring. For constant image quality, the dynamic keyhole, conventional keyhole, and zero-filling methods required 22%, 34%, and 49% of the keyhole size (P < 0.0001), respectively, compared to the full k-space image acquisition method. Compared to the conventional keyhole and zero-filling images reconstructed with the keyhole size utilized in the dynamic keyhole method, the average intensity difference of the dynamic keyhole reconstructed images was minimal (P < 0.0001), resulting in an accuracy of tumor motion within 99.6% (P < 0.0001) and an accuracy of tumor area within 98.0% (P < 0.0001) for lung tumor monitoring applications. This study demonstrates that the dynamic keyhole method is a promising technique for clinical applications such as image-guided radiation therapy requiring MR monitoring of thoracic tumors. Based on the results from this study, the dynamic keyhole method could increase the imaging frequency by up to a factor of five compared with full k-space methods for real-time lung tumor MRI.
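
    A schematic of the reconstruction step, assuming a 2-D Cartesian acquisition in which 22% of the central phase-encode lines (the fraction reported above) are fresh and the periphery comes from the best-matching library bin; the stand-in image data are random:

        import numpy as np

        N = 256
        keyhole = int(0.22 * N)          # central lines acquired in real time
        lo = (N - keyhole) // 2

        def reconstruct(central_kspace, library_kspace):
            k = library_kspace.copy()                 # peripheral lines: library
            k[lo:lo + keyhole, :] = central_kspace[lo:lo + keyhole, :]
            return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

        rng = np.random.default_rng(2)
        library = np.fft.fftshift(np.fft.fft2(rng.random((N, N))))
        current = np.fft.fftshift(np.fft.fft2(rng.random((N, N))))
        print(reconstruct(current, library).shape)

    The matching of library entries by respiratory displacement and phase, driven by the external respiratory signal, is what keeps the borrowed periphery consistent with the fresh keyhole.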

  20. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The atmospheric radiation field has seen the development of more accurate and faster methods to take absorption in participating media into account. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires sufficiently accurate methods to compute cooling rates. Thanks to high-performance computing, a multi-spectral approach to the resolution of the Radiative Transfer Equation (RTE) is most often used. Nevertheless, the coupling of three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) on Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating emissivity functions to the linear absorption coefficient. In the case of the cooling-to-space approximation, this analytical expression gives very accurate results compared to a correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm: one-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method, and the three-dimensional RTE under the grey-medium assumption is then solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFog field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature, and gas concentrations.

  1. Economic analysis of the design and fabrication of a space qualified power system

    NASA Technical Reports Server (NTRS)

    Ruselowski, G.

    1980-01-01

    An economic analysis was performed to determine the cost of the design and fabrication of a low Earth orbit, 2 kW photovoltaic/battery, space qualified power system. A commercially available computer program called PRICE (programmed review of information for costing and evaluation) was used to conduct the analysis. The sensitivity of the various cost factors to the assumptions used is discussed. Total cost of the power system was found to be $2.46 million with the solar array accounting for 70.5%. Using the assumption that the prototype becomes the flight system, 77.3% of the total cost is associated with manufacturing. Results will be used to establish whether the cost of space qualified hardware can be reduced by the incorporation of commercial design, fabrication, and quality assurance methods.

  2. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
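
    A sketch of the strategy under stated simplifications: a hypothetical response g stands in for the structural code, a quadratic polynomial is fitted to the k perturbed runs, and, for brevity, the failure probability is then estimated by cheap sampling of the explicit surrogate rather than by a true FPI algorithm:

        import numpy as np

        def g(x1, x2):                    # stand-in for the expensive routine
            return 3.0 - 0.4 * x1 ** 2 - 0.3 * x1 * x2 + 0.1 * x2

        def basis(x):                     # quadratic polynomial basis
            return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                    x[:, 0] ** 2, x[:, 1] ** 2,
                                    x[:, 0] * x[:, 1]])

        rng = np.random.default_rng(3)
        k = 15
        X = rng.normal(0.0, 1.0, (k, 2))  # selected perturbed input values
        Y = g(X[:, 0], X[:, 1])           # k solutions for the response Y
        coef, *_ = np.linalg.lstsq(basis(X), Y, rcond=None)

        Xmc = rng.normal(0.0, 1.0, (1_000_000, 2))
        print("P(Y < 0) from surrogate:",
              np.mean(basis(Xmc) @ coef < 0.0))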

  3. GTM-Based QSAR Models and Their Applicability Domains.

    PubMed

    Gaspar, H A; Baskin, I I; Marcou, G; Horvath, D; Varnek, A

    2015-06-01

    In this paper we demonstrate that Generative Topographic Mapping (GTM), a machine learning method traditionally used for data visualisation, can be efficiently applied to QSAR modelling using probability distribution functions (PDF) computed in the latent 2-dimensional space. Several different scenarios of activity assessment were considered: (i) the "activity landscape" approach based on direct use of the PDF, (ii) QSAR models built on GTM-generated descriptors derived from the PDF, and (iii) the k-Nearest Neighbours approach in the 2D latent space. Benchmarking calculations were performed on five different datasets: stability constants of metal cations Ca(2+), Gd(3+) and Lu(3+) complexes with organic ligands in water, aqueous solubility, and activity of thrombin inhibitors. It has been shown that the performance of GTM-based regression models is similar to that obtained with some popular machine-learning methods (random forest, k-NN, M5P regression tree and PLS) and ISIDA fragment descriptors. By comparing GTM activity landscapes built both on predicted and on experimental activities, we may visually assess the model's performance and identify the areas in chemical space corresponding to reliable predictions. The applicability domain used in this work is based on data likelihood. Its application significantly improved the model performance for 4 out of 5 datasets. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Evolutionary growth for Space Station Freedom electrical power system

    NASA Technical Reports Server (NTRS)

    Marshall, Matthew Fisk; Mclallin, Kerry; Zernic, Mike

    1989-01-01

    Over an operational lifetime of at least 30 yr, Space Station Freedom will encounter increased Space Station user requirements and advancing technologies. The Space Station electrical power system is designed with the flexibility to accommodate these emerging technologies and expert systems and is being designed with the necessary software hooks and hardware scars to accommodate increased growth demand. The electrical power system is planned to grow from the initial 75 kW up to 300 kW. The Phase 1 station will utilize photovoltaic arrays to produce the electrical power; however, for growth to 300 kW, solar dynamic power modules will be utilized. Pairs of 25 kW solar dynamic power modules will be added to the station to reach the power growth level. The addition of solar dynamic power in the growth phase places constraints in the initial Space Station systems such as guidance, navigation, and control, external thermal, truss structural stiffness, computational capabilities and storage, which must be planned-in, in order to facilitate the addition of the solar dynamic modules.

  5. K-Shell Photoabsorption and Photoionisation of Trace Elements I. Isoelectronic Sequences With Electron Number 3< or = N < or = 11

    NASA Technical Reports Server (NTRS)

    Palmeri, P.; Quinet, P.; Mendoza, C.; Bautista, M. A.; Witthoeft, M. C.; Kallman, T. R.

    2016-01-01

    Context. With the recent launch of the Hitomi X-ray space observatory, K lines and edges of chemical elements with low cosmic abundances, namely F, Na, P, Cl, K, Sc, Ti, V, Cr, Mn, Co, Cu and Zn, can be resolved and used to determine important properties of supernova remnants, galaxy clusters, and accreting black holes and neutron stars. Aims. The second stage of the present ongoing project involves the computation of the accurate photoabsorption and photoionisation cross sections required to interpret the X-ray spectra of such trace elements. Methods. Depending on target complexity and computational tractability, ground-state cross sections are computed either with the close-coupling Breit-Pauli R-matrix method or with the AUTOSTRUCTURE atomic structure code in the isolated-resonance approximation. The intermediate-coupling scheme is used whenever possible. In order to determine a realistic K-edge behaviour for each species, both radiative and Auger dampings are taken into account, the latter being included in the R-matrix formalism by means of an optical potential. Results. Photoabsorption and total and partial photoionisation cross sections are reported for isoelectronic sequences with electron numbers 3 < or = N < or = 11. The Na sequence (N = 11) is used to estimate the contributions from configurations with a 2s hole (i.e. [2s]) and those containing 3d orbitals, which will be crucial when considering sequences with N > 11. Conclusions. It is found that the [2s] configurations must be included in the target representations of species with N > 11, as they contribute significantly to the monotonic background of the cross section between the L and K edges. Configurations with 3d orbitals are important in rendering an accurate L edge, but they can be practically neglected in the K-edge region.

  6. Fast, exact k-space sample density compensation for trajectories composed of rotationally symmetric segments, and the SNR-optimized image reconstruction from non-Cartesian samples.

    PubMed

    Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J

    2008-08-01

    A recently developed method for exact density compensation of nonuniformly arranged samples relies on the analytically known cross-correlations of the Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear-system-based density compensation approaches quickly become computationally demanding as the number of samples (i.e., image resolution) increases. Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlations method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz, so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated noniteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, up to a 10% reduction in reconstruction error, concurrently with an up to 30% increase in signal-to-noise ratio, is achieved compared to standard density compensation methods. (c) 2008 Wiley-Liss, Inc.
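
    The computational payoff of such structure is easiest to see in the scalar analogue: a circulant system is diagonalized by the DFT, so it can be solved with two FFTs instead of a dense factorization. The sketch below is this generic trick, not the paper's full block version:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 32
        d = np.minimum(np.arange(n), n - np.arange(n))
        c = np.exp(-d / 3.0)              # symmetric first column of C
        c[0] += 1.0                       # keep the system well conditioned
        b = rng.random(n)

        # C w = b with C circulant: Fourier-diagonalize, divide, invert.
        w = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

        C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
        print("residual:", np.linalg.norm(C @ w - b))

    In the trajectory setting the same idea applies blockwise: rotational symmetry of the interleaves makes the system circulant block-Toeplitz, so the Fourier transform decouples it into small independent blocks that can be solved in parallel.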

  7. Integrating Technology into K-12 School Design.

    ERIC Educational Resources Information Center

    Syvertsen, Ken

    2002-01-01

    Asserting that advanced technology in schools is no longer reserved solely for spaces such as computer labs, media centers, and libraries, discusses how technology integration affects school design, addressing areas such as installation, space and proportion, lighting, furniture, and flexibility and simplicity. (EV)

  8. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs

    PubMed Central

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410

  9. Quantifying the accuracy of the tumor motion and area as a function of acceleration factor for the simulation of the dynamic keyhole magnetic resonance imaging method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Danny; Pollock, Sean; Keall, Paul, E-mail: paul.keall@sydney.edu.au

    2016-05-15

    Purpose: The dynamic keyhole is a new MR image reconstruction method for thoracic and abdominal MR imaging. To date, this method has not been investigated with cancer patient magnetic resonance imaging (MRI) data. The goal of this study was to assess the dynamic keyhole method for the task of lung tumor localization using cine-MR images reconstructed in the presence of respiratory motion. Methods: The dynamic keyhole method utilizes a previously acquired library of peripheral k-space datasets at similar displacement and phase (where phase is simply used to determine whether the breathing is inhale-to-exhale or exhale-to-inhale) respiratory bins, in conjunction with the acquired central k-space datasets (keyhole). External respiratory signals drive the process of sorting, matching, and combining the two k-space streams for each respiratory bin, thereby achieving faster image acquisition without substantial motion artifacts. This study was the first to investigate the impact of k-space undersampling on lung tumor motion and area assessment across clinically available techniques (zero-filling and conventional keyhole). In this study, the dynamic keyhole, conventional keyhole, and zero-filling methods were compared to full k-space dataset acquisition by quantifying (1) the keyhole size required for central k-space datasets for constant image quality across sixty-four cine-MRI datasets from nine lung cancer patients, (2) the intensity difference between the original and reconstructed images at a constant keyhole size, and (3) the accuracy of tumor motion and area directly measured by tumor autocontouring. Results: For constant image quality, the dynamic keyhole, conventional keyhole, and zero-filling methods required 22%, 34%, and 49% of the keyhole size (P < 0.0001), respectively, compared to the full k-space image acquisition method. Compared to the conventional keyhole and zero-filling images reconstructed with the keyhole size utilized in the dynamic keyhole method, the average intensity difference of the dynamic keyhole reconstructed images was minimal (P < 0.0001), resulting in an accuracy of tumor motion within 99.6% (P < 0.0001) and an accuracy of tumor area within 98.0% (P < 0.0001) for lung tumor monitoring applications. Conclusions: This study demonstrates that the dynamic keyhole method is a promising technique for clinical applications such as image-guided radiation therapy requiring MR monitoring of thoracic tumors. Based on the results from this study, the dynamic keyhole method could increase the imaging frequency by up to a factor of five compared with full k-space methods for real-time lung tumor MRI.

  10. A Unique Delivery System to Rural Schools: The NMSU-Space Center Microcomputer Van Program.

    ERIC Educational Resources Information Center

    Amodeo, Luiza B.; And Others

    Collaboration between New Mexico State University's College of Education and three other entities has led to the computer experience microvan program, implemented in 1983, a unique system for bringing microcomputers into rural New Mexico K-12 classrooms. The International Space Hall of Fame Foundation provides the van, International Space Center…

  11. Computer-aided assessment of regional abdominal fat with food residue removal in CT.

    PubMed

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2013-11-01

    Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. Published by Elsevier Inc.

  12. SU-D-18C-01: A Novel 4D-MRI Technology Based On K-Space Retrospective Sorting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Yin, F; Cai, J

    2014-06-01

    Purpose: Current 4D-MRI techniques lack sufficient temporal/spatial resolution and consistent tumor contrast. To overcome these limitations, this study presents the development and initial evaluation of an entirely new framework for 4D-MRI based on k-space retrospective sorting. Methods: An important challenge of the proposed technique is to determine the number of repeated scans (NR) required to obtain sufficient k-space data for 4D-MRI. To do so, simulations using 29 cancer patients' respiratory profiles were performed to derive the relationship between data acquisition completeness (Cp) and NR, as well as the relationship between NR (at Cp = 95%) and the following factors: total slice number (NS), respiratory phase bin length (Lb), frame rate (fr), resolution (R), and image acquisition starting phase (P0). To evaluate our technique, a computer simulation study on a 4D digital human phantom (XCAT) was conducted with regular breathing (fr = 0.5 Hz; R = 256×256). A 2D echo planar imaging (EPI) MRI sequence was assumed to acquire raw k-space data, with the respiratory signal and acquisition time for each k-space data line recorded simultaneously. K-space data were re-sorted based on respiratory phase. To evaluate 4D-MRI image quality, tumor trajectories were measured and compared with the input signal, and the mean relative amplitude difference (D) and cross-correlation coefficient (CC) were calculated. Finally, a phase-sharing sliding window technique was applied to investigate the feasibility of generating ultra-fast 4D-MRI. Results: Cp increased with NR (Cp = 100×[1 - exp(-0.19×NR)] when NS = 30, Lb = 100%/6). NR (at Cp = 95%) was inversely proportional to Lb (r = 0.97), but independent of the other factors. 4D-MRI on XCAT demonstrated highly accurate motion information (D = 0.67%, CC = 0.996) with far fewer artifacts than image-based sorting 4D-MRI. Ultra-fast 4D-MRI with an apparent temporal resolution of 10 frames/second was reconstructed using the phase-sharing sliding window technique. Conclusions: A novel 4D-MRI technology based on k-space sorting has been successfully developed and evaluated on the digital phantom. The framework established here can be applied to a variety of MR sequences, showing great promise for developing the optimal 4D-MRI technique for many radiation therapy applications. NIH (1R21CA165384-01A1)
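
    The fitted completeness relation can be inverted to estimate the scan count; a worked check of Cp = 100×[1 - exp(-0.19×NR)] from the abstract:

        import math

        def completeness(nr):
            return 100.0 * (1.0 - math.exp(-0.19 * nr))

        def scans_needed(cp_target=95.0):
            # Invert Cp = 100*(1 - exp(-0.19*NR)) for NR.
            return math.ceil(-math.log(1.0 - cp_target / 100.0) / 0.19)

        print(scans_needed(95.0))   # 16 repeated scans reach Cp >= 95%
        print(completeness(16))     # ~95.2%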

  13. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
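
    For orientation, the integrals that 2-FAST accelerates have the generic form ξℓ(r) = (1/2π²) ∫ dk k² P(k) jℓ(kr). A brute-force quadrature, which the algorithm is designed to circumvent, can be sketched as follows (toy power spectrum, not a cosmological fit):

        import numpy as np
        from scipy.integrate import simpson
        from scipy.special import spherical_jn

        def xi_ell(r, ell, k, pk):
            """Direct quadrature of xi_ell(r); slow and error-prone at large
            k*r because of the oscillating Bessel function, which is exactly
            why FFTLog-based methods like 2-FAST are preferred."""
            integrand = k**2 * pk * spherical_jn(ell, k * r) / (2.0 * np.pi**2)
            return simpson(integrand, x=k)

        k = np.linspace(1e-4, 10.0, 200_000)
        pk = k / (1.0 + (k / 0.1)**3)      # hypothetical P(k) shape
        print(xi_ell(50.0, 0, k, pk))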

  14. Machine learning in APOGEE. Unsupervised spectral classification with K-means

    NASA Astrophysics Data System (ADS)

    Garcia-Dias, Rafael; Allende Prieto, Carlos; Sánchez Almeida, Jorge; Ordovás-Pascual, Ignacio

    2018-05-01

    Context. The volume of data generated by astronomical surveys is growing rapidly. Traditional analysis techniques in spectroscopy either demand intensive human interaction or are computationally expensive. In this scenario, machine learning, and unsupervised clustering algorithms in particular, offer interesting alternatives. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) offers a vast data set of near-infrared stellar spectra, which is perfect for testing such alternatives. Aims: Our research applies an unsupervised classification scheme based on K-means to the massive APOGEE data set. We explore whether the data are amenable to classification into discrete classes. Methods: We apply the K-means algorithm to 153 847 high-resolution spectra (R ≈ 22 500). We discuss the main virtues and weaknesses of the algorithm, as well as our choice of parameters. Results: We show that a classification based on normalised spectra captures the variations in stellar atmospheric parameters, chemical abundances, and rotational velocity, among other factors. The algorithm is able to separate the bulge and halo populations, and to distinguish dwarfs, sub-giants, RC, and RGB stars. However, a discrete classification in flux space does not result in a neat organisation in parameter space. Furthermore, the lack of obvious groups in flux space makes the results fairly sensitive to the initialisation, and disrupts the efficiency of commonly used methods for selecting the optimal number of clusters. Our classification is publicly available, including extensive online material associated with the APOGEE Data Release 12 (DR12). Conclusions: Our description of the APOGEE database can help greatly with the identification of specific types of targets for various applications. We find a lack of obvious groups in flux space, and identify limitations of the K-means algorithm in dealing with this kind of data. Full Tables B.1-B.4 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A98
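
    A minimal version of the clustering step, K-means on normalised spectra, can be sketched with scikit-learn (stand-in data; the survey's preprocessing, parameter choices, and scale are not reproduced):

        import numpy as np
        from sklearn.cluster import KMeans

        # spectra: (n_stars, n_pixels) continuum-normalised fluxes.
        rng = np.random.default_rng(0)
        spectra = rng.normal(1.0, 0.05, size=(500, 300))

        # K-means is sensitive to initialisation (as the paper notes), so
        # several restarts are run and the lowest-inertia solution is kept.
        km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(spectra)
        labels = km.labels_                 # one class per star
        centroids = km.cluster_centers_     # mean spectrum of each class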

  15. Rapid exploration of configuration space with diffusion-map-directed molecular dynamics.

    PubMed

    Zheng, Wenwei; Rohrdanz, Mary A; Clementi, Cecilia

    2013-10-24

    The gap between the time scale of interesting behavior in macromolecular systems and that which our computational resources can afford often limits molecular dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named diffusion-map-directed MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses a diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here, we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems, we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300 K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD.

  16. Rapid Exploration of Configuration Space with Diffusion Map-directed-Molecular Dynamics

    PubMed Central

    Zheng, Wenwei; Rohrdanz, Mary A.; Clementi, Cecilia

    2013-01-01

    The gap between the timescale of interesting behavior in macromolecular systems and that which our computational resources can afford often limits Molecular Dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named Diffusion Map-directed-MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses a diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300 K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD. PMID:23865517

  17. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined using contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.
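
    The underlying principle is the Banach fixed-point theorem: the iteration x_{n+1} = T(x_n) converges whenever T is a contraction, i.e. ||T(x) - T(y)|| <= q·||x - y|| with q < 1. A generic sketch that monitors the empirical contraction ratio (not the paper's contraction corrector itself):

        import numpy as np

        def fixed_point(T, x0, tol=1e-10, max_iter=1000):
            """Iterate x <- T(x), checking the empirical contraction ratio."""
            x = T(x0)
            step_prev = np.linalg.norm(x - x0)
            for _ in range(max_iter):
                x_next = T(x)
                step = np.linalg.norm(x_next - x)
                if step < tol:
                    return x_next
                if step_prev > 0 and step / step_prev >= 1.0:
                    raise RuntimeError("iteration is not contracting")
                x, step_prev = x_next, step
            raise RuntimeError("no convergence within max_iter")

        # T(x) = cos(x) is a contraction near its fixed point x* ~ 0.739.
        print(fixed_point(np.cos, 1.0))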

  18. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ: the first depends on the rate of convergence of the nonlinear solver, and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.

  19. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    NASA Astrophysics Data System (ADS)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. We then establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.

  20. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.

  1. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which more closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulus-2π nature of the hue color component makes its segmentation difficult. For example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the importance that chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper shows the importance the hue component plays in the segmentation of color images.
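
    One standard way around the modulus-2π problem highlighted above is to embed hue on the unit circle before clustering, so that 0 and 2π coincide. A hedged sketch (not the authors' exact procedure):

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_hue(hue, n_clusters=8):
            """K-means on hue (radians) embedded as (cos h, sin h), which
            removes the wrap-around discontinuity at 0/2*pi."""
            pts = np.column_stack([np.cos(hue), np.sin(hue)])
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pts)
            centers = np.mod(np.arctan2(km.cluster_centers_[:, 1],
                                        km.cluster_centers_[:, 0]), 2 * np.pi)
            return km.labels_, centers      # labels per pixel, centers as angles

        hue = np.random.default_rng(0).uniform(0, 2 * np.pi, size=10_000)
        labels, centers = cluster_hue(hue)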

  2. Space Station 20-kHz power management and distribution system

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.; Sundberg, Gale R.

    1986-01-01

    During the conceptual design phase a 20-kHz power distribution system was selected as the reference for the Space Station. The system is single-phase 400 VRMS, with a sinusoidal wave form. The initial user power level will be 75 kW with growth to 300 kW. The high-frequency system selection was based upon considerations of efficiency, weight, safety, ease of control, interface with computers, and ease of paralleling for growth. Each of these aspects will be discussed as well as the associated trade-offs involved. An advanced development program has been instituted to accelerate the maturation of the high-frequency system. Some technical aspects of the advanced development will be discussed.

  3. Space station 20-kHz power management and distribution system

    NASA Technical Reports Server (NTRS)

    Hansen, I. G.; Sundberg, G. R.

    1986-01-01

    During the conceptual design phase a 20-kHz power distribution system was selected as the reference for the space station. The system is single-phase 400 VRMS, with a sinusoidal wave form. The initial user power level will be 75 kW with growth to 300 kW. The high-frequency system selection was based upon considerations of efficiency, weight, safety, ease of control, interface with computers, and ease of paralleling for growth. Each of these aspects will be discussed as well as the associated trade-offs involved. An advanced development program has been instituted to accelerate the maturation of the high-frequency system. Some technical aspects of the advanced development will be discussed.

  4. Quantifying and correcting motion artifacts in MRI

    NASA Astrophysics Data System (ADS)

    Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard

    2006-08-01

    Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
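
    The Fourier shift theorem underlies both the artifact and its correction: an in-plane translation (dx, dy) multiplies k-space by a linear phase ramp, so once the shift is estimated the ramp can be conjugated away. A toy demonstration (not the alternating-strip estimator of the paper):

        import numpy as np

        def translate_in_kspace(kspace, dx, dy):
            """Apply the linear phase ramp equivalent to a (dx, dy) shift."""
            ny, nx = kspace.shape
            ky = np.fft.fftfreq(ny)[:, None]
            kx = np.fft.fftfreq(nx)[None, :]
            return kspace * np.exp(-2j * np.pi * (kx * dx + ky * dy))

        img = np.random.default_rng(0).random((128, 128))
        k_moved = translate_in_kspace(np.fft.fft2(img), dx=3.0, dy=-5.0)
        k_fixed = translate_in_kspace(k_moved, dx=-3.0, dy=5.0)  # undo shift
        print(np.allclose(np.fft.ifft2(k_fixed).real, img))      # True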

  5. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps, associated with simpler partition models, in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all the initial partitions. This fusion framework remains simple to implement, is fast and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied to the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.

  6. A 3-D turbulent flow analysis using finite elements with k-ɛ model

    NASA Astrophysics Data System (ADS)

    Okuda, H.; Yagawa, G.; Eguchi, Y.

    1989-03-01

    This paper describes the finite element turbulent flow analysis, which is suitable for three-dimensional large scale problems. The k-ɛ turbulence model as well as the conservation equations of mass and momentum are discretized in space using rather low order elements. Resulting coefficient matrices are evaluated by one-point quadrature in order to reduce the computational storage and the CPU cost. The time integration scheme based on the velocity correction method is employed to obtain steady state solutions. For the verification of this FEM program, two-dimensional plenum flow is simulated and compared with experiment. As the application to three-dimensional practical problems, the turbulent flows in the upper plenum of the fast breeder reactor are calculated for various boundary conditions.

  7. Extracting Communities from Complex Networks by the k-Dense Method

    NASA Astrophysics Data System (ADS)

    Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro

    To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough for uncovering a detailed community structure, and it produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial amount of computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely, blog trackbacks, word associations, and Wikipedia references, and demonstrated that the k-dense method can extract communities almost as efficiently as the k-core method, while the quality of the extracted communities is comparable to that obtained by the k-clique method.
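
    The coarser k-core step is a one-liner with networkx, and a k-dense community can be obtained by iterative pruning. The sketch below assumes the k-dense condition means every linked pair inside the community shares at least k-2 common neighbors (our reading of the definition; the pruning is the same as for a k-truss):

        import networkx as nx

        def k_dense_subgraph(G, k):
            """Prune edges whose endpoints share fewer than k-2 neighbors,
            repeating until stable; what remains are k-dense communities."""
            H = G.copy()
            changed = True
            while changed:
                changed = False
                for u, v in list(H.edges()):
                    if len(list(nx.common_neighbors(H, u, v))) < k - 2:
                        H.remove_edge(u, v)
                        changed = True
            H.remove_nodes_from(list(nx.isolates(H)))
            return H

        G = nx.karate_club_graph()
        core = nx.k_core(G, k=3)         # classical k-core, for comparison
        dense = k_dense_subgraph(G, k=4) # tighter, triangle-based communities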

  8. High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling

    PubMed Central

    Sung, Kyunghyun; Hargreaves, Brian A

    2013-01-01

    Purpose: To present and validate a new method that formalizes a direct link between the k-space and wavelet domains in order to apply separate undersampling and reconstruction for high- and low-spatial-frequency k-space data. Theory and Methods: High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions, while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results: Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion: The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540

  9. Technology for Kids' Desktops: How One School Brought Its Computers Out of the Lab and into Classrooms.

    ERIC Educational Resources Information Center

    Bozzone, Meg A.

    1997-01-01

    Purchasing custom-made desks with durable glass tops to house computers and double as student work space solved the problem of how to squeeze in additional classroom computers at Johnson Park Elementary School in Princeton, New Jersey. This article describes a K-5 grade school's efforts to overcome barriers to integrating technology. (PEN)

  10. Computations of Flow over a Hump Model Using Higher Order Method with Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2005-01-01

    Turbulent separated flow over a two-dimensional hump is computed by solving the RANS equations with the k-omega (SST) turbulence model for the baseline, steady suction, and oscillatory blowing/suction flow control cases. The flow equations and the turbulence model equations are solved using a fifth-order accurate weighted essentially nonoscillatory (WENO) scheme for space discretization and a third-order total variation diminishing (TVD) Runge-Kutta scheme for time integration. Qualitatively, the computed pressure distributions exhibit the same behavior as those observed in the experiments. The computed separation regions are much longer than those observed experimentally. However, the percentage reduction of the separation region in the steady suction case is close to what was measured in the experiment. The computations did not predict the expected reduction in the separation length in the oscillatory case. The predicted turbulent quantities are two to three times smaller than the measured values, pointing towards deficiencies in existing turbulence models when applied to strongly separated steady/unsteady flows.

  11. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R-Project code based on our proposed method to compute the reliability of linear and circular systems with a great number of components.
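
    For independent, identically distributed components, the linear case has a compact dynamic-programming evaluation (the system fails iff at least k consecutive components fail). A sketch, not the authors' method or their R code:

        def linear_consecutive_reliability(n, k, p):
            """Reliability of a linear consecutive k-out-of-n:F system.

            p is the per-component working probability (i.i.d.). The DP state
            is the current run of consecutive failed components (0..k-1);
            reaching a run of k means system failure.
            """
            q = 1.0 - p
            state = [1.0] + [0.0] * (k - 1)
            for _ in range(n):
                nxt = [0.0] * k
                for run, prob in enumerate(state):
                    nxt[0] += prob * p            # works: run resets to 0
                    if run + 1 < k:
                        nxt[run + 1] += prob * q  # fails, run still below k
                    # run + 1 == k is system failure: probability dropped
                state = nxt
            return sum(state)

        print(linear_consecutive_reliability(n=10, k=2, p=0.9))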

  12. Measurement of hydraulic characteristics of porous media used to grow plants in microgravity

    NASA Technical Reports Server (NTRS)

    Steinberg, Susan L.; Poritz, Darwin

    2005-01-01

    Understanding the effect of gravity on hydraulic properties of plant growth medium is essential for growing plants in space. The suitability of existing models to simulate hydraulic properties of porous medium is uncertain due to limited understanding of fundamental mechanisms controlling water and air transport in microgravity. The objective of this research was to characterize saturated and unsaturated hydraulic conductivity (K) of two particle-size distributions of baked ceramic aggregate using direct measurement techniques compatible with microgravity. Steady state (Method A) and instantaneous profile (Method B) measurement methods for K were used in a single experimental unit with horizontal flow through thin sections of porous medium, providing an earth-based analog to microgravity. Comparison between methods was conducted using a crossover experimental design compatible with the limited resources of space flight. Satiated (natural saturation) K ranged from 0.09 to 0.12 cm s⁻¹ and 0.5 to >1 cm s⁻¹ for the 0.25- to 1- and 1- to 2-mm media, respectively. The K at the interaggregate/intraaggregate transition was approximately 10⁻⁴ cm s⁻¹ for both particle-size distributions. Significant differences in log10(K) due to method and porous medium were less than one order of magnitude and were attributed to variability in air entrapment. The van Genuchten/Mualem parametric models provided an adequate prediction of K of the interaggregate pore space, using the residual water content for that pore space. The instantaneous profile method covers the range of water contents relevant to plant growth using fewer resources than Method A, all advantages for space flight where mass, volume, and astronaut time are limited.
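
    The van Genuchten/Mualem prediction referred to has the standard closed form K(Se) = Ks·Se^0.5·[1 - (1 - Se^(1/m))^m]², with m = 1 - 1/n and Se the effective saturation. A sketch with illustrative parameter values (not those fitted to the ceramic aggregate):

        def mualem_vg_conductivity(theta, theta_r, theta_s, n, K_s):
            """van Genuchten/Mualem unsaturated hydraulic conductivity."""
            m = 1.0 - 1.0 / n
            Se = (theta - theta_r) / (theta_s - theta_r)  # effective saturation
            return K_s * Se**0.5 * (1.0 - (1.0 - Se**(1.0 / m))**m) ** 2

        # Illustrative parameters only, K_s in cm/s:
        print(mualem_vg_conductivity(theta=0.30, theta_r=0.05,
                                     theta_s=0.45, n=2.5, K_s=0.1))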

  13. Nonlinear Cascades of Surface Oceanic Geostrophic Kinetic Energy in the Frequency Domain

    DTIC Science & Technology

    2012-09-01

    Nonlinear cascades of kinetic energy in wavenumber k space for surface ocean geostrophic flows have been computed from satellite altimetry data of sea surface height (Scott...). ...0.65 kN, where kN corresponds to the Nyquist scale. The filter is applied to q̂1 and q̂2, the Fourier transforms of q1 and q2, at every time step.

  14. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.

    PubMed

    Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
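
    The second stage can be sketched with scikit-learn; the deep autoencoder stage is omitted here, and the aggregation (mean k-NN distance over detectors built on random subsets) is a simplified reading of the ensemble:

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def knn_ensemble_scores(Z, n_detectors=10, subset_frac=0.5, k=5, seed=0):
            """Anomaly score = average k-NN distance over detectors trained
            on random subsets of Z (e.g., DAE latent codes); higher = more
            anomalous."""
            rng = np.random.default_rng(seed)
            n = Z.shape[0]
            scores = np.zeros(n)
            for _ in range(n_detectors):
                idx = rng.choice(n, size=int(subset_frac * n), replace=False)
                nn = NearestNeighbors(n_neighbors=k).fit(Z[idx])
                dist, _ = nn.kneighbors(Z)
                scores += dist.mean(axis=1)
            return scores / n_detectors

        Z = np.random.default_rng(1).normal(size=(1000, 8))
        Z[0] += 10.0                                  # planted outlier
        print(np.argmax(knn_ensemble_scores(Z)))      # -> 0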

  15. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    PubMed Central

    Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity. PMID:29270197

  16. 3-D Electromagnetic field analysis of wireless power transfer system using K computer

    NASA Astrophysics Data System (ADS)

    Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi

    2018-05-01

    We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. It is shown that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.

  17. Optimizing pKa computation in proteins with pH adapted conformations.

    PubMed

    Kieseritzky, Gernot; Knapp, Ernst-Walter

    2008-05-15

    pKA values in proteins are determined by electrostatic energy computations using a small number of optimized protein conformations derived from crystal structures. In these protein conformations, hydrogen positions and geometries of salt bridges on the protein surface were determined self-consistently with the protonation pattern at three pH values (low, ambient, and high). Considering salt bridges at protein surfaces is most relevant, since they open at low and high pH. In the absence of these conformational changes, computed pKA(comp) values of acidic (basic) groups in salt bridges dramatically underestimate (overestimate) the experimental pKA(exp). The pKA(comp) for 15 different proteins with 185 known pKA(exp) yield an RMSD of 1.12, comparable with two other methods. One of these methods is fully empirical, with many adjustable parameters. The other is also based on electrostatic energy computations using many non-optimized side chain conformers, but employs larger dielectric constants at short charge-pair distances to diminish their electrostatic interactions. These empirical corrections, which account implicitly for additional conformational flexibility, were needed to describe the energetics of salt bridges appropriately; this is not needed in the present approach. The RMSD of the present approach improves if one considers only strongly shifted pKA(exp), in contrast to the other methods under these conditions. Our method allows interpreting pKA(comp) in terms of pH-dependent hydrogen bonding patterns and salt bridge geometries. A web service is provided to perform pKA computations. © 2007 Wiley-Liss, Inc.

  18. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.

  19. Evenly spaced Detrended Fluctuation Analysis: Selecting the number of points for the diffusion plot

    NASA Astrophysics Data System (ADS)

    Liddy, Joshua J.; Haddad, Jeffrey M.

    2018-02-01

    Detrended Fluctuation Analysis (DFA) has become a widely used tool to examine the correlation structure of a time series and has provided insights into neuromuscular health and disease states. As the popularity of DFA in the human behavioral sciences has grown, understanding its limitations and how to properly determine parameters is becoming increasingly important. DFA examines the correlation structure of variability in a time series by computing α, the slope of the log SD versus log n diffusion plot. When using the traditional DFA algorithm, the timescales, n, are often selected as the set of integers between a minimum and maximum length based on the number of data points in the time series. This produces values of n that are non-uniformly distributed on a logarithmic scale, which influences the estimation of α through a disproportionate weighting of the long-timescale regions of the diffusion plot. Recently, the evenly spaced DFA and evenly spaced average DFA algorithms were introduced. Both algorithms compute α by selecting k points for the diffusion plot based on the minimum and maximum timescales of interest, and they improve the consistency of α estimates for simulated fractional Gaussian noise and fractional Brownian motion time series. Two issues that remain unaddressed are (1) how to select k and (2) whether the evenly spaced DFA algorithms show similar benefits when assessing human behavioral data. We manipulated k and examined its effects on the accuracy, consistency, and confidence limits of α in simulated and experimental time series. We demonstrate that the accuracy and consistency of α are relatively unaffected by the selection of k. However, the confidence limits of α narrow as k increases, dramatically reducing measurement uncertainty for single trials. We provide guidelines for selecting k and discuss potential uses of the evenly spaced DFA algorithms when assessing human behavioral data.
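
    A compact DFA implementation shows where k enters: only k log-spaced window sizes are used for the diffusion plot rather than every integer n. A hedged sketch:

        import numpy as np

        def dfa_alpha(x, k=20, n_min=4, n_max=None):
            """Estimate the DFA exponent alpha from k log-spaced scales."""
            x = np.asarray(x, dtype=float)
            n_max = n_max or len(x) // 4
            y = np.cumsum(x - x.mean())                 # integrated series
            scales = np.unique(np.geomspace(n_min, n_max, k).astype(int))
            fluct = []
            for n in scales:
                n_win = len(y) // n
                segs = y[: n_win * n].reshape(n_win, n)
                t = np.arange(n)
                # Linear detrend per window, then RMS fluctuation:
                c = np.polynomial.polynomial.polyfit(t, segs.T, 1)
                trend = c[0][:, None] + c[1][:, None] * t
                fluct.append(np.sqrt(np.mean((segs - trend) ** 2)))
            return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

        x = np.random.default_rng(0).normal(size=4096)  # white noise
        print(dfa_alpha(x))                             # ~0.5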

  20. Development of equations to predict the influence of floor space on average daily gain, average daily feed intake and gain : feed ratio of finishing pigs.

    PubMed

    Flohr, J R; Dritz, S S; Tokach, M D; Woodworth, J C; DeRouchey, J M; Goodband, R D

    2018-05-01

    Floor space allowance for pigs has substantial effects on pig growth and welfare. Data from 30 papers examining the influence of floor space allowance on the growth of finishing pigs were used in a meta-analysis to develop alternative prediction equations for average daily gain (ADG), average daily feed intake (ADFI), and gain:feed ratio (G:F). Treatment means were compiled in a database that contained 30 papers for ADG and 28 papers for ADFI and G:F. The predictor variables evaluated were floor space (m²/pig), k (floor space/final BW^0.67), initial BW, final BW, feed space (pigs per feeder hole), water space (pigs per waterer), group size (pigs per pen), gender, floor type, and study length (d). Multivariable general linear mixed model regression equations were used. Floor space treatments within each experiment were the observational and experimental unit. The optimum equations to predict ADG, ADFI, and G:F were: ADG, g = 337.57 + (16 468 × k) - (237 350 × k^2) - (3.1209 × initial BW, kg) + (2.569 × final BW, kg) + (71.6918 × k × initial BW, kg); ADFI, g = 833.41 + (24 785 × k) - (388 998 × k^2) - (3.0027 × initial BW, kg) + (11.246 × final BW, kg) + (187.61 × k × initial BW, kg); G:F = predicted ADG/predicted ADFI. Overall, the meta-analysis indicates that BW is an important predictor of ADG and ADFI even after computing the constant coefficient k, which utilizes final BW in its calculation. This suggests that including initial and final BW improves the prediction over using k alone. In addition, the analysis indicated that the G:F of finishing pigs is influenced by floor space allowance, whereas individual studies have reached variable conclusions.
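
    A worked evaluation of the reported equations makes the units concrete; the inputs below are illustrative, not data from the paper:

        def predict_pig_growth(floor_space_m2, initial_bw, final_bw):
            """Evaluate the meta-analysis equations for ADG and ADFI (g/d)."""
            k = floor_space_m2 / final_bw**0.67
            adg = (337.57 + 16468 * k - 237350 * k**2
                   - 3.1209 * initial_bw + 2.569 * final_bw
                   + 71.6918 * k * initial_bw)
            adfi = (833.41 + 24785 * k - 388998 * k**2
                    - 3.0027 * initial_bw + 11.246 * final_bw
                    + 187.61 * k * initial_bw)
            return adg, adfi, adg / adfi    # G:F = predicted ADG / ADFI

        # Illustrative: 0.75 m2/pig, growth from 30 kg to 120 kg.
        adg, adfi, gf = predict_pig_growth(0.75, 30.0, 120.0)
        print(round(adg), round(adfi), round(gf, 3))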

  1. Numerical Simulations and Experimental Measurements of Scale-Model Horizontal Axis Hydrokinetic Turbines (HAHT) Arrays

    NASA Astrophysics Data System (ADS)

    Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto

    2014-11-01

    The performance, turbulent wake evolution, and interaction of multiple Horizontal Axis Hydrokinetic Turbines (HAHT) are analyzed in a 45:1 scale model setup. We combine experimental measurements with different RANS-based computational simulations that model the turbines with sliding-mesh, rotating reference frame, and blade element theory strategies. The influence of array spacing and Tip Speed Ratio on performance and wake velocity structure is investigated in three different array configurations: two coaxial turbines at different downstream spacings (5d to 14d); three coaxial turbines with 5d and 7d downstream spacing; and three turbines with lateral offset (0.5d) and downstream spacing (5d & 7d). Comparison with experimental measurements provides insights into the dynamics of HAHT arrays, and by extension into closely packed HAWT arrays. The experimental validation process also highlights the influence of the closure model used (k-ω SST and k-ɛ) and the flow Reynolds number (Re = 40,000 to 100,000) on the computational predictions of device performance and of the flow field inside the above-mentioned arrays, establishing the strengths and limitations of existing numerical models for use in industrially relevant settings (computational cost and time). Supported by DOE through the National Northwest Marine Renewable Energy Center (NNMREC).

  2. Nonlinear Optical Response of Polar Semiconductors in the Terahertz Range

    NASA Astrophysics Data System (ADS)

    Roman, Eric; Yates, Jonathan; Veithen, Marek; Vanderbilt, David; Souza, Ivo

    2006-03-01

    Using the Berry-phase finite-field method, we compute from first principles the recently measured infrared (IR) dispersion of the nonlinear susceptibility χ(2) in III-V zincblende semiconductors. At far-IR (terahertz) frequencies, in addition to the purely electronic response χ(2)∞, the total χ(2) depends on three other parameters, C1, C2, and C3, describing the contributions from ionic motion. They relate to the TO Raman polarizability and to the second-order displacement-induced dielectric polarization and forces, respectively. Contrary to a widely accepted model, but in agreement with the recent experiments on GaAs [1], we find that the contribution from mechanical anharmonicity dominates over electrical anharmonicity. By using Richardson extrapolation to evaluate the Berry phase in k-space by finite differences, we are able to improve the convergence of the nonlinear susceptibility from the usual O[(δk)^2] to O[(δk)^4], dramatically reducing the computational cost. [1] T. Dekorsy, V. A. Yakovlev, W. Seidel, M. Helm, and F. Keilmann, Phys. Rev. Lett. 90, 055508 (2003). [2] C. Flytzanis, Phys. Rev. B 6, 1264 (1972). [3] R. Umari and A. Pasquarello, Phys. Rev. B 68, 085114 (2003).

  3. Optical recognition of statistical patterns

    NASA Astrophysics Data System (ADS)

    Lee, S. H.

    1981-12-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.

  4. Optical recognition of statistical patterns

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1981-01-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.

  5. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function.

    PubMed

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2016-06-01

    MRI-guided interventions demand high frame rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 min of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. This real-time distortion correction framework will enable the use of these high frame rate imaging methods for MRI-guided interventions. Magn Reson Med 75:2278-2285, 2016. © 2015 Wiley Periodicals, Inc.
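
    The GIRF prediction itself is a linear-systems operation: the realized gradient is the nominal waveform convolved with the measured impulse response, and the k-space trajectory is its scaled time integral. A hedged one-axis sketch (normalization conventions vary between implementations):

        import numpy as np

        GAMMA = 42.577e6    # 1H gyromagnetic ratio, Hz/T

        def predict_trajectory(g_nominal, girf, dt):
            """Predict k(t) (1/m) for one axis from the nominal gradient
            waveform (T/m) and a discrete impulse response sampled at dt."""
            g_actual = np.convolve(g_nominal, girf)[: len(g_nominal)]
            return GAMMA * np.cumsum(g_actual) * dt

        dt = 4e-6                                  # 4 us gradient raster
        g = np.concatenate([np.linspace(0, 10e-3, 250), np.full(500, 10e-3)])
        girf = np.zeros(64); girf[0] = 1.0         # ideal system: delta GIRF
        k_pred = predict_trajectory(g, girf, dt)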

  6. Experimentation Using the Mir Station as a Space Laboratory

    DTIC Science & Technology

    1998-01-01

    Institute for Machine Building (TsNIIMASH), Korolev, Moscow Region, Russia; V. Teslenko and N. Shvets, Energia Space Corporation, Korolev, Moscow Region, Russia; J. A. Drakes, D. G. Swann, and W. K. McGregor, Sverdrup Technology, Inc. ... and plume computations. Excitation of the plume gas molecular electronic states by solar radiation, geocorona Lyman-alpha, and electronic impact

  7. Computer Analysis of Electromagnetic Field Exposure Hazard for Space Station Astronauts during Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Kelley, James S.; Panneton, Robert B.; Arndt, G. Dickey

    1995-01-01

    In order to estimate the RF radiation hazards to astronauts and electronic equipment due to various Space Station transmitters, the electric fields around the various Space Station antennas are computed using rigorous Computational Electromagnetics (CEM) techniques. The Method of Moments (MoM) was applied to the UHF and S-band low gain antennas. The Aperture Integration (AI) method and the Geometrical Theory of Diffraction (GTD) method were used to compute the electric field intensities for the S- and Ku-band high gain antennas. As a result of this study, the regions in which the electric fields exceed the specified exposure levels for the Extravehicular Mobility Unit (EMU) electronics equipment and Extravehicular Activity (EVA) astronauts are identified for the various Space Station transmitters.

  8. Fast and asymptotic computation of the fixation probability for Moran processes on graphs.

    PubMed

    Alcalde Cuesta, F; González Sequeiros, P; Lozano Rojo, Á

    2015-03-01

    Evolutionary dynamics has been classically studied for homogeneous populations, but there is now growing interest in the non-homogeneous case. One of the most important models was proposed in Lieberman et al. (2005), adapting to a weighted directed graph the process described in Moran (1958). The Markov chain associated with the graph can be modified by erasing all non-trivial loops in its state space, obtaining the so-called Embedded Markov chain (EMC). The fixation probability remains unchanged, but the expected time to absorption (fixation or extinction) is reduced. In this paper, we use this idea to compute asymptotically the average fixation probability for complete bipartite graphs K(n,m). To this end, we first review some recent results on evolutionary dynamics on graphs, trying to clarify some points. We also revisit the 'Star Theorem' proved in Lieberman et al. (2005) for the star graphs K(1,m). Theoretically, EMC techniques allow fast computation of the fixation probability, but in practice this is not always true. Thus, in the last part of the paper, we compare this algorithm with the standard Monte Carlo method for some kinds of complex networks. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
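
    The Monte Carlo baseline the comparison uses is easy to state: repeat the Moran birth-death process on the graph until fixation or extinction and average over trials. A hedged sketch for an unweighted graph:

        import random
        import networkx as nx

        def fixation_probability(G, r=2.0, trials=2000, seed=0):
            """Monte Carlo estimate of the average fixation probability of
            a single mutant with relative fitness r under the Moran process."""
            rng = random.Random(seed)
            nodes = list(G.nodes())
            fixed = 0
            for _ in range(trials):
                mutants = {rng.choice(nodes)}        # random initial mutant
                while 0 < len(mutants) < len(nodes):
                    weights = [r if v in mutants else 1.0 for v in nodes]
                    parent = rng.choices(nodes, weights=weights)[0]  # birth
                    victim = rng.choice(list(G.neighbors(parent)))   # death
                    if parent in mutants:
                        mutants.add(victim)
                    else:
                        mutants.discard(victim)
                fixed += len(mutants) == len(nodes)
            return fixed / trials

        print(fixation_probability(nx.complete_bipartite_graph(1, 5)))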

  9. K2 and K2*: efficient alignment-free sequence similarity measurement based on Kendall statistics.

    PubMed

    Lin, Jie; Adjeroh, Donald A; Jiang, Bing-Hua; Jiang, Yue

    2018-05-15

    Alignment-free sequence comparison methods can compute the pairwise similarity between a huge number of sequences much faster than sequence-alignment based methods. We propose a new non-parametric alignment-free sequence comparison method, called K2, based on the Kendall statistics. Compared to other state-of-the-art alignment-free comparison methods, K2 demonstrates competitive performance in generating the phylogenetic tree, in evaluating functionally related regulatory sequences, and in computing the edit distance (similarity/dissimilarity) between sequences. Furthermore, the K2 approach is much faster than the other methods. An improved method, K2*, is also proposed, which is able to determine the appropriate algorithmic parameter (length) automatically, without first considering different values. Comparative analysis with the state-of-the-art alignment-free sequence similarity methods demonstrates the superiority of the proposed approaches, especially with increasing sequence length or increasing dataset sizes. The K2 and K2* approaches are implemented in the R language as a package and are freely available for open access (http://community.wvu.edu/daadjeroh/projects/K2/K2_1.0.tar.gz). Contact: yueljiang@163.com. Supplementary data are available at Bioinformatics online.
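
    In the spirit of K2 (not its exact algorithm), a Kendall-statistics similarity can be computed between k-mer count profiles; scipy supplies the rank correlation:

        from itertools import product
        from collections import Counter
        from scipy.stats import kendalltau

        def kmer_counts(seq, k=3, alphabet="ACGT"):
            """Overlapping k-mer counts in a fixed k-mer order."""
            counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
            return [counts["".join(p)] for p in product(alphabet, repeat=k)]

        def kendall_similarity(s1, s2, k=3):
            """Alignment-free similarity: Kendall's tau of k-mer profiles."""
            tau, _ = kendalltau(kmer_counts(s1, k), kmer_counts(s2, k))
            return tau

        a = "ACGTACGTACGGATCGATCGTTAC"
        b = "ACGTACGAACGGATCGATCGTTAC"
        print(kendall_similarity(a, b, k=2))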

  10. Compilation and development of K-6 aerospace materials for implementation in NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1987-01-01

    Spacelink is an electronic information service to be operated by the Marshall Space Flight Center. It will provide NASA news and educational resources including software programs that can be accessed by anyone with a computer and modem. Spacelink is currently being installed and will soon begin service. It will provide daily updates of NASA programs, information about NASA educational services, manned space flight, unmanned space flight, aeronautics, NASA itself, lesson plans and activities, and space program spinoffs. Lesson plans and activities were extracted from existing NASA publications on aerospace activities for the elementary school. These materials were arranged into 206 documents which have been entered into the Spacelink program for use in grades K-6.

  11. Spectroscopic fingerprints of toroidal nuclear quantum delocalization via ab initio path integral simulations.

    PubMed

    Schütt, Ole; Sebastiani, Daniel

    2013-04-05

    We investigate the quantum-mechanical delocalization of hydrogen in rotational symmetric molecular systems. To this purpose, we perform ab initio path integral molecular dynamics simulations of a methanol molecule to characterize the quantum properties of hydrogen atoms in a representative system by means of their real-space and momentum-space densities. In particular, we compute the spherically averaged momentum distribution n(k) and the pseudoangular momentum distribution n(kθ). We interpret our results by comparing them to path integral samplings of a bare proton in an ideal torus potential. We find that the hydroxyl hydrogen exhibits a toroidal delocalization, which leads to characteristic fingerprints in the line shapes of the momentum distributions. We can describe these specific spectroscopic patterns quantitatively and compute their onset as a function of temperature and potential energy landscape. The delocalization patterns in the projected momentum distribution provide a promising computational tool to address the intriguing phenomenon of quantum delocalization in condensed matter and its spectroscopic characterization. As the momentum distribution n(k) is also accessible through Nuclear Compton Scattering experiments, our results will help to interpret and understand future measurements more thoroughly. Copyright © 2012 Wiley Periodicals, Inc.

  12. A new method in accelerating PROPELLER MRI.

    PubMed

    Li, Bing Keong; D'Arcy, Michael; Weber, Ewald; Crozier, Stuart

    2008-01-01

    In this work, a new method is proposed to accelerate the PROPELLER MRI operation. The proposed method uses a rotary phased-array coil and a new scheme for acquiring the k-space strips and preparing the complete k-space trajectory data set. It is numerically shown that, for a 12-strip PROPELLER MR brain imaging sequence, the operation time can be reduced four-fold with no apparent loss in image quality.
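
    PROPELLER covers k-space with rotated rectangular strips (blades); purely as an illustration of that geometry, a small numpy sketch generating hypothetical blade sample coordinates (all parameters are illustrative, not from the paper):

        import numpy as np

        def propeller_blades(n_blades=12, n_readout=128, n_lines=16):
            """Generate k-space sample coordinates for rotated PROPELLER blades.

            Each blade is a narrow Cartesian strip through the k-space center,
            rotated in pi/n_blades increments so the strips jointly cover a disc.
            """
            kx = np.linspace(-0.5, 0.5, n_readout)                        # readout axis
            ky = np.linspace(-0.5, 0.5, n_lines) * (n_lines / n_readout)  # strip width
            gx, gy = np.meshgrid(kx, ky)
            strip = np.stack([gx.ravel(), gy.ravel()])                    # 2 x N samples
            blades = []
            for i in range(n_blades):
                theta = i * np.pi / n_blades
                rot = np.array([[np.cos(theta), -np.sin(theta)],
                                [np.sin(theta),  np.cos(theta)]])
                blades.append(rot @ strip)
            return blades  # list of (2, N) coordinate arrays

        blades = propeller_blades()
        print(len(blades), blades[0].shape)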

  13. Inference in fuzzy rule bases with conflicting evidence

    NASA Technical Reports Server (NTRS)

    Koczy, Laszlo T.

    1992-01-01

    Inference based on fuzzy 'If ... then' rules has played a very important role since Zadeh proposed the Compositional Rule of Inference and, especially, since the first successful application presented by Mamdani. From the mid-1980's, when the 'fuzzy boom' started in Japan, numerous industrial applications appeared, all using simplified techniques because of the high computational complexity of the full compositional rule. Another feature is that antecedents in the rules are distributed densely in the input space, so the conclusion can be calculated by some weighted combination of the consequents of the matching (fired) rules. The CRI works in the following way: if R is a rule and A* is an observation, the conclusion is computed by B* = R o A* (o stands for the max-min composition). Algorithms implementing this idea directly have an exponential time complexity (the problem may be NP-hard), as the rules are relations in X x Y, a (k1 + k2)-dimensional space, if X is k1-dimensional and Y is k2-dimensional. The simplified techniques usually decompose the relation into k1 projections in X(sub i) and measure in some way the degree of similarity between observation and antecedent by some parameter of the overlapping. These parameters are aggregated to a single value in (0,1) which is applied as a resulting weight for the given rule. The projections of rules in dimensions Y(sub i) are weighted by these aggregated values and then combined in order to obtain a resulting conclusion separately in every dimension. This method is inapplicable to sparse rule bases, as there is no guarantee that an arbitrary observation matches any of the antecedents. In that case, the degree of similarity is 0 and all consequents are weighted by 0. Some considerations for such a situation are summarized in the next sections.
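
    As a toy illustration of the compositional rule of inference B* = R o A* with the max-min composition (discretized universes; all membership values are hypothetical):

        import numpy as np

        def max_min_composition(R, a_star):
            """Compositional rule of inference: B*(y) = max_x min(A*(x), R(x, y))."""
            # R: (nx, ny) fuzzy relation; a_star: (nx,) observation membership vector
            return np.max(np.minimum(a_star[:, None], R), axis=0)

        # Rule "if x is A then y is B" encoded as R(x, y) = min(A(x), B(y))
        A = np.array([0.0, 0.5, 1.0, 0.5, 0.0])      # antecedent membership
        B = np.array([0.2, 0.8, 1.0, 0.4])           # consequent membership
        R = np.minimum(A[:, None], B[None, :])

        a_obs = np.array([0.0, 0.3, 1.0, 0.3, 0.0])  # observation A*
        print(max_min_composition(R, a_obs))          # inferred conclusion B*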

  14. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted, dividing the entire study area into multiple subdomains and allocating each subdomain to a different computing node in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost of each computing node to minimize the total execution time and reduce the overall communication cost for the entire simulation. This research introduces three algorithms that optimize the allocation by considering spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning (a geometric sketch of the K-Means step follows below); 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
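
    A sketch of the geometric half of the K&K idea: plain K-Means clustering of grid-cell coordinates into subdomains. The load weights, communication costs, and Kernighan-Lin refinement of the actual algorithm are omitted, and all sizes are hypothetical:

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical 2D study area discretized into grid cells
        nx, ny = 40, 30
        xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
        cells = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

        # Partition the cells into one subdomain per computing node
        n_nodes = 8
        labels = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit_predict(cells)

        # Load-balance check: cells per subdomain should be roughly equal
        print(np.bincount(labels))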

  15. Asteroid orbital inversion using uniform phase-space sampling

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.

    2014-07-01

    We review statistical inverse methods for asteroid orbit computation from small numbers of astrometric observations and short observational time intervals. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space of the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) set out to resolve the long-standing challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows probabilistic assessments for, e.g., object classification and ephemeris computation, as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain containing most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure twice allows the computation of a difference between two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space, somewhat in contrast to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
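
    A minimal sketch of the mapping idea: a random-walk Metropolis sampler with a flat target inside a pre-defined χ^2 threshold, so that accepted samples become uniform over the acceptable region. The two-parameter chi2 function below is a hypothetical stand-in for the observed-minus-computed statistic of a real orbit model:

        import numpy as np

        rng = np.random.default_rng(0)

        def chi2(elements):
            """Hypothetical stand-in for the observed-minus-computed chi-square."""
            return np.sum(elements**2)

        def mcmc_map(x0, threshold=4.0, step=0.3, n_steps=20000):
            """Uniformly sample the region {x : chi2(x) <= threshold}."""
            x, samples = np.array(x0, float), []
            for _ in range(n_steps):
                prop = x + step * rng.standard_normal(x.size)
                if chi2(prop) <= threshold:   # flat target: accept iff inside region
                    x = prop
                samples.append(x.copy())      # rejected moves repeat the current state
            return np.array(samples)

        samples = mcmc_map(x0=[0.0, 0.0])
        print(samples.mean(axis=0), samples.std(axis=0))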

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dehghani, M.H.; Research Institute for Astrophysics and Astronomy of Maragha; Khodam-Mohammadi, A.

    First, we construct the Taub-NUT/bolt solutions of (2k+2)-dimensional Einstein-Maxwell gravity, when all the factor spaces of the 2k-dimensional base space B have positive curvature. These solutions depend on two extra parameters, other than the mass and the NUT charge: the electric charge q and the electric potential at infinity V. We investigate the existence of Taub-NUT solutions and find that, in addition to the two conditions of uncharged NUT solutions, there exist two extra conditions. These come from the regularity of the vector potential at r=N and the fact that the horizon at r=N should be the outer horizon of the NUT-charged black hole. We find that the NUT solutions in 2k+2 dimensions have no curvature singularity at r=N when the 2k-dimensional base space is chosen to be CP^{2k}. For bolt solutions, there exists an upper limit for the NUT parameter which decreases as the potential parameter increases. Second, we study the thermodynamics of these spacetimes. We compute the temperature, entropy, charge, electric potential, action, and mass of the black hole solutions, and find that these quantities satisfy the first law of thermodynamics. We perform a stability analysis by computing the heat capacity, and show that the NUT solutions are not thermally stable for even k, while there exists a stable phase for odd k, which becomes increasingly narrow with increasing dimensionality and wider with increasing V. We also study the phase behavior of the 4- and 6-dimensional bolt solutions in the canonical ensemble and find that these solutions have a stable phase, which becomes smaller as V increases.

  17. Two-spoke placement optimization under explicit specific absorption rate and power constraints in parallel transmission at ultra-high field.

    PubMed

    Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas

    2015-06-01

    The spokes method combined with parallel transmission is a promising technique to mitigate the B1(+) inhomogeneity at ultra-high field in 2D imaging. To date, however, the spokes placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these explicit constraints in the small tip angle regime. The problem is first considerably simplified by the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and, for all of them in parallel, simultaneously optimizes the RF subpulse weights and the k-space locations under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2*-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+, are included. The method involves algebraically combining seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than those of the often-used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results is included.
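
    As a toy illustration of coupling one equilibrium relation with a mass balance (a single O2 <-> 2O dissociation rather than the paper's eleven-species system; the equilibrium-constant values are hypothetical):

        import numpy as np
        from scipy.optimize import brentq

        def o_mole_fraction(Kp, p=1.0):
            """Solve O2 <-> 2O: x_O^2 * p / x_O2 = Kp with x_O + x_O2 = 1."""
            f = lambda x: x**2 * p / (1.0 - x) - Kp
            return brentq(f, 1e-12, 1.0 - 1e-12)   # f is monotone on (0, 1)

        for Kp in (1e-4, 1e-2, 1.0):   # hypothetical equilibrium constants
            print(Kp, o_mole_fraction(Kp))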

  19. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems, and it has altered watershed hydrological processes in time and space, especially in the world's large rivers. Hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and thus requires huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize in the space and time dimensions, calculating the natural features in order (by grid unit or sub-basin, based on the distributed hydrological model) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speed-up ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of distributed hydrological models with distributed data storage, in-memory databases, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility: it makes full use of the available computing and storage resources when those resources are limited, and computing efficiency improves linearly with the increase of computing resources. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.

  20. Hydrogen Ordering in Hexagonal Intermetallic AB5 Type Compounds

    NASA Astrophysics Data System (ADS)

    Sikora, W.; Kuna, A.

    2008-04-01

    Intermetallic compounds of the AB5 type (A = rare-earth atom, B = transition metal) are known to reversibly store large amounts of hydrogen and are therefore discussed in this work. It was shown that the alloy cycling stability can be significantly improved by employing the so-called non-stoichiometric compounds AB5+x, which is why analysis of the change of structure turned out to be interesting. A tendency for ordering of hydrogen atoms is one of the most intriguing problems for the unsaturated hydrides. The symmetry analysis method, in the frame of the theory of space groups and their representations, gives the opportunity to find all possible transformations of the parent structure. In this work the symmetry analysis method was applied to the AB5+x structure type (P6/mmm parent symmetry space group). All possible ordering types and accompanying atom displacements were investigated in positions 1a, 2c, 3g (fully occupied in stoichiometric compounds AB5), in positions 2e, 6l (where atom B can appear in non-stoichiometric compounds), and also 4h, 6m, 6k, 12n, 12o, which can be partly occupied by hydrogen as a result of hydride formation. An analysis was carried out of all possible structures of lower symmetry following from P6/mmm for wave vector k = (0, 0, 0). The way of obtaining the structure described by the P63mc space group with a doubled cell along the z-axis (wave vector k = (0, 0, 0.5)), as suggested in the work of Latroche et al., is also discussed by the symmetry analysis. The analysis was carried out with the computer program MODY. The program calculates the so-called basis vectors of irreducible representations of a given symmetry group, which can be used for calculation of possible ordering modes.

  1. Abstract Interpreters for Free

    NASA Astrophysics Data System (ADS)

    Might, Matthew

    In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.

  2. K-space reconstruction with anisotropic kernel support (KARAOKE) for ultrafast partially parallel imaging

    PubMed Central

    Miao, Jun; Wong, Wilbur C. K.; Narayan, Sreenath; Wilson, David L.

    2011-01-01

    Purpose: Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (NC). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI beyond R = NC. Methods: K-space data demonstrate an anisotropic pattern that is correlated with the object itself and with the asymmetry of the coil sensitivity profile, which is caused by coil placement and B1 inhomogeneity. From spatial analysis theory, reconstruction of such a pattern is best achieved by a signal-dependent, anisotropically shaped kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. Results: An MR phantom experiment showed that k-space anisotropy increases as a function of magnetic field strength. The authors tested the K-spAce Reconstruction with AnisOtropic KErnel support ("KARAOKE") algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, whereas GRAPPA failed at a high R near or exceeding NC. KARAOKE performed comparably to GRAPPA at low Rs. Conclusions: As a rule of thumb, KARAOKE reconstruction should be preferred for higher quality k-space reconstruction, particularly when PPI data are acquired at high Rs and/or high field strength. PMID:22047378
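
    A rough sketch of the kernel-extraction step as described: fit a bivariate Gaussian to the local k-space magnitude, then threshold it to obtain an anisotropic kernel support. The moment-based fit and the threshold value are illustrative assumptions, not the paper's exact procedure:

        import numpy as np

        def anisotropic_kernel(kspace_mag, threshold=0.5):
            """Fit a bivariate Gaussian to |k-space| via second moments and
            threshold it to obtain an anisotropic kernel support mask."""
            ny, nx = kspace_mag.shape
            y, x = np.mgrid[:ny, :nx]
            w = kspace_mag / kspace_mag.sum()              # normalized weights
            mx, my = (w * x).sum(), (w * y).sum()          # weighted centroid
            cxx = (w * (x - mx) ** 2).sum()
            cyy = (w * (y - my) ** 2).sum()
            cxy = (w * (x - mx) * (y - my)).sum()
            cov = np.array([[cxx, cxy], [cxy, cyy]])       # anisotropic covariance
            inv = np.linalg.inv(cov)
            d = np.stack([x - mx, y - my], axis=-1)
            g = np.exp(-0.5 * np.einsum("...i,ij,...j", d, inv, d))
            return g >= threshold * g.max()                # boolean kernel support

        mag = np.abs(np.fft.fftshift(np.fft.fft2(np.random.rand(32, 32))))
        print(anisotropic_kernel(mag).sum(), "kernel elements selected")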

  3. Extraction of guided wave dispersion curve in isotropic and anisotropic materials by Matrix Pencil method.

    PubMed

    Chang, C Y; Yuan, F G

    2018-05-16

    Guided wave dispersion curves in isotropic and anisotropic materials are extracted automatically from measured data by the Matrix Pencil (MP) method, investigating the k-t or x-ω domain with a broadband signal. A piezoelectric wafer emits a broadband excitation, a linear chirp signal, to generate guided waves in the plate. The propagating waves are measured at discrete locations along lines with a one-dimensional laser Doppler vibrometer (1-D LDV). The measurements are first Fourier transformed into either the wavenumber-time (k-t) domain or the space-frequency (x-ω) domain. The MP method is then employed to extract the dispersion curves explicitly associated with the different wave modes. In addition, the phase and group velocities are deduced from the relations between wavenumbers and frequencies. In this research, dispersion relations extracted for an aluminum plate by the MP method from the k-t or x-ω domain are demonstrated and compared with the two-dimensional Fourier transform (2-D FFT). Further experiments on a thicker aluminum plate, for higher modes, and on a composite plate are analyzed by the MP method. The extracted relations for the composite plate are confirmed by three-dimensional (3-D) theoretical curves computed numerically. The results show that the MP method is not only more accurate in distinguishing the dispersion curves for isotropic materials, but also agrees well with theoretical curves for anisotropic, laminated materials. Copyright © 2018 Elsevier B.V. All rights reserved.
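
    A compact one-dimensional illustration of the Matrix Pencil step: estimating wavenumbers from equally spaced spatial samples at a single frequency. The synthetic two-mode signal and the pencil parameter are assumptions for the demo, not values from the paper:

        import numpy as np

        def matrix_pencil_wavenumbers(y, dx, n_modes, L=None):
            """Estimate wavenumbers k_i from samples y[n] = sum_i a_i exp(1j*k_i*n*dx)."""
            N = len(y)
            L = L or N // 2                                       # pencil parameter
            Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel matrix
            Y0, Y1 = Y[:, :-1], Y[:, 1:]
            z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)        # signal poles
            z = z[np.argsort(-np.abs(z))][:n_modes]               # keep dominant poles
            return np.angle(z) / dx

        dx, n = 0.01, 64
        x = np.arange(n) * dx
        y = np.exp(1j * 150 * x) + 0.5 * np.exp(1j * 280 * x)  # two synthetic modes
        print(matrix_pencil_wavenumbers(y, dx, n_modes=2))      # ~ {150, 280}, order may vary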

  4. Efficiently computing exact geodesic loops within finite steps.

    PubMed

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm takes only O(k) space complexity and O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  5. Computer modeling of gastric parietal cell: significance of canalicular space, gland lumen, and variable canalicular [K+].

    PubMed

    Crothers, James M; Forte, John G; Machen, Terry E

    2016-05-01

    A computer model, constructed for evaluation of integrated functioning of cellular components involved in acid secretion by the gastric parietal cell, has provided new interpretations of older experimental evidence, showing the functional significance of a canalicular space separated from a mucosal bath by a gland lumen and also shedding light on basolateral Cl(-) transport. The model shows 1) changes in levels of parietal cell secretion (with stimulation or H-K-ATPase inhibitors) result mainly from changes in electrochemical driving forces for apical K(+) and Cl(-) efflux, as canalicular [K(+)] ([K(+)]can) increases or decreases with changes in apical H(+)/K(+) exchange rate; 2) H-K-ATPase inhibition in frog gastric mucosa would increase [K(+)]can similarly with low or high mucosal [K(+)], depolarizing apical membrane voltage similarly, so electrogenic H(+) pumping is not indicated by inhibition causing similar increase in transepithelial potential difference (Vt) with 4 and 80 mM mucosal K(+); 3) decreased H(+) secretion during strongly mucosal-positive voltage clamping is consistent with an electroneutral H-K-ATPase being inhibited by greatly decreased [K(+)]can (Michaelis-Menten mechanism); 4) slow initial change ("long time-constant transient") in current or Vt with clamping of Vt or current involves slow change in [K(+)]can; 5) the Na(+)-K(+)-2Cl(-) symporter (NKCC) is likely to have a significant role in Cl(-) influx, despite evidence that it is not necessary for acid secretion; and 6) relative contributions of Cl(-)/HCO3 (-) exchanger (AE2) and NKCC to Cl(-) influx would differ greatly between resting and stimulated states, possibly explaining reported differences in physiological characteristics of stimulated open-circuit Cl(-) secretion (≈H(+)) and resting short-circuit Cl(-) secretion (>H(+)). Copyright © 2016 the American Physiological Society.

  6. Optimal shield mass distribution for space radiation protection

    NASA Technical Reports Server (NTRS)

    Billings, M. P.

    1972-01-01

    Computational methods have been developed and successfully used for determining the optimum distribution of space radiation shielding on geometrically complex space vehicles. These methods have been incorporated in the computer program SWORD for dose evaluation in complex geometry and for iteratively calculating the optimum (minimum-mass) shield distribution satisfying multiple acute and protracted dose constraints associated with each of several body organs.

  7. An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From-Motion

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zhan, Z. Q.; Heipke, C.

    2017-05-01

    Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by, and improves upon, methods employing random k-d forests for this task. Specifically, we first derive features from the images, and a random k-d forest is then used to find the nearest neighbours in feature space, as sketched below. Subsequently, the degree of similarity between individual images, the image overlaps, and thus the images belonging to a common block are calculated as input to a structure-from-motion (sfm) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing time efficiency. Orientations and 3D reconstructions were successfully conducted with our overlap graphs by sfm. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPUs, respectively.
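
    A minimal sketch of the neighbour-search step (a single k-d tree rather than a forest; the random descriptors and the vote-count similarity are illustrative assumptions):

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        n_images, feats_per_image, dim = 20, 100, 64

        # Hypothetical descriptors, tagged with the image they came from
        desc = rng.standard_normal((n_images * feats_per_image, dim))
        owner = np.repeat(np.arange(n_images), feats_per_image)

        tree = cKDTree(desc)
        _, nn = tree.query(desc, k=2)   # k=2: the first hit is the point itself
        matches = nn[:, 1]

        # Vote matrix: images sharing many nearest-neighbour features likely overlap
        votes = np.zeros((n_images, n_images), int)
        np.add.at(votes, (owner, owner[matches]), 1)
        np.fill_diagonal(votes, 0)      # discard self-matches within an image
        print(votes.max(), "max cross-image votes")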

  8. Methylation of zebularine: a quantum mechanical study incorporating interactive 3D pdf graphs.

    PubMed

    Selvam, Lalitha; Vasilyev, Vladislav; Wang, Feng

    2009-08-20

    Methylation of a cytidine deaminase inhibitor, 1-(beta-D-ribofuranosyl)-2-pyrimidone (i.e., zebularine (zeb)), which produces 1-(beta-D-ribofuranosyl)-5-methyl-2-pyrimidinone (d5), has been investigated using density functional theory models. The optimized structures of zeb and d5 and the valence orbitals primarily responsible for the methylation in d5 are presented using state-of-the-art interactive (on a computer or online) three-dimensional (3D) graphics in a portable document format (pdf) file, 3D-PDF (http://www.web3d.org/x3d/vrml/). The facility to embed 3D molecular structures into pdf documents has been developed jointly at Swinburne University of Technology and the National Computational Infrastructure, the Australian National University. The methyl fragment in the base moiety shows little effect on the sugar puckering but apparently affects anisotropic properties, such as condensed Fukui functions. Binding energy spectra, both valence space and core space, are noticeably affected, in particular in the outer-valence space (e.g., IP < 20 eV). The methyl fragment delocalizes and diffuses into almost all valence space, but orbitals 8 (57a, IP = 12.57 eV), 18 (47a, IP = 14.70 eV), and 37 (28a, IP = 22.15 eV) are identified as fingerprints for the methyl fragment. In the inner shell, however, the impact of the methyl can be localized and identified by chemical shift. A small, global, red shift is found for the O-K, N-K and sugar C-K spectra, whereas the base C-K spectrum exhibits apparent methyl-related changes.

  9. Geometry of discrete quantum computing

    NASA Astrophysics Data System (ADS)

    Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr; Tai, Yu-Tsung

    2013-05-01

    Conventional quantum computing entails a geometry based on the description of an n-qubit state using 2^n infinite-precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we take physical law to be some kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields F_p and finite complexified fields F_{p^2} (based on primes p congruent to 3 (mod 4)) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (which is the complex projective space CP^{2^n - 1}) to an analogous discrete geometry in which the Hopf circle for any n is found to be a discrete set of p + 1 points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to DCP^{2^n - 1}, the discrete analogue of the complex projective space, which has p^{2^n - 1}(p - 1) prod_{k=1}^{n-1} (p^{2^k} + 1) irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field F_{p^2} have p^n (p - 1)^n unentangled states (the product of the tally for a single qubit) with purity 1, and p^{n+1} (p - 1)(p + 1)^{n-1} maximally entangled states with purity zero.
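
    The state counts quoted above are easy to tabulate; a small check script with the formulas copied from the abstract (the entangled-state formulas are meaningful for n >= 2):

        from math import prod

        def total_irreducible(p, n):
            """|DCP^(2^n - 1)| = p^(2^n - 1) * (p - 1) * prod_{k=1}^{n-1} (p^(2^k) + 1)."""
            return p ** (2**n - 1) * (p - 1) * prod(p ** (2**k) + 1 for k in range(1, n))

        def unentangled(p, n):
            """p^n * (p - 1)^n unentangled states (purity 1)."""
            return p**n * (p - 1) ** n

        def maximally_entangled(p, n):
            """p^(n + 1) * (p - 1) * (p + 1)^(n - 1) states with purity zero."""
            return p ** (n + 1) * (p - 1) * (p + 1) ** (n - 1)

        for p, n in [(3, 2), (7, 2), (3, 3)]:
            print(p, n, total_irreducible(p, n), unentangled(p, n), maximally_entangled(p, n))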

  10. The explicit computation of integration algorithms and first integrals for ordinary differential equations with polynomial coefficients using trees

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Grossman, Robert

    1992-01-01

    This note is concerned with the explicit symbolic computation of expressions involving differential operators and their actions on functions. The derivation of specialized numerical algorithms, the explicit symbolic computation of integrals of motion, and the explicit computation of normal forms for nonlinear systems all require such computations. More precisely, if R = k(x_1,...,x_N), where k = R or C, F denotes a differential operator with coefficients from R, and g is a member of R, we describe data structures and algorithms for efficiently computing the action F(g). The basic idea is to impose a multiplicative structure on the vector space whose basis is the set of finite rooted trees whose nodes are labeled with the coefficients of the differential operators. Cancellation of two trees with r + 1 nodes translates into cancellation of O(N^r) expressions involving the coefficient functions and their derivatives.

  11. SU-G-JeP1-15: Sliding Window Prior Data Assisted Compressed Sensing for MRI Lung Tumor Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, E; Wachowicz, K; Rathee, S

    Purpose: Prior Data Assisted Compressed Sensing (PDACS) is a partial k-space acquisition and reconstruction method for mobile tumour (i.e., lung) tracking using on-line MRI in radiotherapy. PDACS partially relies on prior data acquired at the beginning of dynamic scans, and is therefore susceptible to artifacts in longer-duration scans due to slow drifts in the MR signal. A novel sliding window strategy is presented to mitigate this effect. Methods: MRI acceleration is simulated by retrospective removal of data from the fully sampled sets. Six lung cancer patients were scanned (clinical 3T MRI) using a balanced steady-state free precession (bSSFP) sequence for 3 minutes at approximately 4 frames per second, for a total of 650 dynamics. PDACS acceleration is achieved by undersampling of k-space in a single pseudo-random pattern. Reconstruction iteratively minimizes the total variation while constraining the images to satisfy both the currently acquired data and the prior data in missing k-space. Our novel sliding window technique (SW-PDACS) uses a series of distinct pseudo-random under-sampling patterns of partial k-space, with the prior data drawn from a sliding window of the most recent data available. Under-sampled data simulating 2-5x acceleration are reconstructed using PDACS and SW-PDACS. Three quantitative metrics (artifact power, centroid error, and Dice's coefficient) are computed for comparison. Results: Quantitative metric values from all 6 patients are averaged in 3 bins, each containing approximately one minute of dynamic data. For the first-minute bin, PDACS and SW-PDACS give comparable results. A progressive decline in image quality metrics in bins 2 and 3 is observed for PDACS. No decline in image quality is observed for SW-PDACS. Conclusion: The novel approach presented (SW-PDACS) is more robust for accelerating longer-duration (>1 minute) dynamic MRI scans for tracking lung tumour motion using on-line MRI in radiotherapy. B.G. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
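
    A toy illustration of the retrospective-undersampling setup described, with zero-filled reconstruction as the baseline; the phantom, mask density, and acceleration factor are arbitrary:

        import numpy as np

        rng = np.random.default_rng(2)
        img = np.zeros((64, 64)); img[20:44, 24:40] = 1.0   # hypothetical phantom

        k_full = np.fft.fft2(img)                 # "fully sampled" k-space
        R = 3                                     # acceleration factor
        mask = rng.random(img.shape) < 1.0 / R    # pseudo-random undersampling pattern
        k_under = k_full * mask                   # retrospective removal of data

        zero_fill = np.abs(np.fft.ifft2(k_under)) # zero-filled reconstruction
        err = np.linalg.norm(zero_fill - img) / np.linalg.norm(img)
        print(f"relative reconstruction error: {err:.1%}")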

  12. Calculation reduction method for color digital holography and computer-generated hologram using color space conversion

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Nagahama, Yuki; Kakue, Takashi; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Ito, Tomoyoshi

    2014-02-01

    A calculation reduction method for color digital holography (DH) and computer-generated holograms (CGHs) using color space conversion is reported. Color DH and color CGHs are generally calculated in RGB space. We calculate color DH and CGHs in other color spaces, e.g., YCbCr color space, to accelerate the calculation. In YCbCr color space, an RGB image or RGB hologram is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). The human eye readily recognizes small differences in the luminance component but not in the chroma components. In this method, the luminance component is therefore sampled at full resolution and the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color DH and CGHs. We compute diffraction calculations from the components, and then convert the diffracted results in YCbCr color space back to RGB color space. The proposed method, which in theory can accelerate the calculation by up to a factor of 3, computed the color DH and CGHs more than twice as fast as the calculations in RGB color space.
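
    A minimal numpy sketch of the color-space step: BT.601 RGB-to-YCbCr conversion followed by 2x chroma down-sampling. The conversion matrix is the standard BT.601 one, not taken from the paper:

        import numpy as np

        # ITU-R BT.601 full-range RGB -> YCbCr conversion matrix
        M = np.array([[ 0.299,     0.587,     0.114],
                      [-0.168736, -0.331264,  0.5  ],
                      [ 0.5,      -0.418688, -0.081312]])

        def to_ycbcr(rgb):
            return np.tensordot(rgb, M.T, axes=1)

        rgb = np.random.rand(256, 256, 3)   # hypothetical color hologram input
        ycbcr = to_ycbcr(rgb)

        y  = ycbcr[..., 0]                  # luminance: keep full resolution
        cb = ycbcr[::2, ::2, 1]             # chroma: 2x down-sampled each axis
        cr = ycbcr[::2, ::2, 2]

        # Diffraction (FFT-based propagation) now runs on 1 + 2*(1/4) = 1.5 planes'
        # worth of samples instead of 3, roughly halving the work.
        print(y.shape, cb.shape, cr.shape)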

  13. The stability of quadratic-reciprocal functional equation

    NASA Astrophysics Data System (ADS)

    Song, Aimin; Song, Minwei

    2018-04-01

    A new quadratic-reciprocal functional equation, f((k+1)x + ky) + f((k+1)x - ky) = 2 f(x) f(y) [(k+1)^2 f(y) + k^2 f(x)] / [(k+1)^2 f(y) - k^2 f(x)]^2, is introduced. The Hyers-Ulam stability of quadratic-reciprocal functional equations is proved in Banach spaces using the direct method and the fixed point method, respectively.
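
    For orientation, f(x) = 1/x^2 satisfies this equation, which a quick numeric check confirms (the test points are arbitrary):

        f = lambda x: 1.0 / x**2

        def lhs(x, y, k):
            return f((k + 1) * x + k * y) + f((k + 1) * x - k * y)

        def rhs(x, y, k):
            num = 2 * f(x) * f(y) * ((k + 1) ** 2 * f(y) + k**2 * f(x))
            den = ((k + 1) ** 2 * f(y) - k**2 * f(x)) ** 2
            return num / den

        for x, y, k in [(1.3, 0.7, 2), (2.0, 3.0, 1), (0.5, 1.1, 3)]:
            print(abs(lhs(x, y, k) - rhs(x, y, k)))   # ~0 up to rounding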

  14. Computational methods and software systems for dynamics and control of large space structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.

    1990-01-01

    This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structure. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.

  15. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2015-01-01

    Over the last 10-15 years, significant advances have been made in information management, an increasing number of individuals are entering the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes nearly 150 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating the ESI activities among the space agencies.

  16. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2017-01-01

    Over the last 10-15 years, significant advances have been made in information management, an increasing number of individuals are entering the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes over 180 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating the ESI activities among the space agencies. The talk will present an overview of current efforts in ESI, the role that members of IEEE GRSS play, and recent developments in data preservation and provenance.

  17. 3D sensitivity encoded ellipsoidal MR spectroscopic imaging of gliomas at 3T

    PubMed Central

    Ozturk-Isik, Esin; Chen, Albert P.; Crane, Jason C.; Bian, Wei; Xu, Duan; Han, Eric T.; Chang, Susan M.; Vigneron, Daniel B.; Nelson, Sarah J.

    2010-01-01

    Purpose: The goal of this study was to implement time-efficient data acquisition and reconstruction methods for 3D magnetic resonance spectroscopic imaging (MRSI) of gliomas at a field strength of 3T using parallel imaging techniques. Methods: The point spread functions, signal-to-noise ratio (SNR), spatial resolution, metabolite intensity distributions, and Cho:NAA ratio of 3D ellipsoidal, 3D sensitivity encoding (SENSE), and 3D combined ellipsoidal and SENSE (e-SENSE) k-space sampling schemes were compared with conventional k-space data acquisition methods. Results: The 3D SENSE and e-SENSE methods resulted in spectral patterns similar to the conventional MRSI methods. The Cho:NAA ratios were highly correlated (P<.05 for SENSE and P<.001 for e-SENSE) with the ellipsoidal method, and all methods exhibited significantly different spectral patterns in tumor regions compared to normal-appearing white matter. The geometry factors ranged between 1.2 and 1.3 for both the SENSE and e-SENSE spectra. When corrected for these factors and for differences in data acquisition times, the empirical SNRs were similar to values expected on theoretical grounds. The effective spatial resolution of the SENSE spectra was estimated to be the same as for the corresponding fully sampled k-space data, while the spectra acquired with ellipsoidal and e-SENSE k-space samplings were estimated to have a 2.36-2.47-fold loss in spatial resolution due to the differences in their point spread functions. Conclusion: The 3D SENSE method retained the same spatial resolution as full k-space sampling but with a 4-fold reduction in scan time and an acquisition time of 9.28 min. The 3D e-SENSE method had a spatial resolution similar to the corresponding ellipsoidal sampling with a scan time of 4:36 min. Both parallel imaging methods provided clinically interpretable spectra with volumetric coverage and adequate SNR for evaluating Cho, Cr and NAA. PMID:19766422

  18. Removal of nuisance signals from limited and sparse 1H MRSI data using a union-of-subspaces model.

    PubMed

    Ma, Chao; Lam, Fan; Johnson, Curtis L; Liang, Zhi-Pei

    2016-02-01

    To remove nuisance signals (e.g., water and lipid signals) from 1H MRSI data collected from the brain with limited and/or sparse (k, t)-space coverage. A union-of-subspaces model is proposed for removing nuisance signals. The model exploits the partial separability of both the nuisance signals and the metabolite signal, and decomposes an MRSI dataset into several sets of generalized voxels that share the same spectral distributions. This model enables the estimation of the nuisance signals from an MRSI dataset that has limited and/or sparse (k, t)-space coverage. The proposed method has been evaluated using in vivo MRSI data. For conventional chemical shift imaging data with limited k-space coverage, the proposed method produced "lipid-free" spectra without lipid suppression during data acquisition at 130 ms echo time. For sparse (k, t)-space data acquired with conventional pulses for water and lipid suppression, the proposed method was also able to remove the remaining water and lipid signals with negligible residuals. Nuisance signals in 1H MRSI data reside in low-dimensional subspaces. This property can be utilized for estimation and removal of nuisance signals from 1H MRSI data even when they have limited and/or sparse coverage of (k, t)-space. The proposed method should prove useful especially for accelerated high-resolution 1H MRSI of the brain. © 2015 Wiley Periodicals, Inc.

  19. WE-G-17A-01: Improving Tracking Image Spatial Resolution for Onboard MR Image Guided Radiation Therapy Using the WHISKEE Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Y; Mutic, S; Du, D

    Purpose: To evaluate the feasibility of using the weighted hybrid iterative spiral k-space encoded estimation (WHISKEE) technique to improve the spatial resolution of tracking images for onboard MR image guided radiation therapy (MR-IGRT). Methods: MR tracking images of the abdomen and pelvis had been acquired from healthy volunteers using the ViewRay onboard MR-IGRT system (ViewRay Inc., Oakwood Village, OH) at a spatial resolution of 2.0 mm x 2.0 mm x 5.0 mm. The tracking MR images were acquired using the TrueFISP sequence. The temporal resolution had to be traded off to 2 frames per second (FPS) to achieve the 2.0 mm in-plane spatial resolution. All MR images were imported into the MATLAB software. K-space data were synthesized through the Fourier transform of the MR images. A mask was created to select k-space points that corresponded to the under-sampled spiral k-space trajectory with an acceleration (or undersampling) factor of 3. The mask was applied to the fully sampled k-space data to synthesize the undersampled k-space data. The WHISKEE method was applied to the synthesized undersampled k-space data to reconstruct tracking MR images at 6 FPS. As a comparison, the undersampled k-space data were also reconstructed using the zero-padding technique. The reconstructed images were compared to the original image. The relative reconstruction error was evaluated as the percentage of the norm of the difference image over the norm of the original image. Results: Compared to the zero-padding technique, the WHISKEE method was able to reconstruct MR images with better image quality. It significantly reduced the relative reconstruction error from 39.5% to 3.1% for the pelvis image and from 41.5% to 4.6% for the abdomen image at an acceleration factor of 3. Conclusion: We demonstrated that it was possible to use the WHISKEE method to expedite MR image acquisition for onboard MR-IGRT systems to achieve good spatial and temporal resolutions simultaneously. Y. Hu and O. Green receive travel reimbursement from ViewRay. S. Mutic has consulting and research agreements with ViewRay. Q. Zeng, R. Nana, J.L. Patrick, S. Shvartsman and J.F. Dempsey are ViewRay employees.

  1. Direct magnetic field estimation based on echo planar raw data.

    PubMed

    Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim

    2010-07-01

    Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities, which manifest themselves as artifacts in the images. Commonly used correction methods share the deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming the field inhomogeneity distribution B(0) does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison, and demonstrates the equivalence of the k-space and image-phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction, and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B(0) maps in phantoms and in vivo. The k-space filtering analysis presented in this work was demonstrated to be the most sensitive method for detecting field inhomogeneities.

  2. Computational and Experimental Study of Thermodynamics of the Reaction of Titania and Water at High Temperatures.

    PubMed

    Nguyen, Q N; Bauschlicher, C W; Myers, D L; Jacobson, N S; Opila, E J

    2017-12-14

    Gaseous titanium hydroxide and oxyhydroxide species were studied with quantum chemical methods. The results are used in conjunction with an experimental transpiration study of titanium dioxide (TiO2) in water-vapor-containing environments at elevated temperatures to provide a thermodynamic description of the Ti(OH)4(g) and TiO(OH)2(g) species. The geometries and harmonic vibrational frequencies of these species were computed using the coupled-cluster singles and doubles method with a perturbative correction for connected triple substitutions [CCSD(T)]. For the OH bending and rotation, B3LYP density functional theory was used to compute corrections to the harmonic approximations. These results were combined to determine the enthalpy of formation. Experimentally, the transpiration method was used with water contents from 0 to 76 mol % in oxygen or argon carrier gases for 20-250 h exposure times at 1473-1673 K. Results indicate that oxygen is not a key contributor to volatilization, and the primary reaction for volatilization in this temperature range is TiO2(s) + H2O(g) = TiO(OH)2(g). Data were analyzed with both the second and third law methods using the thermal functions derived from the theoretical calculations. The third law enthalpy of formation at 298.15 K for TiO(OH)2(g) was -838.9 ± 6.5 kJ/mol, which compares favorably to the theoretical value of -838.7 ± 25 kJ/mol. We recommend the experimentally derived third law enthalpy of formation at 298.15 K for TiO(OH)2, the computed entropy of 320.67 J/(mol·K), and the computed heat capacity [149.192 - 0.02539 T + 8.28697 x 10^-6 T^2 - 15614.05/T - 5.2182 x 10^-11/T^2] J/(mol·K), where T is the temperature in K.
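
    The recommended heat-capacity polynomial is straightforward to evaluate; a small helper with the coefficients copied from the abstract (units J/(mol·K)):

        def cp_TiO_OH2(T):
            """Heat capacity of TiO(OH)2(g) in J/(mol*K); T in kelvin."""
            return (149.192
                    - 0.02539 * T
                    + 8.28697e-6 * T**2
                    - 15614.05 / T
                    - 5.2182e-11 / T**2)

        for T in (298.15, 1473.0, 1673.0):
            print(T, round(cp_TiO_OH2(T), 2))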

  3. Analysis, Mining and Visualization Service at NCSA

    NASA Astrophysics Data System (ADS)

    Wilhelmson, R.; Cox, D.; Welge, M.

    2004-12-01

    NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven by workflow-based services. To this end, NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services. Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will initially be implemented on the new 512-processor shared-memory SGI systems recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., developing severe weather or earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.

  4. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    PubMed

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (k_H) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the k_H value of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and the dependent variable (the inverse GC peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates k_H with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate performance improvements of our design with an example measuring the k_H for naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the phase-ratio region in which the inverse GC response is linear (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open source software optDesignSlopeInt, an R package on CRAN. Copyright © 2016 Elsevier B.V. All rights reserved.
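
    As a concrete illustration of the basic PRV calculation itself (not the optimal-design machinery, which lives in optDesignSlopeInt), a hedged sketch: fit a line to inverse peak area versus phase ratio and take slope over intercept. The data values below are made-up placeholders, not data from the paper.

        # Sketch of the PRV estimate: k_H = slope / intercept of a linear
        # regression of 1/A (inverse GC peak area) on the gas/liquid volume ratio.
        import numpy as np

        phase_ratio = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # V_gas / V_liquid
        inv_area    = np.array([0.11, 0.14, 0.21, 0.33, 0.60])  # hypothetical 1/A

        slope, intercept = np.polyfit(phase_ratio, inv_area, 1)
        k_H = slope / intercept
        print(f"estimated k_H = {k_H:.3f}")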

  5. First-principles method for calculating the rate constants of internal-conversion and intersystem-crossing transitions.

    PubMed

    Valiev, R R; Cherepanov, V N; Baryshnikov, G V; Sundholm, D

    2018-02-28

    A method for calculating the rate constants for internal-conversion (k_IC) and intersystem-crossing (k_ISC) processes within the adiabatic and Franck-Condon (FC) approximations is proposed. The applicability of the method is demonstrated by calculation of k_IC and k_ISC for a set of organic and organometallic compounds with experimentally known spectroscopic properties. The studied molecules were pyrromethene-567 dye, psoralene, hetero[8]circulenes, free-base porphyrin, naphthalene, and larger polyacenes. We also studied fac-Alq3 and fac-Ir(ppy)3, which are important molecules in organic light emitting diodes (OLEDs). The excitation energies were calculated at the multi-configuration quasi-degenerate second-order perturbation theory (XMC-QDPT2) level, which is found to yield excitation energies in good agreement with experimental data. Spin-orbit coupling matrix elements, non-adiabatic coupling matrix elements, Huang-Rhys factors, and vibrational energies were calculated at the time-dependent density functional theory (TDDFT) and complete active space self-consistent field (CASSCF) levels. The computed fluorescence quantum yields for the pyrromethene-567 dye, psoralene, hetero[8]circulenes, fac-Alq3 and fac-Ir(ppy)3 agree well with experimental data, whereas for the free-base porphyrin, naphthalene, and the polyacenes, the obtained quantum yields significantly differ from the experimental values, because the FC and adiabatic approximations are not accurate for these molecules.

  6. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    NASA Technical Reports Server (NTRS)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin-Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked against experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer model is validated by comparing the computed results with the predictions of standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on heat transfer in the gas core reactors is examined for power densities of 100 W/cc, 500 W/cc and 1000 W/cc. The corresponding maximum temperatures are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, the typical design temperature required to achieve high efficiency in gas core reactors. The model is also used to predict the convective and radiative heat fluxes for the gas core reactors. The maximum heat flux occurs at the exit of the reactor core. The radiative heat flux increases with higher wall temperature, since it is strongly dependent on wall temperature. This study also found that at temperatures close to 3500 K the radiative heat flux is comparable with the convective heat flux in a uranium-fluoride-fueled gas core reactor.
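
    The Rosseland diffusion approximation mentioned above treats radiation in an optically thick gas as a conduction-like flux with an effective conductivity. A minimal sketch of that standard textbook relation (not the authors' code; the opacity value is a made-up placeholder):

        # Sketch: Rosseland diffusion approximation. The radiative flux behaves as
        # q_r = -k_r dT/dx with effective conductivity k_r = 16*sigma*T^3 / (3*kappa_R).
        SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

        def radiative_conductivity(T, kappa_R):
            """Effective radiative conductivity, W/(m*K), at temperature T (K).
            kappa_R is the Rosseland mean extinction coefficient (1/m), a placeholder here."""
            return 16.0 * SIGMA * T**3 / (3.0 * kappa_R)

        for T in (2150.0, 2750.0, 3550.0):  # peak temperatures quoted above
            print(T, radiative_conductivity(T, kappa_R=100.0))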

  7. Clumpak: a program for identifying clustering modes and packaging population structure inferences across K.

    PubMed

    Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay

    2015-09-01

    The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.

  8. Human-computer interface

    DOEpatents

    Anderson, Thomas G.

    2004-12-21

    The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
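
    A toy sketch of the force-feedback profile the patent describes (force ramping up as the locus of interaction nears a boundary, then perceptibly dropping once the boundary is traversed); the shape and constants are illustrative assumptions, not the patented implementation:

        # Sketch: resistance force versus signed distance d to a boundary.
        # d > 0: approaching the boundary from inside a control region; d <= 0: traversed.
        # Ramp width and maximum force are made-up illustrative values.

        def boundary_force(d, ramp=0.05, f_max=2.0):
            """Return feedback force magnitude for signed distance d (meters)."""
            if d <= 0.0:
                return 0.0                       # boundary traversed: force drops abruptly
            if d >= ramp:
                return 0.0                       # far from boundary: no resistance
            return f_max * (1.0 - d / ramp)      # linear ramp-up as the boundary nears

        for d in (0.10, 0.04, 0.01, -0.01):
            print(f"d = {d:+.2f} m -> force = {boundary_force(d):.2f} N")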

  9. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful to enable faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so it is expected that very fast parallel computers will become more and more prevalent in the near future. It is therefore important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computational load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of that layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication with adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations that connect the layers, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed that the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency can be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with a resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer took 45 minutes, which is more than two times faster than real time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer, considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
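
    Of the three communication types listed, the global one is the easiest to sketch: each rank computes its local CFL-limited time step and a min-allreduce picks the global one. A hedged mpi4py sketch (not the TUNAMI-N2 code itself; grid values are placeholders):

        # Sketch: global time-step selection under the CFL condition, i.e.
        # communication type (3) above. Requires mpi4py; run with e.g. mpiexec -n 4.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        # Placeholder local bathymetry depths (m) for this rank's subdomain.
        h_local = np.random.uniform(10.0, 4000.0, size=1000)

        g, dx, courant = 9.81, 405.0, 0.8      # coarsest-layer spacing from the text
        c_max = np.sqrt(g * h_local.max())     # fastest shallow-water wave speed
        dt_local = courant * dx / c_max

        # Every rank must advance with the same, globally safe time step.
        dt_global = comm.allreduce(dt_local, op=MPI.MIN)
        if comm.rank == 0:
            print(f"global dt = {dt_global:.3f} s")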

  10. Parametric analysis of hollow conductor parallel and coaxial transmission lines for high frequency space power distribution

    NASA Technical Reports Server (NTRS)

    Jeffries, K. S.; Renz, D. D.

    1984-01-01

    A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high power (100 kW or more) space missions. Large diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 µH and a capacitance of 0.07 µF. The computer programs written for this analysis are listed in the appendix.
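
    For the closely spaced coaxial configuration, the standard per-unit-length expressions give a feel for the numbers quoted above. A sketch using textbook coax formulas (the paper's hollow-conductor formulas also include skin-effect resistance terms not reproduced here; the radii are illustrative):

        # Sketch: per-unit-length inductance and capacitance of an ideal coaxial line,
        # L' = (mu0 / (2*pi)) * ln(b/a),  C' = 2*pi*eps0 / ln(b/a).
        import math

        MU0  = 4.0e-7 * math.pi        # H/m
        EPS0 = 8.8541878128e-12        # F/m

        def coax_per_meter(a, b):
            """Return (L' in H/m, C' in F/m) for inner radius a, outer radius b (m)."""
            L = MU0 / (2.0 * math.pi) * math.log(b / a)
            C = 2.0 * math.pi * EPS0 / math.log(b / a)
            return L, C

        L, C = coax_per_meter(a=2.5e-3, b=3.0e-3)   # closely spaced conductors
        print(f"L' = {L*1e9:.2f} nH/m, C' = {C*1e12:.2f} pF/m; over 50 m: "
              f"{L*50*1e6:.2f} uH, {C*50*1e9:.2f} nF")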

  11. Computing Interactions Of Free-Space Radiation With Matter

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.

    1995-01-01

    The High Charge and Energy Transport (HZETRN) computer program is a computationally efficient, user-friendly software package addressing the problem of transport of, and shielding against, radiation in free space. It is designed as a "black box" for design engineers not concerned with the physics of the underlying atomic and nuclear radiation processes in the free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for the design and construction of modules and devices for use in free space. Computational efficiency is achieved by a unique algorithm based on a deterministic approach to the solution of the Boltzmann equation, rather than the computationally intensive statistical Monte Carlo method. Written in FORTRAN.

  12. Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation

    NASA Technical Reports Server (NTRS)

    Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred

    2008-01-01

    Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.
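
    As a flavor of the EC machinery underlying such a solver, a generic real-coded genetic algorithm sketch (this is not the X-TOOLSS code; the fitness function is a toy stand-in for a low-fidelity design model):

        # Sketch: minimal real-coded genetic algorithm for design-space exploration.
        import random

        def fitness(x):                        # stand-in for a design evaluation
            return -sum((xi - 0.7) ** 2 for xi in x)

        def ga(dim=4, pop_size=30, gens=50, mut=0.1):
            pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]             # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, dim)         # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mut:              # uniform mutation
                        child[random.randrange(dim)] = random.random()
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        print(ga())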

  13. Behavior-Based Fault Monitoring

    DTIC Science & Technology

    1990-12-03

    processor targeted for avionics and space applications. It appears that the signature monitoring technique can be extended to detect computer viruses as... most common approach is structural duplication. Although effective, duplication is too expensive for all but a few applications. Redundancy can also be... "Signature Monitoring and Encryption," Int. Conf. on Dependable Computing for Critical Applications, August 1989. 7. K.D. Wilken and J.P. Shen

  14. Modeling, Analysis, and Optimization Issues for Large Space Structures.

    DTIC Science & Technology

    1983-02-01

    There are numerous opportunities - provided by new advances in computer hardware, firmware, software, CAD/CAM systems, computational algorithms and... Institute Department of Mechanical Engineering Dept. of Civil Engineering & Mechanics Troy, NY 12181 Drexel University Philadelphia, PA 19104 Dr... Mechanical Engineering Hampton, VA 23665 Washington, DC 20059 Dr. K. T. Alfriend Mr. Siva S. Banda Department of the Navy Flight Dynamics Laboratory Naval

  15. Research on Extension of SPARQL Ontology Query Language Considering the Computation of Indoor Spatial Relations

    NASA Astrophysics Data System (ADS)

    Li, C.; Zhu, X.; Guo, W.; Liu, Y.; Huang, H.

    2015-05-01

    A method suitable for complex indoor semantic queries, considering the computation of indoor spatial relations, is provided according to the characteristics of indoor space. This paper designs an ontology model describing the space-related information of humans, events and indoor space objects (e.g. storeys and rooms) as well as their relations, to support indoor semantic queries. The ontology concepts are used in the IndoorSPARQL query language, which extends SPARQL syntax for representing and querying indoor space. Four specific primitives for indoor query, "Adjacent", "Opposite", "Vertical" and "Contain", are defined as query functions in IndoorSPARQL to support quantitative spatial computations. A method is also proposed to analyse the query language. Finally, this paper adopts this method to realize indoor semantic queries over the study area by constructing the ontology model for the study building. The experimental results show that the method proposed in this paper can effectively support complex indoor space semantic queries.
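
    A hedged sketch of what a query using one of the four primitives might look like. The prefix, class, and property names below are guesses for illustration; only the four primitive names (Adjacent, Opposite, Vertical, Contain) come from the abstract:

        # Sketch: an IndoorSPARQL-style query using the "Adjacent" primitive.
        # All identifiers except the primitive name are hypothetical.
        query = """
        PREFIX indoor: <http://example.org/indoor#>
        SELECT ?room
        WHERE {
          ?room   a indoor:Room .
          ?room   indoor:onStorey ?storey .
          ?storey indoor:level 3 .
          FILTER ( indoor:Adjacent(?room, indoor:Room_305) )
        }
        """
        print(query)  # would be submitted to the extended-SPARQL engine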

  16. Pulsed laser photolysis and quantum chemical-statistical rate study of the reaction of the ethynyl radical with water vapor

    NASA Astrophysics Data System (ADS)

    Carl, Shaun A.; Minh Thi Nguyen, Hue; Elsamra, Rehab M. I.; Tho Nguyen, Minh; Peeters, Jozef

    2005-03-01

    The rate coefficient of the gas-phase reaction C2H + H2O → products has been experimentally determined over the temperature range 500-825 K using a pulsed laser photolysis-chemiluminescence (PLP-CL) technique. Ethynyl radicals (C2H) were generated by pulsed 193 nm photolysis of C2H2 in the presence of H2O vapor and buffer gas N2 at 15 Torr. The relative concentration of C2H radicals was monitored as a function of time using a CH* chemiluminescence method. Rate constants were determined at 550, 770, and 825 K; at 550 K, k1 = (2.3 ± 1.3) × 10⁻¹³ cm³ s⁻¹. The error in the only other measurement of this rate constant is also discussed. We have also characterized the reaction theoretically using quantum chemical computations. The relevant portion of the potential energy surface of C2H3O in its doublet electronic ground state has been investigated using density functional theory [B3LYP/6-311++G(3df,2p)] and molecular orbital computations at the unrestricted coupled-cluster level of theory that incorporates all single and double excitations plus perturbative corrections for the triple excitations, along with the 6-311++G(3df,2p) basis set [(U)CCSD(T)/6-311++G(3df,2p)], using UCCSD(T)/6-31G(d,p) optimized geometries. Five isomers, six dissociation products, and sixteen transition structures were characterized. The results confirm that the hydrogen abstraction producing C2H2 + OH is the most facile reaction channel. For this channel, refined computations using (U)CCSD(T)/6-311++G(3df,2p)//(U)CCSD(T)/6-311++G(d,p) and complete-active-space second-order perturbation theory/complete-active-space self-consistent-field theory (CASPT2/CASSCF) [B. O. Roos, Adv. Chem. Phys. 69, 399 (1987)] using the contracted atomic natural orbitals basis set (ANO-L) [J. Almlöf and P. R. Taylor, J. Chem. Phys. 86, 4070 (1987)] were performed, yielding zero-point energy-corrected potential energy barriers of 17 kJ mol⁻¹ and 15 kJ mol⁻¹, respectively. Transition-state theory rate constant calculations, based on the UCCSD(T) and CASPT2/CASSCF computations and including H-atom tunneling and a hindered internal rotation, are in perfect agreement with the experimental values. Considering both our experimental and theoretical determinations, the rate constant can best be expressed in modified Arrhenius form as k1(T) = (2.2 ± 0.1) × 10⁻²¹ T^3.05 exp[-(376 ± 100)/T] cm³ s⁻¹ for the range 300-2000 K. Thus, at temperatures above 1500 K, the reaction of C2H with H2O is predicted to be one of the dominant C2H reactions in hydrocarbon combustion.
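
    The recommended modified-Arrhenius expression can be evaluated directly; a minimal sketch using the central parameter values quoted above (uncertainties dropped):

        # Sketch: modified Arrhenius fit from the abstract,
        # k1(T) = 2.2e-21 * T**3.05 * exp(-376/T) cm^3 s^-1, valid for 300-2000 K.
        import math

        def k1(T):
            return 2.2e-21 * T**3.05 * math.exp(-376.0 / T)

        for T in (300.0, 550.0, 1500.0, 2000.0):
            print(f"k1({T:.0f} K) = {k1(T):.3e} cm^3 s^-1")

    At 550 K this gives about 2.5 × 10⁻¹³ cm³ s⁻¹, consistent with the measured (2.3 ± 1.3) × 10⁻¹³ cm³ s⁻¹.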

  17. Recognizing surgeon's actions during suture operations from video sequences

    NASA Astrophysics Data System (ADS)

    Li, Ye; Ohya, Jun; Chiba, Toshio; Xu, Rong; Yamashita, Hiromasa

    2014-03-01

    Because of the shortage of nurses in the world, the realization of a robotic nurse that can support surgeries autonomously is very important. More specifically, the robotic nurse should be able to autonomously recognize different situations during surgery so that it can pass the necessary surgical tools to the medical doctors in a timely manner. This paper proposes and explores methods that can classify suture and tying actions during suture operations from a video sequence that observes the surgery scene, including the surgeon's hands. First, the proposed method uses skin pixel detection and foreground extraction to detect the hand area. Then, interest points are randomly chosen from the hand area and their 3D SIFT descriptors are computed. A word vocabulary is built by applying hierarchical K-means to these descriptors, and the word-frequency histogram, which corresponds to the feature space, is computed. Finally, to classify the actions, either an SVM (Support Vector Machine), the Nearest Neighbor rule (NN) in the feature space, or a method that combines a "sliding window" with NN is applied. We collected 53 suture videos and 53 tying videos to build the training set and to test the proposed method experimentally. It turns out that NN gives accuracies higher than 90%, better recognition performance than SVM. Negative actions, which differ from both suture and tying actions, are recognized with quite good accuracy, while the "sliding window" approach did not show significant improvements for suture and tying and cannot recognize negative actions.
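
    A compact sketch of the bag-of-words pipeline described above (a vocabulary over descriptors, per-video word histograms, nearest-neighbour classification). Flat k-means stands in for the hierarchical variant, and the descriptors and labels are synthetic placeholders rather than 3D SIFT features:

        # Sketch: vocabulary building + histogram features + 1-NN classification.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        n_videos, n_desc, dim, n_words = 20, 200, 64, 32

        # Placeholder descriptor sets, one array of local descriptors per video.
        videos = [rng.normal(loc=i % 2, size=(n_desc, dim)) for i in range(n_videos)]
        labels = np.array([i % 2 for i in range(n_videos)])   # 0 = suture, 1 = tying

        vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0)
        vocab.fit(np.vstack(videos))          # word vocabulary over all descriptors

        def histogram(desc):
            words = vocab.predict(desc)
            h = np.bincount(words, minlength=n_words).astype(float)
            return h / h.sum()                # normalized word-frequency histogram

        X = np.array([histogram(v) for v in videos])
        clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
        print("train accuracy:", clf.score(X, labels))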

  18. IRFK2D: a computer program for simulating intrinsic random functions of order k

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.

    2003-07-01

    IRFK2D is an ANSI Fortran-77 program that generates realizations of an intrinsic random function of order k (with k equal to 0, 1 or 2) with a permissible polynomial generalized covariance model. The realizations may be non-conditional or conditioned to the experimental data. The turning bands method is used to generate realizations in 2D and 3D from simulations of an intrinsic random function of order k along lines that span the 2D or 3D space. The program generates two output files, the first containing the simulated values and the second containing the experimental generalized variogram for different directions together with the theoretical model. The experimental variogram is calculated from the simulated values, while the theoretical variogram is the specified generalized covariance model. The generalized variogram is used to assess the quality of the simulation, as measured by the extent to which the generalized covariance is reproduced by the simulation. The examples given in this paper indicate that IRFK2D is an efficient implementation of the methodology.

  19. Quantum-secret-sharing scheme based on local distinguishability of orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-02-01

    In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k

  20. TFSSRA - THICK FREQUENCY SELECTIVE SURFACE WITH RECTANGULAR APERTURES

    NASA Technical Reports Server (NTRS)

    Chen, J. C.

    1994-01-01

    Thick Frequency Selective Surface with Rectangular Apertures (TFSSRA) was developed to calculate the scattering parameters for a thick frequency selective surface with rectangular apertures on a skew grid at oblique angle of incidence. The method of moments is used to transform the integral equation into a matrix equation suitable for evaluation on a digital computer. TFSSRA predicts the reflection and transmission characteristics of a thick frequency selective surface for both TE and TM orthogonal linearly polarized plane waves. A model of a half-space infinite array is used in the analysis. A complete set of basis functions with unknown coefficients is developed for the waveguide region (waveguide modes) and for the free space region (Floquet modes) in order to represent the electromagnetic fields. To ensure the convergence of the solutions, the number of waveguide modes is adjustable. The method of moments is used to compute the unknown mode coefficients. Then, the scattering matrix of the half-space infinite array is calculated. Next, the reference plane of the scattering matrix is moved half a plate thickness in the negative z-direction, and a frequency selective surface of finite thickness is synthesized by positioning two plates of half-thickness back-to-back. The total scattering matrix is obtained by cascading the scattering matrices of the two half-space infinite arrays. TFSSRA is written in FORTRAN 77 with single precision. It has been successfully implemented on a Sun4 series computer running SunOS, an IBM PC compatible running MS-DOS, and a CRAY series computer running UNICOS, and should run on other systems with slight modifications. Double precision is recommended for running on a PC if many modes are used or if high accuracy is required. This package requires the LINPACK math library, which is included. TFSSRA requires 1Mb of RAM for execution. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992 and is a copyrighted work with all copyright vested in NASA.
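
    The final step, cascading the scattering matrices of the two half-thickness arrays, follows the standard two-port cascade relations. A sketch for scalar (single-mode) S-parameters, which is a simplifying assumption, since TFSSRA cascades full multimode (Floquet/waveguide) matrices:

        # Sketch: cascade of two 2-port scattering matrices A and B (scalar case).
        # The textbook formulas below account for multiple reflections between plates.

        def cascade(A, B):
            """A, B are dicts with complex entries S11, S12, S21, S22."""
            d = 1.0 - A["S22"] * B["S11"]      # multiple-reflection denominator
            return {
                "S11": A["S11"] + A["S12"] * B["S11"] * A["S21"] / d,
                "S12": A["S12"] * B["S12"] / d,
                "S21": A["S21"] * B["S21"] / d,
                "S22": B["S22"] + B["S21"] * A["S22"] * B["S12"] / d,
            }

        # Two identical half-thickness plates back-to-back (illustrative values).
        half = {"S11": 0.3 + 0.1j, "S12": 0.9j, "S21": 0.9j, "S22": 0.3 + 0.1j}
        print(cascade(half, half))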

  1. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Youngsoo; Carlberg, Kevin Thomas

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
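
    A minimal sketch of the first ingredient, building a low-dimensional trial subspace from state-snapshot data. The SVD/POD step below is a spatial-only simplification; the full ST-LSPG method uses tensor decompositions over space and time:

        # Sketch: proper-orthogonal-decomposition basis from snapshot data.
        import numpy as np

        rng = np.random.default_rng(1)
        n_dof, n_snapshots, r = 500, 60, 5          # r = reduced dimension

        S = rng.normal(size=(n_dof, n_snapshots))   # placeholder snapshot matrix
        U, sv, _ = np.linalg.svd(S, full_matrices=False)
        Phi = U[:, :r]                              # trial basis: leading left singular vectors

        x = S[:, 0]
        x_rom = Phi @ (Phi.T @ x)                   # project a state onto the subspace
        print("relative projection error:",
              np.linalg.norm(x - x_rom) / np.linalg.norm(x))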

  2. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C language for the Macintosh operating system, with its high-level user interface. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
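
    As a hedged illustration of that fitting loop (not the PharmK code): a two-exponential pharmacokinetic model fitted with the Levenberg-Marquardt algorithm via SciPy, with synthetic concentration-time data:

        # Sketch: Levenberg-Marquardt fit of C(t) = A*exp(-alpha*t) + B*exp(-beta*t).
        import numpy as np
        from scipy.optimize import curve_fit

        def model(t, A, alpha, B, beta):
            return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

        t = np.linspace(0.25, 12.0, 15)                   # hours
        rng = np.random.default_rng(2)
        conc = model(t, 8.0, 1.2, 2.0, 0.15) + rng.normal(0.0, 0.05, t.size)

        # method='lm' selects Levenberg-Marquardt; p0 mimics an initial estimate
        # such as one obtained by exponential stripping.
        popt, pcov = curve_fit(model, t, conc, p0=(5.0, 1.0, 1.0, 0.1), method="lm")
        print("fitted parameters:", popt)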

  3. Accelerating the reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning using CUDA.

    PubMed

    Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao

    2014-01-01

    An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is using under-sampled k-space data, and the dictionary learning method can be used to maintain the reconstruction quality. A three-dimensional dictionary trains the atoms in the dictionary in the form of blocks, which can utilize the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first utilize the NVIDIA Corporation's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations operate in the dictionary learning algorithm and the image updating part, such as the orthogonal matching pursuit (OMP) algorithm and the k-singular value decomposition (K-SVD) algorithm. Then we develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324× is achieved compared with the CPU-only code when the number of MRI slices is 24.
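
    One of the kernels ported to the GPU, orthogonal matching pursuit, is compact enough to sketch on the CPU (NumPy; a batched GPU version would run many such sparse-coding problems, one per image block, in parallel):

        # Sketch: orthogonal matching pursuit for one signal y over dictionary D
        # (columns assumed unit-norm).
        import numpy as np

        def omp(D, y, k):
            """Return a k-sparse coefficient vector x with y ~= D @ x."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
                support.append(j)
                Ds = D[:, support]
                x_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)  # re-fit on the support
                residual = y - Ds @ x_s
            x = np.zeros(D.shape[1])
            x[support] = x_s
            return x

        rng = np.random.default_rng(3)
        D = rng.normal(size=(64, 256))
        D /= np.linalg.norm(D, axis=0)
        y = D[:, 5] * 2.0 - D[:, 90] * 0.7
        print(np.nonzero(omp(D, y, k=2))[0])   # expect atoms 5 and 90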

  4. Full-potential multiple scattering theory with space-filling cells for bound and continuum states.

    PubMed

    Hatada, Keisuke; Hayakawa, Kuniko; Benfatto, Maurizio; Natoli, Calogero R

    2010-05-12

    We present a rigorous derivation of a real-space full-potential multiple scattering theory (FP-MST) that is free from the drawbacks that up to now have impaired its development (in particular the need to expand cell shape functions in spherical harmonics and rectangular matrices), valid both for continuum and bound states, under conditions for space partitioning that are not excessively restrictive and easily implemented. In this connection we give a new scheme to generate local basis functions for the truncated potential cells that is simple, fast, efficient, valid for any shape of the cell and reduces to the minimum the number of spherical harmonics in the expansion of the scattering wavefunction. The method also avoids the need for saturating 'internal sums' due to the re-expansion of the spherical Hankel functions around another point in space (usually another cell center). Thus this approach provides a straightforward extension of MST in the muffin-tin (MT) approximation, with only one truncation parameter given by the classical relation l_max = kR_b, where k is the electron wavevector (either in the excited or ground state of the system under consideration) and R_b is the radius of the bounding sphere of the scattering cell. Moreover, the scattering path operator of the theory can be found in terms of an absolutely convergent procedure in the l_max → ∞ limit. Consequently, this feature provides a firm ground for the use of FP-MST as a viable method for electronic structure calculations and makes possible the computation of x-ray spectroscopies, notably photo-electron diffraction, absorption and anomalous scattering among others, with the ease and versatility of the corresponding MT theory. Some numerical applications of the theory are presented, both for continuum and bound states.

  5. Harmonic maps of S^2 into a complex Grassmann manifold.

    PubMed

    Chern, S S; Wolfson, J

    1985-04-01

    Let G(k, n) be the Grassmann manifold of all C^k in C^n, the complex spaces of dimensions k and n, respectively, or, what is the same, the manifold of all projective spaces P^(k-1) in P^(n-1), so that G(1, n) is the complex projective space P^(n-1) itself. We study harmonic maps of the two-dimensional sphere S^2 into G(k, n). The case k = 1 has been the subject of investigation by several authors [see, for example, Din, A. M. & Zakrzewski, W. J. (1980) Nucl. Phys. B 174, 397-406; Eells, J. & Wood, J. C. (1983) Adv. Math. 49, 217-263; and Wolfson, J. G. Trans. Am. Math. Soc., in press]. The harmonic maps S^2 → G(2, 4) have been studied by Ramanathan [Ramanathan, J. (1984) J. Differ. Geom. 19, 207-219]. We shall describe all harmonic maps S^2 → G(2, n). The method is based on several geometrical constructions, which lead from a given harmonic map to new harmonic maps, in which the image projective spaces are related by "fundamental collineations." The key result is the degeneracy of some fundamental collineations, which is a global consequence, following from the fact that the domain manifold is S^2. The method extends to G(k, n).

  6. Atmospheric effect in three-space scenario for the Stokes-Helmert method of geoid determination

    NASA Astrophysics Data System (ADS)

    Yang, H.; Tenzer, R.; Vanicek, P.; Santos, M.

    2004-05-01

    According to the Stokes-Helmert method for the geoid determination by Vanicek and Martinec (1994) and Vanicek et al. (1999), the Helmert gravity anomalies are computed at the earth surface. To formulate the fundamental formula of physical geodesy, Helmert's gravity anomalies are then downward continued from the earth surface onto the geoid. This procedure, i.e., the inverse Dirichlet's boundary value problem, is realized by solving the Poisson integral equation. The above mentioned "classical" approach can be modified so that the inverse Dirichlet's boundary value problem is solved in the No Topography (NT) space (Vanicek et al., 2004) instead of in the Helmert (H) space. This technique was introduced by Vanicek et al. (2003) and was used by Tenzer and Vanicek (2003) for the determination of the geoid in the region of the Canadian Rocky Mountains. According to this new approach, the gravity anomalies referred to the earth surface are first transformed into the NT-space. This transformation is realized by subtracting the gravitational attraction of topographical and atmospheric masses from the gravity anomalies at the earth surface. Since the NT-anomalies are harmonic above the geoid, the Dirichlet boundary value problem is solved in the NT-space instead of the Helmert space according to the standard formulation. After being obtained on the geoid, the NT-anomalies are transformed into the H-space to minimize the indirect effect on the geoidal heights. This step, i.e., the transformation from NT-space to H-space, is realized by adding the gravitational attraction of condensed topographical and condensed atmospheric masses to the NT-anomalies at the geoid. The effects of the atmosphere in the standard Stokes-Helmert method were intensively investigated by Sjöberg (1998 and 1999) and Novák (2000). In this presentation, the effect of the atmosphere in the three-space scenario for the Stokes-Helmert method is discussed and the numerical results over Canada are shown. Key words: Atmosphere - Geoid - Gravity

  7. The preconditioned Gauss-Seidel method faster than the SOR method

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Morimoto, Munenori

    2008-09-01

    In recent years, a number of preconditioners have been applied to linear systems [A.D. Gunawardena, S.K. Jain, L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991) 123-143; T. Kohno, H. Kotakemori, H. Niki, M. Usui, Improving modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997) 113-123; H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+Smax), J. Comput. Appl. Math. 145 (2002) 373-378; H. Kotakemori, H. Niki, N. Okamoto, Accelerated iteration method for Z-matrices, J. Comput. Appl. Math. 75 (1996) 87-97; M. Usui, H. Niki, T. Kohno, Adaptive Gauss-Seidel method for linear systems, Internat. J. Comput. Math. 51 (1994) 119-125 [10
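
    A sketch of the idea behind these preconditioners: transform Ax = b into PAx = Pb with, for example, P = I + S built from the negated first superdiagonal of A (the Gunawardena et al. choice), then run ordinary Gauss-Seidel on the preconditioned system. The small test system is illustrative only:

        # Sketch: Gauss-Seidel on a preconditioned system (I + S)Ax = (I + S)b.
        import numpy as np

        def gauss_seidel(A, b, iters=30):
            x = np.zeros_like(b)
            for _ in range(iters):
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
            return x

        A = np.array([[ 4.0, -1.0,  0.0],
                      [-1.0,  4.0, -1.0],
                      [ 0.0, -1.0,  4.0]])
        b = np.array([1.0, 2.0, 3.0])

        S = np.zeros_like(A)
        for i in range(len(b) - 1):
            S[i, i + 1] = -A[i, i + 1]      # P = I + S targets the superdiagonal
        P = np.eye(len(b)) + S

        print("plain GS:         ", gauss_seidel(A, b))
        print("preconditioned GS:", gauss_seidel(P @ A, P @ b))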

  8. Computational methods and software systems for dynamics and control of large space structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.

    1990-01-01

    Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.

  9. Towards a theory of automated elliptic mesh generation

    NASA Technical Reports Server (NTRS)

    Cordova, J. Q.

    1992-01-01

    The theory of elliptic mesh generation is reviewed and the fundamental problem of constructing computational space is discussed. It is argued that the construction of computational space is an NP-Complete problem and therefore requires a nonstandard approach for its solution. This leads to the development of graph-theoretic, combinatorial optimization and integer programming algorithms. Methods for the construction of two dimensional computational space are presented.

  10. Spiral Imaging in fMRI

    PubMed Central

    Glover, Gary H.

    2011-01-01

    T2*-weighted Blood Oxygen Level Dependent (BOLD) functional magnetic resonance imaging (fMRI) requires efficient acquisition methods in order to fully sample the brain in a several second time period. The most widely used approach is Echo Planar Imaging (EPI), which utilizes a Cartesian trajectory to cover k-space. This trajectory is subject to ghosts from off-resonance and gradient imperfections and is intrinsically sensitive to cardiac-induced pulsatile motion from substantial first- and higher order moments of the gradient waveform near the k-space origin. In addition, only the readout direction gradient contributes significant energy to the trajectory. By contrast, the Spiral method samples k-space with an Archimedean or similar trajectory that begins at the k-space center and spirals to the edge (Spiral-out), or its reverse, ending at the origin (Spiral-in). Spiral methods have reduced sensitivity to motion, shorter readout times, improved signal recovery in most frontal and parietal brain regions, and exhibit blurring artifacts instead of ghosts or geometric distortion. Methods combining Spiral-in and Spiral-out trajectories have further advantages in terms of diminished susceptibility-induced signal dropout and increased BOLD signal. In measurements of temporal signal to noise ratio measured in 8 subjects, Spiral-in/out exhibited significant increases over EPI in voxel volumes recovered in frontal and whole brain regions (18% and 10%, respectively). PMID:22036995
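
    A sketch of how an Archimedean spiral-out trajectory can be generated (constant-angular-rate parameterization for simplicity; real implementations use gradient-amplitude- and slew-rate-limited designs):

        # Sketch: single-shot Archimedean spiral-out k-space trajectory,
        # k(theta) = (k_max * theta / theta_max) * exp(i*theta).
        import numpy as np

        n_turns, n_samples, k_max = 32, 4096, 0.5   # k_max in cycles/cm (illustrative)
        theta = np.linspace(0.0, 2.0 * np.pi * n_turns, n_samples)
        k = (k_max * theta / theta[-1]) * np.exp(1j * theta)

        kx, ky = k.real, k.imag                     # starts at the k-space center (spiral-out)
        print(kx[0], ky[0], "->", abs(k[-1]))       # 0, 0 -> k_max at the edge

    Reversing the sample order gives the Spiral-in variant, which ends at the origin.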

  11. Two pass method and radiation interchange processing when applied to thermal-structural analysis of large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.

    1993-01-01

    A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction in the amount of computation, computer resources, and manpower required for the task, while assuring the desired accuracy of the results.

  12. A sub-space greedy search method for efficient Bayesian Network inference.

    PubMed

    Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing

    2011-09-01

    Bayesian network (BN) approaches have been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is its computational cost, because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian network inference. In particular, this method limits the greedy search space by only selecting gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieved comparable results with the standard greedy search method yet saved ∼50% of the computational time. We believe that the sub-space search method can be widely used for efficient BN inference in systems biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
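
    A sketch of the screening step: compute partial correlations for all gene pairs (here read off the inverse covariance matrix, conditioning on all other genes) and keep only high-scoring pairs for the greedy search. The data and threshold are placeholders:

        # Sketch: screen gene pairs by partial correlation before the greedy BN search.
        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(100, 8))               # 100 samples x 8 genes (synthetic)

        P = np.linalg.inv(np.cov(X, rowvar=False))  # precision matrix
        d = np.sqrt(np.diag(P))
        pcorr = -P / np.outer(d, d)                 # partial correlation matrix
        np.fill_diagonal(pcorr, 1.0)

        threshold = 0.2
        pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)
                 if abs(pcorr[i, j]) > threshold]
        print("candidate pairs for greedy search:", pairs)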

  13. Surrogate Structures for Computationally Expensive Optimization Problems With CPU-Time Correlated Functions

    DTIC Science & Technology

    2007-06-01

    xc) − ∇²g(x̃c)](x − xc). The second transformation is a space mapping function P that handles the change in variable dimensions (see Bandler et al. [11... 17(2):188-217, 2004. 11. Bandler, J. W., Q. Cheng, S. Dakroury, A. S. Mohamed, M. H. Bakr, K. Madsen, J. Søndergaard. "Space Mapping: The State of

  14. A clustering-based graph Laplacian framework for value function approximation in reinforcement learning.

    PubMed

    Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold

    2014-12-01

    In order to deal with the sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximation policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions and the learning control performance can be improved for a variety of parameter settings.
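
    A sketch of the basis-construction step: cluster sampled states with k-means, build a graph over the cluster centers, and take the low-order eigenvectors of its Laplacian as basis functions for VFA. The k-NN graph construction is a simplified stand-in for the construction in the paper:

        # Sketch: clustering-based graph Laplacian basis for value-function approximation.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import kneighbors_graph

        rng = np.random.default_rng(5)
        states = rng.uniform(-1.0, 1.0, size=(2000, 2))   # sampled 2-D continuous states

        centers = KMeans(n_clusters=50, n_init=10,
                         random_state=0).fit(states).cluster_centers_

        W = kneighbors_graph(centers, n_neighbors=5, mode="connectivity").toarray()
        W = np.maximum(W, W.T)                            # symmetrize adjacency
        L = np.diag(W.sum(axis=1)) - W                    # combinatorial graph Laplacian

        eigvals, eigvecs = np.linalg.eigh(L)
        basis = eigvecs[:, :10]                           # smoothest 10 eigenvectors as features
        print("basis shape (centers x features):", basis.shape)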

  15. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  16. KSC-99pp1227

    NASA Image and Video Library

    1999-10-06

    Children at Audubon Elementary School, Merritt Island, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Audubon is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  17. Rapid high performance liquid chromatography method development with high prediction accuracy, using 5 cm long narrow-bore columns packed with sub-2 µm particles and Design Space computer modeling.

    PubMed

    Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin

    2009-11-06

    Many different strategies of reversed phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm × 2.1 mm columns packed with sub-2 µm particles and computer simulation (DryLab® package). Data for the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported, and an acceptable accuracy for the predictions of the computer models is presented. This work illustrates a method development strategy focusing on a time reduction of up to a factor of 3-5 compared to conventional HPLC method development, and exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8(R1). Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.

  18. A SLAM II simulation model for analyzing space station mission processing requirements

    NASA Technical Reports Server (NTRS)

    Linton, D. G.

    1985-01-01

    Space station mission processing is modeled via the SLAM II simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K RAM, two double-sided disk drives, and an 8087 coprocessor chip. Using a time-phased mission (payload) schedule and parameters associated with the mission, orbiter (space shuttle), and ground facility databases, estimates of ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.

  19. Space Station UCS antenna pattern computation and measurement. [UHF Communication Subsystem

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Lu, Ba P.; Johnson, Larry A.; Fournet, Jon S.; Panneton, Robert J.; Ngo, John D.; Eggers, Donald S.; Arndt, G. D.

    1993-01-01

    The purpose of this paper is to analyze the interference to the Space Station Ultrahigh Frequency (UHF) Communication Subsystem (UCS) antenna radiation pattern due to its environment, the Space Station. A hybrid Computational Electromagnetics (CEM) technique was applied in this study. The antenna was modeled using the Method of Moments (MOM) and the radiation patterns were computed using the Uniform Geometrical Theory of Diffraction (GTD), in which the effects of the reflected and diffracted fields from surfaces, edges, and vertices of the Space Station structures were included. In order to validate the CEM techniques and to provide confidence in the computer-generated results, a comparison with experimental measurements was made for a 1/15 scale Space Station mockup. Good agreement between experimental and computed results was obtained. The computed results using the CEM techniques for the Space Station UCS antenna pattern predictions have thus been validated.

  20. The matter power spectrum in redshift space using effective field theory

    NASA Astrophysics Data System (ADS)

    Fonseca de la Bella, Lucía; Regan, Donough; Seery, David; Hotchkiss, Shaun

    2017-11-01

    The use of Eulerian 'standard perturbation theory' to describe mass assembly in the early universe has traditionally been limited to modes with k ≲ 0.1 h/Mpc at z=0. At larger k the SPT power spectrum deviates from measurements made using N-body simulations. Recently, there has been progress in extending the reach of perturbation theory to larger k using ideas borrowed from effective field theory. We revisit the computation of the redshift-space matter power spectrum within this framework, including for the first time the full one-loop time dependence. We use a resummation scheme proposed by Vlah et al. to account for damping of baryonic acoustic oscillations due to large-scale random motions and show that this has a significant effect on the multipole power spectra. We renormalize by comparison to a suite of custom N-body simulations matching the MultiDark MDR1 cosmology. At z=0 and for scales k ≲ 0.4 h/Mpc we find that the EFT furnishes a description of the real-space power spectrum up to ~ 2%, for the l = 0 mode up to ~ 5%, and for the l = 2, 4 modes up to ~ 25%. We argue that, in the MDR1 cosmology, positivity of the l = 0 mode gives a firm upper limit of k ≈ 0.74 h/Mpc for the validity of the one-loop EFT prediction in redshift space using only the lowest-order counterterm. We show that replacing the one-loop growth factors by their Einstein-de Sitter counterparts is a good approximation for the l = 0 mode, but can induce deviations as large as 2% for the l = 2, 4 modes. An accompanying software bundle, distributed under open source licenses, includes Mathematica notebooks describing the calculation, together with parallel pipelines capable of computing both the necessary one-loop SPT integrals and the effective field theory counterterms.
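
    For reference, the multipoles quoted above are the Legendre moments of the anisotropic spectrum, P_l(k) = (2l+1)/2 ∫ P(k,µ) L_l(µ) dµ. A sketch of their extraction from a tabulated P(k, µ), with the linear Kaiser form standing in for the one-loop EFT result:

        # Sketch: Legendre multipole extraction from a redshift-space spectrum.
        # The Kaiser form (b + f*mu^2)^2 * P_lin(k) is a placeholder for P(k,mu).
        import numpy as np
        from scipy.special import eval_legendre
        from scipy.integrate import simpson

        k = np.logspace(-2, np.log10(0.4), 50)   # h/Mpc, within the quoted reach
        mu = np.linspace(-1.0, 1.0, 201)
        b, f = 1.0, 0.5
        P_lin = k**-1.5                          # placeholder linear spectrum shape

        P_kmu = (b + f * mu[None, :]**2) ** 2 * P_lin[:, None]

        def multipole(ell):
            integrand = P_kmu * eval_legendre(ell, mu)[None, :]
            return (2 * ell + 1) / 2.0 * simpson(integrand, x=mu, axis=1)

        for ell in (0, 2, 4):
            print(f"l={ell}: P_l at k={k[0]:.3f} -> {multipole(ell)[0]:.3f}")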

  1. Application of CHAD hydrodynamics to shock-wave problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, H.E.; O'Rourke, P.J.; Sahota, M.S.

    1997-12-31

    CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling and passenger compartment heating, ventilation, and air conditioning to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library written in standard C and MPI for unstructured grids to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large Courant number variations due to velocity or mesh size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors discuss CHAD capabilities and show several sample calculations illustrating the strengths and weaknesses of CHAD.

  2. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low Earth orbit objects relies mainly on ground-based radar; owing to the limited capability of existing radar facilities, a large number of ground-based radars need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for embattling optimization of a ground-based radar surveillance network is mainly through detection simulation of all possible stations with cataloged data, followed by a comprehensive comparative analysis of the various simulation results with a combinational method, from which an optimal result is selected as the station layout scheme. This method is time consuming for a single simulation and has high computational complexity for the combinational analysis; as the number of stations increases, the complexity of the optimization problem increases exponentially and cannot be handled with the traditional method. There has been no better way to solve this problem until now. In this paper, the target detection procedure is simplified. Firstly, the space coverage of ground-based radar is simplified and a space-coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space object orbital motion. After these two simplification steps, the computational complexity of target detection is greatly reduced, and simulation results show the correctness of the simplified approach. In addition, the detection areas of a ground-based radar network can be easily computed with the simplified model, and the embattling of the ground-based radar surveillance network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.

  3. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares problem for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs, including approximation in CIE space using the Euclidean and uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
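
    A minimal sketch of the per-pixel optimization, assuming scikit-image for the CIE L*a*b* conversion and placeholder 3x3 matrices in place of the measured display spectra and filter transmission functions:

```python
import numpy as np
from scipy.optimize import least_squares
from skimage.color import rgb2lab

# Placeholder matrices mapping a displayed RGB value to the RGB reaching each
# eye through the glasses; real ones come from measured display spectra and
# filter transmission curves (assumed values, for illustration only).
LEFT_FILTER = np.diag([0.9, 0.1, 0.1])    # red lens (assumed)
RIGHT_FILTER = np.diag([0.1, 0.8, 0.9])   # cyan lens (assumed)

def anaglyph_pixel(rgb_left, rgb_right):
    """Find the single display RGB whose filtered appearance is closest,
    in CIE L*a*b*, to the target left/right stereo-pair colors."""
    lab_l = rgb2lab(np.asarray(rgb_left, float).reshape(1, 1, 3)).ravel()
    lab_r = rgb2lab(np.asarray(rgb_right, float).reshape(1, 1, 3)).ravel()

    def residual(rgb):
        seen_l = rgb2lab((LEFT_FILTER @ rgb).reshape(1, 1, 3)).ravel()
        seen_r = rgb2lab((RIGHT_FILTER @ rgb).reshape(1, 1, 3)).ravel()
        return np.concatenate([seen_l - lab_l, seen_r - lab_r])

    x0 = 0.5 * (np.asarray(rgb_left, float) + np.asarray(rgb_right, float))
    sol = least_squares(residual, x0, bounds=(0.0, 1.0))
    return sol.x
```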

  4. Random ensemble learning for EEG classification.

    PubMed

    Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2018-01-01

    Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, a multilayer perceptron (MLP) neural network, and an extended k-nearest neighbors (k-NN) classifier, called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG dataset from eight patients with temporal and extratemporal epilepsy was processed in a distributed computing framework implemented as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and both false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. The solution has also been applied to cases under investigation with ECoG to demonstrate its utility. Copyright © 2017 Elsevier B.V. All rights reserved.
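
    The random-subspace voting stage can be sketched in a few lines with scikit-learn; the estimator count, subspace fraction, and kernel below are illustrative assumptions rather than the paper's settings, and binary integer labels are assumed for the vote.

```python
import numpy as np
from sklearn.svm import SVC

class RandomSubspaceSVM:
    """Random-subspace ensemble: each SVM sees a random subset of the
    (e.g., I-ICA-derived) features; predictions are combined by majority
    vote. A simplified sketch of the paper's ensemble stage."""
    def __init__(self, n_estimators=25, subspace_frac=0.5, seed=0):
        self.n_estimators = n_estimators
        self.subspace_frac = subspace_frac
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_feat = X.shape[1]
        k = max(1, int(self.subspace_frac * n_feat))
        self.members = []
        for _ in range(self.n_estimators):
            idx = self.rng.choice(n_feat, size=k, replace=False)
            clf = SVC(kernel="rbf").fit(X[:, idx], y)
            self.members.append((idx, clf))
        return self

    def predict(self, X):
        votes = np.stack([clf.predict(X[:, idx]) for idx, clf in self.members])
        # majority vote over ensemble members (non-negative integer labels assumed)
        return np.apply_along_axis(
            lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```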

  5. Decomposed direct matrix inversion for fast non-cartesian SENSE reconstructions.

    PubMed

    Qian, Yongxian; Zhang, Zhenghui; Wang, Yi; Boada, Fernando E

    2006-08-01

    A new k-space direct matrix inversion (DMI) method is proposed here to accelerate non-Cartesian SENSE reconstructions. In this method a global k-space matrix equation is established on basic MRI principles, and the inverse of the global encoding matrix is found from a set of local matrix equations by taking advantage of the small extension of k-space coil maps. The DMI algorithm's efficiency is achieved by reloading the precalculated global inverse when the coil maps and trajectories remain unchanged, such as in dynamic studies. Phantom and human-subject experiments were performed on a 1.5T scanner with a standard four-channel phased-array cardiac coil. Interleaved spiral trajectories were used to collect fully sampled and undersampled 3D raw data. The equivalence of the global k-space matrix equation to its image-space version was verified via conjugate gradient (CG) iterative algorithms on 2x undersampled phantom and numerical-model data sets. When applied to the 2x undersampled phantom and human-subject raw data, the decomposed DMI method produced images with small errors (< or = 3.9%) relative to the reference images obtained from the fully sampled data, at a rate of 2 s per slice (excluding 4 min for precalculating the global inverse at an image size of 256 x 256). The DMI method may be useful for noise evaluations in parallel coil designs, dynamic MRI, and 3D sodium MRI with fixed coils and trajectories. Copyright 2006 Wiley-Liss, Inc.
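
    The image-space counterpart mentioned in the abstract (the CG-verified formulation) can be sketched for the simpler Cartesian case; the paper itself addresses non-Cartesian trajectories and precomputes the global inverse, which this sketch does not attempt.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def cg_sense(kdata, sens, mask, n_iter=30):
    """Solve the SENSE normal equations E^H E x = E^H d with conjugate
    gradients, the image-space counterpart of a global k-space matrix
    equation. kdata: (ncoil, ny, nx) undersampled Cartesian k-space;
    sens: coil sensitivity maps of the same shape; mask: (ny, nx) boolean
    array of sampled locations. A Cartesian sketch only."""
    nc, ny, nx = kdata.shape

    def E(x):    # image -> multicoil k-space (sensitivity, FFT, sampling)
        return mask * np.fft.fft2(sens * x, axes=(-2, -1))

    def EH(y):   # adjoint: multicoil k-space -> image
        return (sens.conj() * np.fft.ifft2(mask * y, axes=(-2, -1))).sum(0)

    rhs = EH(kdata).ravel()
    A = LinearOperator((ny * nx, ny * nx), dtype=complex,
                       matvec=lambda v: EH(E(v.reshape(ny, nx))).ravel())
    x, _ = cg(A, rhs, maxiter=n_iter)
    return x.reshape(ny, nx)
```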

  6. Calbindins decreased after space flight

    NASA Technical Reports Server (NTRS)

    Sergeev, I. N.; Rhoten, W. B.; Carney, M. D.

    1996-01-01

    Exposure of the body to microgravity during space flight causes a series of well-documented changes in Ca2+ metabolism, yet the cellular and molecular mechanisms leading to these changes are poorly understood. Calbindins, vitamin D-dependent Ca2+ binding proteins, are believed to have a significant role in maintaining cellular Ca2+ homeostasis. In this study, we used biochemical and immunocytochemical approaches to analyze the expression of calbindin-D28k and calbindin-D9k in kidneys, small intestine, and pancreas of rats flown for 9 d aboard the space shuttle. The effects of microgravity on calbindins in rats from space were compared with synchronous Animal Enclosure Module controls, modeled weightlessness animals (tail suspension), and their controls. Exposure to microgravity resulted in a significant and sustained decrease in calbindin-D28k content in the kidney and calbindin-D9k in the small intestine of flight animals, as measured by enzyme-linked immunosorbent assay (ELISA). Modeled weightlessness animals exhibited a similar decrease in calbindins by ELISA. Immunocytochemistry (ICC) in combination with quantitative computer image analysis was used to measure in situ the expression of calbindins in the kidney and the small intestine, and the expression of insulin in pancreas. There was a large decrease of immunoreactivity in renal distal tubular cell-associated calbindin-D28k and in intestinal absorptive cell-associated calbindin-D9k of space flight and modeled weightlessness animals compared with matched controls. No consistent difference in pancreatic insulin immunoreactivity between space flight, modeled weightlessness, and controls was observed. Regression analysis of results obtained by quantitative ICC and ELISA for space flight, modeled weightlessness animals, and their controls demonstrated a significant correlation. These findings after a short-term exposure to microgravity or modeled weightlessness suggest that a decreased expression of calbindins may contribute to the disorders of Ca2+ metabolism induced by space flight.

  7. A recursive method for calculating the total number of spanning trees and its applications in self-similar small-world scale-free network models

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Su, Jing; Yao, Bing

    2018-05-01

    The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics, and chemistry. In this paper, motivated by the many real-life systems and artificial networks built from all kinds of functions and combinations of simpler and smaller elements (components), we first discuss some helpful network operations, including link operations and merge operations, for designing more realistic and complicated network models. Second, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to spaces of trees and cycles respectively, and our results suggest that it is indeed well suited to such models. To reflect wider practical applications and potential theoretical significance, we study the enumeration method on some existing scale-free network models. We also set up a class of new models displaying the scale-free feature, that is to say, following P(k) ~ k^(-γ), where γ is the degree exponent; based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. In the remaining discussion, we not only calculate analytically the average path length, which indicates that our models have the small-world property prevalent in many complex systems, but also derive the number of spanning trees by means of the recursive method described in this paper, which shows that our method is convenient for studying these models.
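
    The paper's recursive method is specific to its self-similar constructions, but any such count can be checked against Kirchhoff's matrix-tree theorem, sketched below for small graphs. For large self-similar models the determinant becomes numerically impractical, which is exactly what motivates recursive methods.

```python
import numpy as np

def count_spanning_trees(adj):
    """Number of spanning trees via Kirchhoff's matrix-tree theorem:
    any cofactor of the graph Laplacian L = D - A equals the count.
    A standard baseline for validating recursive enumeration methods."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    minor = L[1:, 1:]                 # delete one row and column
    return int(round(np.linalg.det(minor)))

# Example: the 4-cycle C4 has exactly 4 spanning trees.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
assert count_spanning_trees(C4) == 4
```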

  8. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., 'practice' using a computer keyboard, part of equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  9. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., look with curiosity at the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  10. Audubon Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Audubon Elementary School, Merritt Island, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Audubon is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  11. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., eagerly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  12. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., excitedly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  13. A "Stepping Stone" Approach for Obtaining Quantum Free Energies of Hydration.

    PubMed

    Sampson, Chris; Fox, Thomas; Tautermann, Christofer S; Woods, Christopher; Skylaris, Chris-Kriton

    2015-06-11

    We present a method which uses DFT (quantum, QM) calculations to improve free energies of binding computed with classical force fields (classical, MM). To overcome the incomplete overlap of the configurational spaces of MM and QM, we use a hybrid Monte Carlo approach to quickly generate correct ensembles of structures for intermediate states between an MM and a QM/MM description, hence taking into account a large fraction of the electronic polarization of the quantum system, while being able to use thermodynamic integration to compute the free energy of the transition between MM and QM/MM. Then, we perform a final transition from QM/MM to full QM using a one-step free energy perturbation approach. By using QM/MM as a stepping stone toward the full QM description, we find very small convergence errors (<1 kJ/mol) in the transition to full QM. We apply this method to compute hydration free energies, and we obtain consistent improvements over the MM values for all molecules used in this study. This approach requires large-scale DFT calculations, as the full QM systems involved the ligands and all waters in their simulation cells, so the linear-scaling DFT code ONETEP was used for these calculations.
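
    The final QM/MM-to-QM step is a one-step free energy perturbation, which in its Zwanzig form reduces to an exponential average over the sampled energy gaps. A minimal sketch, assuming energies in kJ/mol:

```python
import numpy as np

K_B = 0.008314462618  # Boltzmann constant in kJ/(mol K)

def one_step_fep(delta_e, temperature=298.15):
    """Zwanzig one-step free energy perturbation,
    dG = -kT * ln < exp(-(E_target - E_ref)/kT) >_ref,
    evaluated with a numerically stable log-sum-exp. delta_e holds
    E_QM - E_QM/MM (kJ/mol) for frames drawn from the QM/MM ensemble;
    a sketch of the final perturbation step, not the authors' code."""
    kt = K_B * temperature
    x = -np.asarray(delta_e, float) / kt
    lse = np.max(x) + np.log(np.mean(np.exp(x - np.max(x))))
    return -kt * lse
```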

  14. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher-order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second-order accurate in time and fourth-order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second-, third-, and fourth-order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second- and third-order terms in the damping. Use of second- and fourth-order terms reduced the magnitude of dispersion errors but not as effectively as the second- and third-order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
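
    For reference, the Gottlieb-Turkel 2-4 MacCormack step for 1D linear advection on a periodic grid can be written compactly; this sketch omits the Jameson-type artificial viscosity studied in the paper.

```python
import numpy as np

def maccormack24_step(u, c, dx, dt):
    """One step of the Gottlieb-Turkel 2-4 MacCormack scheme for the
    linear advection equation u_t + c u_x = 0 on a periodic grid.
    One-sided predictor/corrector differences with 7/6 and -1/6 weights
    cancel the leading truncation error, giving fourth-order spatial
    accuracy; stability requires CFL = c*dt/dx below 2/3."""
    lam = c * dt / dx
    # predictor: one-sided forward differences
    up = u - (lam / 6.0) * (7.0 * (np.roll(u, -1) - u)
                            - (np.roll(u, -2) - np.roll(u, -1)))
    # corrector: one-sided backward differences on the predicted field
    return 0.5 * (u + up - (lam / 6.0) * (7.0 * (up - np.roll(up, 1))
                                          - (np.roll(up, 1) - np.roll(up, 2))))
```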

  15. Efficient computation of k-Nearest Neighbour Graphs for large high-dimensional data sets on GPU clusters.

    PubMed

    Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M

    2013-01-01

    This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large, high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs, and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensionality into the realm of practical possibility.
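
    The underlying brute-force computation is easy to state on a CPU; the blocked structure below is what the paper's multi-level parallelism distributes across nodes, GPUs, and threads. A NumPy sketch:

```python
import numpy as np

def knn_graph(X, k, chunk=1024):
    """Brute-force exact k-NN graph by blocked distance computation.
    Returns (n, k) arrays of neighbor indices and Euclidean distances,
    excluding each point itself. A serial sketch of the arithmetic a
    GPU implementation parallelizes."""
    n = X.shape[0]
    sq = (X ** 2).sum(axis=1)
    idx = np.empty((n, k), dtype=np.int64)
    dst = np.empty((n, k))
    for s in range(0, n, chunk):
        e = min(s + chunk, n)
        # squared distances of the block against all points
        d = sq[s:e, None] + sq[None, :] - 2.0 * X[s:e] @ X.T
        d[np.arange(e - s), np.arange(s, e)] = np.inf  # mask self-matches
        part = np.argpartition(d, k, axis=1)[:, :k]
        order = np.argsort(np.take_along_axis(d, part, 1), axis=1)
        idx[s:e] = np.take_along_axis(part, order, 1)
        dst[s:e] = np.sqrt(np.maximum(np.take_along_axis(d, idx[s:e], 1), 0.0))
    return idx, dst
```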

  16. Mapping the Space of Genomic Signatures

    PubMed Central

    Kari, Lila; Hill, Kathleen A.; Sayem, Abu S.; Karamichalis, Rallis; Bryans, Nathaniel; Davis, Katelyn; Dattani, Nikesh S.

    2015-01-01

    We propose a computational method to measure and visualize interrelationships among any number of DNA sequences allowing, for example, the examination of hundreds or thousands of complete mitochondrial genomes. An "image distance" is computed for each pair of graphical representations of DNA sequences, and the distances are visualized as a Molecular Distance Map: Each point on the map represents a DNA sequence, and the spatial proximity between any two points reflects the degree of structural similarity between the corresponding sequences. The graphical representation of DNA sequences utilized, Chaos Game Representation (CGR), is genome- and species-specific and can thus act as a genomic signature. Consequently, Molecular Distance Maps could inform species identification, taxonomic classifications and, to a certain extent, evolutionary history. The image distance employed, Structural Dissimilarity Index (DSSIM), implicitly compares the occurrences of oligomers of length up to k (herein k = 9) in DNA sequences. We computed DSSIM distances for more than 5 million pairs of complete mitochondrial genomes, and used Multi-Dimensional Scaling (MDS) to obtain Molecular Distance Maps that visually display the sequence relatedness in various subsets, at different taxonomic levels. This general-purpose method does not require DNA sequence alignment and can thus be used to compare similar or vastly different DNA sequences, genomic or computer-generated, of the same or different lengths. We illustrate potential uses of this approach by applying it to several taxonomic subsets: phylum Vertebrata, (super)kingdom Protista, classes Amphibia-Insecta-Mammalia, class Amphibia, and order Primates. This analysis of an extensive dataset confirms that the oligomer composition of full mtDNA sequences can be a source of taxonomic information. This method also correctly finds the mtDNA sequences most closely related to that of the anatomically modern human (the Neanderthal, the Denisovan, and the chimp), and that the sequence most different from it in this dataset belongs to a cucumber. PMID:26000734
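
    The CGR step itself can be sketched directly; corner assignments for the four bases vary between authors, so the mapping below is one common convention, not necessarily the paper's.

```python
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def chaos_game_representation(seq, size=256):
    """Chaos Game Representation of a DNA sequence: starting from the
    center of the unit square, step halfway toward the corner of each
    successive base and accumulate visit counts on a size x size grid.
    The resulting image is the genomic signature compared via DSSIM."""
    img = np.zeros((size, size))
    x, y = 0.5, 0.5
    for base in seq.upper():
        if base not in CORNERS:
            continue                    # skip ambiguous symbols such as N
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        img[min(int(y * size), size - 1), min(int(x * size), size - 1)] += 1
    return img
```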

  17. Computer-Based Radiographic Quantification of Joint Space Narrowing Progression Using Sequential Hand Radiographs: Validation Study in Rheumatoid Arthritis Patients from Multiple Institutions.

    PubMed

    Ichikawa, Shota; Kamishima, Tamotsu; Sutherland, Kenneth; Fukae, Jun; Katayama, Kou; Aoki, Yuko; Okubo, Takanobu; Okino, Taichi; Kaneda, Takahiko; Takagi, Satoshi; Tanimura, Kazuhide

    2017-10-01

    We have developed a refined computer-based method to detect joint space narrowing (JSN) progression with the joint space narrowing progression index (JSNPI) by superimposing sequential hand radiographs. The purpose of this study is to assess the validity of the computer-based method using images obtained from multiple institutions in rheumatoid arthritis (RA) patients. Sequential hand radiographs of 42 patients (37 females and 5 males) with RA from two institutions were analyzed by the computer-based method and by visual scoring systems as a standard of reference. A JSNPI above the smallest detectable difference (SDD) defined JSN progression at the joint level. The sensitivity and specificity of the computer-based method for JSN progression were calculated using the SDD and a receiver operating characteristic (ROC) curve. Out of 314 metacarpophalangeal joints, 34 joints progressed based on the SDD, while 11 joints widened. Twenty-one joints progressed in the computer-based method, 11 joints in the scoring systems, and 13 joints in both methods. Based on the SDD, we found lower sensitivity and higher specificity, at 54.2% and 92.8%, respectively. At the most discriminant cutoff point according to the ROC curve, the sensitivity and specificity were 70.8% and 81.7%, respectively. The proposed computer-based method provides quantitative measurement of JSN progression using sequential hand radiographs and may be a useful tool in follow-up assessment of joint damage in RA patients.

  18. K-space reconstruction with anisotropic kernel support (KARAOKE) for ultrafast partially parallel imaging.

    PubMed

    Miao, Jun; Wong, Wilbur C K; Narayan, Sreenath; Wilson, David L

    2011-11-01

    Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and undersampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (N_C). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI beyond R = N_C. K-space data demonstrate an anisotropic pattern that is correlated with the object itself and with the asymmetry of the coil sensitivity profile, which is caused by coil placement and B_1 inhomogeneity. From spatial analysis theory, reconstruction of such a pattern is best achieved by a signal-dependent, anisotropically shaped kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction: they fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. An MR phantom experiment showed that k-space anisotropy increases as a function of magnetic field strength. The authors tested the K-spAce Reconstruction with AnisOtropic KErnel support ("KARAOKE") algorithm on both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at a high R near or exceeding N_C. KARAOKE performed comparably to GRAPPA at low Rs. As a rule of thumb, KARAOKE reconstruction should always be used for higher-quality k-space reconstruction, particularly when PPI data are acquired at high Rs and/or high field strength.

  19. POCS-enhanced correction of motion artifacts in parallel MRI.

    PubMed

    Samsonov, Alexey A; Velikina, Julia; Jung, Youngkyoo; Kholmovski, Eugene G; Johnson, Chris R; Block, Walter F

    2010-04-01

    A new method is presented for the correction of MRI motion artifacts induced by corrupted k-space data acquired with multiple receiver coils, such as phased arrays. In our approach, a projections onto convex sets (POCS)-based method for reconstruction of sensitivity-encoded MRI data (POCSENSE) is employed to identify corrupted k-space samples. After the erroneous data are discarded from the dataset, artifact-free images are restored from the remaining data using coil sensitivity profiles. The error detection and data restoration are based on the informational redundancy of phased-array data and may be applied to full and reduced datasets. An important advantage of the new POCS-based method is that, in addition to multicoil data redundancy, it can use a priori knowledge of the imaged object for improved MR image artifact correction. The use of such information was shown to significantly improve k-space error detection and image artifact correction. The method was validated on data corrupted by simulated and real motion, such as head motion and pulsatile flow.
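
    The alternating-projection core of a POCS correction can be sketched for the single-coil case, with a known object support standing in for the a priori knowledge; POCSENSE's coil-sensitivity projections and the error-detection step are omitted.

```python
import numpy as np

def pocs_artifact_correction(kspace, good_mask, support, n_iter=50):
    """Toy POCS loop: alternately enforce (1) a priori image knowledge,
    here a known object support, and (2) consistency with the k-space
    samples judged uncorrupted. kspace: 2D array with corrupted samples
    already zeroed; good_mask: True where data are trusted; support:
    True inside the object. Single-coil sketch only."""
    img = np.fft.ifft2(kspace)
    for _ in range(n_iter):
        img = img * support                     # projection 1: support set
        k = np.fft.fft2(img)
        k[good_mask] = kspace[good_mask]        # projection 2: data fidelity
        img = np.fft.ifft2(k)
    return img
```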

  20. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  1. Viewing ISS Data in Real Time via the Internet

    NASA Technical Reports Server (NTRS)

    Myers, Gerry; Chamberlain, Jim

    2004-01-01

    EZStream is a computer program that enables authorized users at diverse terrestrial locations to view, in real time, data generated by scientific payloads aboard the International Space Station (ISS). The only computation/communication resource needed for use of EZStream is a computer equipped with standard Web-browser software and a connection to the Internet. EZStream runs in conjunction with the TReK software, described in a prior NASA Tech Briefs article, that coordinates multiple streams of data for the ground communication system of the ISS. EZStream includes server components that interact with TReK within the ISS ground communication system and client components that reside in the users' remote computers. Once an authorized client has logged in, a server component of EZStream pulls the requested data from a TReK application-program interface and sends the data to the client. Future EZStream enhancements will include (1) extensions that enable the server to receive and process arbitrary data streams on its own and (2) a Web-based graphical-user-interface-building subprogram that enables a client who lacks programming expertise to create customized display Web pages.

  2. Self-calibrated correlation imaging with k-space variant correlation functions.

    PubMed

    Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J

    2018-03-01

    Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimation of correlation functions. The presented work aims to demonstrate that this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher-spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging, which relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging, in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  3. Free electron laser designs for laser amplification

    DOEpatents

    Prosnitz, Donald; Szoke, Abraham

    1985-01-01

    Method for laser beam amplification by means of free electron laser techniques. With wiggler magnetic field strength B_w and wavelength λ_w = 2π/k_w regarded as variable parameters, the method(s) impose conditions such as substantial constancy of B_w/k_w, or k_w, or B_w and k_w (alternating), coupled with a choice of either constant resonant phase angle or programmed phase-space "bucket" area.

  4. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains, such as CCTV and electronic device unlocking. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject-position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
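
    The compression-then-match pipeline can be sketched with PyWavelets; the Haar wavelet and plain nearest-neighbor matching below are simplifying assumptions (the paper builds on PCA-based recognition).

```python
import numpy as np
import pywt

def face_signature(img, levels=3):
    """Keep only the level-`levels` approximation coefficients of the 2D
    wavelet transform: storage shrinks by roughly 2**(2*levels) (64x for
    K = 3) while retaining the coarse structure used for recognition."""
    coeffs = pywt.wavedec2(img, "haar", level=levels)
    return coeffs[0].ravel()          # approximation sub-band only

def recognize(probe_signature, gallery):
    """Nearest-neighbor matching of signatures; `gallery` maps identity ->
    signature. A minimal stand-in for the PCA recognition stage."""
    return min(gallery,
               key=lambda who: np.linalg.norm(gallery[who] - probe_signature))
```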

  5. Structural Analysis of MoS2 and other 2D layered materials using LEEM/LEED-I(V) and STM

    NASA Astrophysics Data System (ADS)

    Grady, Maxwell; Dai, Zhongwei; Jin, Wencan; Dadap, Jerry; Osgood, Richard; Sadowski, Jerzy; Pohl, Karsten

    Layered two-dimensional materials, such as molybdenum disulfide, MoS2, are of interest for the development of many types of novel electronic devices. To fully understand the interfaces between these new materials, the atomic reconstructions at their surfaces must be understood. Low-energy electron microscopy and diffraction (LEEM/μLEED) provide a unique method for rapid material characterization in real space and reciprocal space with high resolution. Here we present a study of the surface structure of 2H-MoS2 using μLEED intensity-voltage analysis. To aid this analysis, software is under development to automate the procedure of extracting I(V) curves from LEEM and LEED data. When matched with computational modeling, these data provide information with angstrom-level resolution concerning the three-dimensional atomic positions. We demonstrate that the surface structure of bulk MoS2 is distinct from the bulk crystal structure and exhibits a smaller surface relaxation at 320 K compared to previous results at 95 K. Furthermore, suspended monolayer samples exhibit large interlayer relaxations compared to the bulk surface termination. Further techniques for refining layer-thickness determination are under development.

  6. On the Use of Enveloping Distribution Sampling (EDS) to Compute Free Enthalpy Differences between Different Conformational States of Molecules: Application to 3₁₀-, α-, and π-Helices.

    PubMed

    Lin, Zhixiong; Liu, Haiyan; Riniker, Sereina; van Gunsteren, Wilfred F

    2011-12-13

    Enveloping distribution sampling (EDS) is a powerful method to compute relative free energies from simulation. So far, the EDS method has only been applied to alchemical free energy differences, i.e., between different Hamiltonians defining different systems, and not yet to obtain free energy differences between different conformations or conformational states of a system. In this article, we extend the EDS formalism such that it can be applied to compute free energy differences of different conformations and apply it to compute the relative free enthalpy ΔG of 3₁₀-, α-, and π-helices of an alanine deca-peptide in explicit water solvent. The resulting ΔG values are compared to those obtained by standard thermodynamic integration (TI) and from so-called end-state simulations. A TI simulation requires the definition of a λ-dependent pathway which in the present case is based on hydrogen bonds of the different helical conformations. The values of ⟨∂V/∂λ⟩_λ show a sharp change for a particular range of λ values, which is indicative of an energy barrier along the pathway, which lowers the accuracy of the resulting ΔG value. In contrast, in a two-state EDS simulation, an unphysical reference-state Hamiltonian which connects the parts of conformational space that are relevant to the different end states is constructed automatically; that is, no pathway needs to be defined. In the simulation using this reference state, both helices were sampled, and many transitions between them occurred, thus ensuring the accuracy of the resulting free enthalpy difference. According to the EDS simulations, the free enthalpy differences of the π-helix and the 3₁₀-helix versus the α-helix are 5 kJ mol⁻¹ and 47 kJ mol⁻¹, respectively, for an alanine deca-peptide in explicit SPC water solvent using the GROMOS 53A6 force field. The EDS method, which is a particular form of umbrella sampling, is thus applicable to compute free energy differences between conformational states as well as between systems and has definite advantages over the traditional TI and umbrella sampling methods to compute relative free energies.
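
    The TI side of the comparison reduces to numerically integrating the per-window averages ⟨∂V/∂λ⟩ over λ. A minimal sketch with hypothetical window values, where a sharp spike between windows (as reported for the hydrogen-bond pathway) directly degrades the quadrature:

```python
import numpy as np

def ti_free_energy(lambdas, dvdl_means):
    """Thermodynamic integration: dG = integral over [0, 1] of
    <dV/dlambda>_lambda, evaluated with the trapezoidal rule from
    per-window ensemble averages."""
    lambdas = np.asarray(lambdas, float)
    dvdl = np.asarray(dvdl_means, float)
    return np.sum(0.5 * (dvdl[1:] + dvdl[:-1]) * np.diff(lambdas))

# Hypothetical window averages (kJ/mol), for illustration only: the spike
# near lambda = 0.5 mimics a barrier along the pathway and inflates the
# quadrature error unless the windows there are refined.
lams = np.linspace(0.0, 1.0, 11)
dvdl = np.array([12.0, 11.5, 10.2, 8.9, 7.1, 40.3, 6.0, 4.2, 3.1, 2.5, 2.0])
dG = ti_free_energy(lams, dvdl)
```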

  7. Failure of Taylor's hypothesis in the atmospheric surface layer and its correction for eddy-covariance measurements

    DOE PAGES

    Cheng, Yu; Sayde, Chadi; Li, Qi; ...

    2017-04-18

    Taylor's frozen turbulence hypothesis suggests that all turbulent eddies are advected by the mean streamwise velocity, without changes in their properties. This hypothesis has been widely invoked to compute Reynolds averaging using temporal turbulence data measured at a single point in space. However, in the atmospheric surface layer, the exact relationship between convection velocity and wavenumber k has not been fully revealed, since previous observations were limited by either their spatial resolution or their sampling length. Using Distributed Temperature Sensing (DTS) to acquire turbulent temperature fluctuations at high temporal and spatial frequencies, we computed convection velocities across wavenumbers using a phase-spectrum method. We found that convection velocity decreases as k^(-1/3) at the higher wavenumbers of the inertial subrange instead of being independent of wavenumber, as Taylor's hypothesis suggests. We further corroborated this result using large-eddy simulations. Applying Taylor's hypothesis thus systematically underestimates the turbulent spectrum in the inertial subrange. As a result, a correction is proposed for point-based eddy-covariance measurements, which can improve surface energy budget closure and estimates of CO2 fluxes.
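
    The phase-spectrum method referred to above can be sketched for two point sensors a known streamwise distance apart; the windowing parameters are assumptions, and the paper's DTS data supply many such sensor pairs along the fiber.

```python
import numpy as np
from scipy.signal import csd

def convection_velocity(x1, x2, separation, fs, nperseg=1024):
    """Estimate the frequency-dependent convection velocity from two
    temperature time series measured a streamwise distance `separation`
    apart: the cross-spectrum phase obeys phi(f) = 2*pi*f*separation/U_c(f).
    A sketch of a phase-spectrum approach, not the authors' exact pipeline."""
    f, Pxy = csd(x1, x2, fs=fs, nperseg=min(len(x1), nperseg))
    phi = np.unwrap(np.angle(Pxy))
    valid = (f > 0) & (np.abs(phi) > 1e-6)     # avoid dividing by ~zero phase
    uc = 2.0 * np.pi * f[valid] * separation / phi[valid]
    return f[valid], uc
```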

  8. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  9. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
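
    The baseline that all of these schemes accelerate is plain power iteration on M·φ = (1/k)·F·φ. A dense-matrix sketch, where a real transport code would replace the linear solve with a transport sweep:

```python
import numpy as np

def power_iteration_k(M, F, tol=1e-8, max_sweeps=500):
    """Unaccelerated power iteration for the k-eigenvalue problem
    M*phi = (1/k)*F*phi. M: discretized loss operator (a dense stand-in
    for a transport sweep); F: fission production operator. Returns the
    dominant eigenvalue k and the flux phi; the JFNK/NKA and HOLO methods
    in the paper exist to cut the number of these sweeps."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_sweeps):
        src = F @ phi / k
        phi_new = np.linalg.solve(M, src)      # one "transport sweep"
        k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
        if abs(k_new - k) < tol:
            return k_new, phi_new
        k, phi = k_new, phi_new / np.linalg.norm(phi_new)
    return k, phi
```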

  10. Complex energies and the polyelectronic Stark problem

    NASA Astrophysics Data System (ADS)

    Themelis, Spyros I.; Nicolaides, Cleanthes A.

    2000-12-01

    The problem of computing the energy shifts and widths of ground or excited N-electron atomic states perturbed by weak or strong static electric fields is dealt with by formulating a state-specific complex eigenvalue Schrödinger equation (CESE), where the complex energy contains the field-induced shift and width. The CESE is solved to all orders nonperturbatively, by using separately optimized N-electron function spaces, composed of real and complex one-electron functions, the latter being functions of a complex coordinate. The use of such spaces is a salient characteristic of the theory, leading to economy and manageability of calculation in terms of a two-step computational procedure. The first step involves only Hermitian matrices. The second adds complex functions and the overall computation becomes non-Hermitian. Aspects of the formalism and of computational strategy are compared with those of the complex absorption potential (CAP) method, which was recently applied for the calculation of field-induced complex energies in H and Li. Also compared are the numerical results of the two methods, and the questions of accuracy and convergence that were posed by Sahoo and Ho (Sahoo S and Ho Y K 2000 J. Phys. B: At. Mol. Opt. Phys. 33 2195) are explored further. We draw attention to the fact that, because in the region where the field strength is weak the tunnelling rate (imaginary part of the complex eigenvalue) diminishes exponentially, it is possible for even large-scale nonperturbative complex eigenvalue calculations either to fail completely or to produce seemingly stable results which, however, are wrong. It is in this context that the discrepancy in the width of Li 1s²2s ²S between results obtained by the CAP method and those obtained by the CESE method is interpreted. We suggest that the very-weak-field regime must be computed by the golden rule, provided the continuum is represented accurately. In this respect, existing one-particle semiclassical formulae seem to be sufficient. In addition to the aforementioned comparisons and conclusions, we present a number of new results from the application of the state-specific CESE theory to the calculation of field-induced shifts and widths of the H n = 3 levels and of the prototypical Be 1s²2s² ¹S state, for a range of field strengths. Using the H n = 3 manifold as the example, it is shown how errors may occur for small values of the field, unless the function spaces are optimized carefully for each level.

  11. Elliptic flow computation by low Reynolds number two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Shih, T.-H.

    1991-01-01

    A detailed comparison of ten low-Reynolds-number k-epsilon models is carried out. The flow solver, based on an implicit approximate factorization method, is designed for incompressible, steady two-dimensional flows. The conservation of mass is enforced by the artificial compressibility approach and the computational domain is discretized using centered finite differences. The turbulence model predictions of the flow past a hill are compared with experiments at Re = 10^6. The effects of the grid spacing together with the numerical efficiency of the various formulations are investigated. The results show that the models provide a satisfactory prediction of the flow field in the presence of a favorable pressure gradient, while the accuracy rapidly deteriorates when a strong adverse pressure gradient is encountered. A newly proposed model form that does not explicitly depend on the wall distance seems promising for application to complex geometries.

  12. Numerical simulation of transient temperature profiles for canned apple puree in semi-rigid aluminum based packaging during pasteurization.

    PubMed

    Shafiekhani, Soraya; Zamindar, Nafiseh; Hojatoleslami, Mohammad; Toghraie, Davood

    2016-06-01

    Pasteurization of canned apple puree was simulated for a 3-D geometry in a semi-rigid aluminum based container which was heated from all sides at 378 K. The computational fluid dynamics code Ansys Fluent 14.0 was used and the governing equations for energy, momentum, and continuity were computed using a finite volume method. The food model was assumed to have temperature-dependent properties. To validate the simulation, the apple puree was pasteurized in a water cascading retort. The effect of the mesh structures was studied for the temperature profiles during thermal processing. The experimental temperature in the slowest heating zone in the container was compared with the temperature predicted by the model and the difference was not significant. The study also investigated the impact of head space (water-vapor) on heat transfer.

  13. On Algorithms for Generating Computationally Simple Piecewise Linear Classifiers

    DTIC Science & Technology

    1989-05-01

    suffers. - Waveform classification, e.g., speech recognition, seismic analysis (i.e., discrimination between earthquakes and nuclear explosions), target... assuming Gaussian distributions (B-G) d) Bayes classifier with probability densities estimated with the k-N-N method (B-kNN) e) The nearest neighbour... range of classifiers are chosen, including a fast, easily computable and often used classifier (B-G), reliable and complex classifiers (B-kNN and NNR

  14. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    NASA Technical Reports Server (NTRS)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data) to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. MicroSoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
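
    The core interpolation step is straightforward to reproduce with SciPy; this sketch assumes positive, strictly increasing time values and uses a cubic spline in log10(t), as the program description indicates.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def log_respace(t, y, n_points=50):
    """Resample variably spaced data onto logarithmically increasing
    abscissa values, in the manner of DATASPACE: interpolate with a cubic
    spline in log10(t) so the output is dense at short times and sparse at
    long times (e.g., for stress-relaxation data E(t)). Assumes t is
    positive, strictly increasing, and free of duplicates."""
    logt = np.log10(t)
    spline = CubicSpline(logt, y)
    logt_new = np.linspace(logt[0], logt[-1], n_points)  # even in log(t)
    return 10.0 ** logt_new, spline(logt_new)
```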

  15. Optical Interconnection Via Computer-Generated Holograms

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Zhou, Shaomin

    1995-01-01

    Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.

  16. HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization

    NASA Astrophysics Data System (ADS)

    LAVIE, Baptiste

    2015-12-01

    Atmospheric retrieval is a growing, new approach in the theory of exoplanet atmosphere characterization. Unlike self-consistent modeling, it allows us to fully explore the parameter space, as well as the degeneracies between parameters, within a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) have started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested-sampling algorithm. By focusing on model selection (Occam's razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test of our code, we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we named the Exoclimes Simulation Platform (www.exoclime.org).

  17. An immersed boundary method for modeling a dirty geometry data

    NASA Astrophysics Data System (ADS)

    Onishi, Keiji; Tsubokura, Makoto

    2017-11-01

    We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Re flow around highly complex geometries. The method disperses momentum through an axial linear projection and adopts an approximate-domain assumption that satisfies mass conservation in the cells containing the wall. This methodology has been verified against analytical theory and wind tunnel experimental data. Next, we simulate flow around a rotating object and demonstrate the applicability of this methodology to moving-geometry problems. This methodology shows promise as a way of obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.

  18. TH-EF-BRA-06: A Novel Retrospective 3D K-Space Sorting 4D-MRI Technique Using a Radial K-Space Acquisition MRI Sequence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Subashi, E; Yin, F

    Purpose: Current retrospective 4D-MRI provides superior tumor-to-tissue contrast and accurate respiratory motion information for radiotherapy motion management. Existing 4D-MRI techniques based on 2D-MRI image sorting require MR sequences with a high frame rate; however, several MRI sequences provide excellent image quality but have a low frame rate. This study aims at developing a novel retrospective 3D k-space sorting 4D-MRI technique using radial k-space acquisition MRI sequences to improve 4D-MRI image quality and temporal resolution for imaging irregular organ/tumor respiratory motion. Methods: The method is based on an RF-spoiled, steady-state, gradient-recalled sequence with minimal echo time. A 3D radial k-space data acquisition trajectory was used for sampling the datasets. Each radial spoke readout data line starts from the 3D center of the field of view. The respiratory signal can be extracted from the k-space center data point of each spoke. The spoke data were sorted based on this self-synchronized respiratory signal using phase sorting. Subsequently, 3D reconstruction was conducted to generate the time-resolved 4D-MRI images. As a feasibility study, this technique was implemented on the digital human phantom XCAT, with respiratory motion controlled by an irregular motion profile. To validate using the k-space center data as a respiratory surrogate, we compared it with the XCAT input breathing profile. Tumor motion trajectories measured on the reconstructed 4D-MRI were compared to the average input trajectory, and the mean absolute amplitude difference (D) was calculated. Results: The signal extracted from the k-space center data matches well with the input respiratory profile of XCAT. The relative amplitude error was 8.6% and the relative phase error was 3.5%. XCAT 4D-MRI demonstrated a clear motion pattern with few serrated artifacts. D of the tumor trajectories was 0.21 mm, 0.23 mm and 0.23 mm in the SI, AP and ML directions, respectively. Conclusion: A novel retrospective 3D k-space sorting 4D-MRI technique has been developed and evaluated on a digital human phantom. NIH (1R21CA165384-01A1)
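
    The self-navigated phase-sorting stage can be sketched as follows; the peak-detection heuristics and bin count are assumptions, and reconstruction of the sorted spokes (regridding) is omitted.

```python
import numpy as np
from scipy.signal import find_peaks

def phase_sort_spokes(k0_amplitude, n_phases=10):
    """Retrospective phase sorting of radial k-space spokes: the magnitude
    of the k-space center sample of each spoke serves as a self-navigated
    respiratory signal; detected peaks define cycle boundaries, and each
    spoke is assigned a respiratory phase bin. A simplified sketch of the
    sorting stage only."""
    sig = (k0_amplitude - k0_amplitude.mean()) / k0_amplitude.std()
    peaks, _ = find_peaks(sig, distance=5)       # end-inhale candidates (heuristic)
    bins = np.full(sig.size, -1)                 # -1 marks unassigned spokes
    for a, b in zip(peaks[:-1], peaks[1:]):      # one respiratory cycle
        frac = (np.arange(a, b) - a) / (b - a)   # 0..1 progress through cycle
        bins[a:b] = np.minimum((frac * n_phases).astype(int), n_phases - 1)
    return bins                                  # phase index per spoke
```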

  19. Studies of the resonance structure in D0→KS0K±π∓ decays

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Adeva, B.; Adinolfi, M.; Affolder, A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Anderson, J.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Aquines Gutierrez, O.; Archilli, F.; d'Argent, P.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Bel, L. J.; Bellee, V.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bertolin, A.; Bettler, M.-O.; van Beuzekom, M.; Bien, A.; Bifani, S.; Bird, T.; Birnkraut, A.; Bizzeti, A.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borghi, S.; Borsato, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Brett, D.; Britsch, M.; Britton, T.; Brodzicka, J.; Brook, N. H.; Buchanan, E.; Bursche, A.; Buytaert, J.; Cadeddu, S.; Calabrese, R.; Calvi, M.; Calvo Gomez, M.; Campana, P.; Campora Perez, D.; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Castillo Garcia, L.; Cattaneo, M.; Cauet, Ch.; Cavallero, G.; Cenci, R.; Charles, M.; Charpentier, Ph.; Chefdeville, M.; Chen, S.; Cheung, S.-F.; Chiapolini, N.; Chrzaszcz, M.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collazuol, G.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Corvo, M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; Dall'Occo, E.; Dalseno, J.; David, P. N. Y.; Davis, A.; De Aguiar Francisco, O.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Simone, P.; Dean, C.-T.; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Déléage, N.; Demmer, M.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Ruscio, F.; Dijkstra, H.; Donleavy, S.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Dossett, D.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Dupertuis, F.; Durante, P.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Färber, C.; Farinelli, C.; Farley, N.; Farry, S.; Fay, R.; Ferguson, D.; Fernandez Albor, V.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fohl, K.; Fol, P.; Fontana, M.; Fontanelli, F.; Forty, R.; Frank, M.; Frei, C.; Frosini, M.; Fu, J.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; García Pardiñas, J.; Garra Tico, J.; Garrido, L.; Gascon, D.; Gaspar, C.; Gauld, R.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianı, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gligorov, V. V.; Göbel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gotti, C.; Grabalosa Gándara, M.; Graciani Diaz, R.; Granado Cardoso, L. 
A.; Graugés, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greening, E.; Gregson, S.; Griffith, P.; Grillo, L.; Grünberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hall, S.; Hamilton, B.; Han, X.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; He, J.; Head, T.; Heijne, V.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hoballah, M.; Hombach, C.; Hulsbergen, W.; Humair, T.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jalocha, J.; Jans, E.; Jawahery, A.; Jing, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Kanso, W.; Karacson, M.; Karbach, T. M.; Karodia, S.; Kecke, M.; Kelsey, M.; Kenyon, I. R.; Kenzie, M.; Ketel, T.; Khanji, B.; Khurewathanakul, C.; Klaver, S.; Klimaszewski, K.; Kochebina, O.; Kolpin, M.; Komarov, I.; Koopman, R. F.; Koppenburg, P.; Kozeiha, M.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krocker, G.; Krokovny, P.; Kruse, F.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lanfranchi, G.; Langenbruch, C.; Langhans, B.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J.-P.; Lefèvre, R.; Leflat, A.; Lefrançois, J.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, Y.; Likhomanenko, T.; Liles, M.; Lindner, R.; Linn, C.; Lionetto, F.; Liu, B.; Liu, X.; Loh, D.; Lohn, S.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusiani, A.; Machefert, F.; Maciuc, F.; Maev, O.; Maguire, K.; Malde, S.; Malinin, A.; Manca, G.; Mancinelli, G.; Manning, P.; Mapelli, A.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marino, P.; Märki, R.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurin, B.; Mazurov, A.; McCann, M.; McCarthy, J.; McNab, A.; McNulty, R.; Meadows, B.; Meier, F.; Meissner, M.; Melnychuk, D.; Merk, M.; Michielin, E.; Milanes, D. A.; Minard, M.-N.; Mitzel, D. S.; Molina Rodriguez, J.; Monroy, I. A.; Monteil, S.; Morandin, M.; Morawski, P.; Mordà, A.; Morello, M. J.; Moron, J.; Morris, A. B.; Mountain, R.; Muheim, F.; Müller, D.; Müller, J.; Müller, K.; Müller, V.; Mussini, M.; Muster, B.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, A. D.; Nguyen, T. D.; Nguyen-Mau, C.; Niess, V.; Niet, R.; Nikitin, N.; Nikodem, T.; Novoselov, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Okhrimenko, O.; Oldeman, R.; Onderwater, C. J. G.; Osorio Rodrigues, B.; Otalora Goicochea, J. M.; Otto, A.; Owen, P.; Oyanguren, A.; Palano, A.; Palombo, F.; Palutan, M.; Panman, J.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. L.; Pappenheimer, C.; Parkes, C.; Passaleva, G.; Patel, G. D.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pilař, T.; Pinci, D.; Pistone, A.; Piucci, A.; Playfer, S.; Plo Casasus, M.; Poikela, T.; Polci, F.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Popov, A.; Popov, D.; Popovici, B.; Potterat, C.; Price, E.; Price, J. 
D.; Prisciandaro, J.; Pritchard, A.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Punzi, G.; Qian, W.; Quagliani, R.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Rangel, M. S.; Raniuk, I.; Rauschmayr, N.; Raven, G.; Redi, F.; Reichert, S.; Reid, M. M.; dos Reis, A. C.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Robbe, P.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. A.; Rodriguez Perez, P.; Roiser, S.; Romanovsky, V.; Romero Vidal, A.; Ronayne, J. W.; Rotondo, M.; Rouvinet, J.; Ruf, T.; Ruiz, H.; Ruiz Valls, P.; Saborido Silva, J. J.; Sagidova, N.; Sail, P.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schubiger, M.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Shires, A.; Siddi, B. G.; Silva Coutinho, R.; Silva de Oliveira, L.; Simi, G.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, E.; Smith, E.; Smith, I. T.; Smith, J.; Smith, M.; Snoek, H.; Sokoloff, M. D.; Soler, F. J. P.; Soomro, F.; Souza, D.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Steinkamp, O.; Stenyakin, O.; Sterpka, F.; Stevenson, S.; Stoica, S.; Stone, S.; Storaci, B.; Stracka, S.; Straticiuc, M.; Straumann, U.; Sun, L.; Sutcliffe, W.; Swientek, K.; Swientek, S.; Syropoulos, V.; Szczekowski, M.; Szczypka, P.; Szumlak, T.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Teklishyn, M.; Tellarini, G.; Teubert, F.; Thomas, C.; Thomas, E.; van Tilburg, J.; Tisserand, V.; Tobin, M.; Todd, J.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Topp-Joergensen, S.; Torr, N.; Tournefier, E.; Tourneur, S.; Trabelsi, K.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tuning, N.; Ukleja, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagnoni, V.; Valenti, G.; Vallier, A.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vázquez Sierra, C.; Vecchi, S.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Vesterinen, M.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Vilasis-Cardona, X.; Volkov, V.; Vollhardt, A.; Volyanskyy, D.; Voong, D.; Vorobyev, A.; Vorobyev, V.; Voß, C.; de Vries, J. A.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wandernoth, S.; Wang, J.; Ward, D. R.; Watson, N. K.; Websdale, D.; Weiden, A.; Whitehead, M.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M. P.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wright, S.; Wyllie, K.; Xie, Y.; Xu, Z.; Yang, Z.; Yu, J.; Yuan, X.; Yushchenko, O.; Zangoli, M.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zhokhov, A.; Zhong, L.; Zucchelli, S.; LHCb Collaboration

    2016-03-01

    Amplitude models are applied to studies of resonance structure in D0 → KS0 K-π+ and D0 → KS0 K+π- decays using pp collision data corresponding to an integrated luminosity of 3.0 fb-1 collected by the LHCb experiment. Relative magnitude and phase information is determined, and coherence factors and related observables are computed for both the whole phase space and a restricted region of 100 MeV/c² around the K*(892)± resonance. Two formulations for the Kπ S-wave are used, both of which give a good description of the data. The ratio of branching fractions B(D0 → KS0 K+π-)/B(D0 → KS0 K-π+) is measured to be 0.655 ± 0.004 (stat) ± 0.006 (syst) over the full phase space and 0.370 ± 0.003 (stat) ± 0.012 (syst) in the restricted region. A search for CP violation is performed using the amplitude models, and no significant effect is found. Predictions from SU(3) flavor symmetry for K*(892)K amplitudes of different charges are compared with the amplitude model results.

  20. Potential energy surface and rate coefficients of protonated cyanogen (HNCCN+) induced by collision with helium (He) at low temperature

    NASA Astrophysics Data System (ADS)

    Bop, Cheikh T.; Faye, N. AB; Hammami, K.

    2018-05-01

    Nitriles have been identified in space. Accurately modeling their abundance requires calculations of collisional rate coefficients. These data are obtained by first computing potential energy surfaces (PES) and cross-sections using highly accurate quantum methods. In this paper, we report the first interaction potential of the HNCCN+-He collisional system along with downward rate coefficients among the 11 lowest rotational levels of HNCCN+. The PES was calculated using the explicitly correlated coupled cluster approach with single, double, and non-iterative triple excitations (CCSD(T)-F12) in conjunction with the augmented correlation-consistent polarized valence triple zeta (aug-cc-pVTZ) Gaussian basis set. It presents two local minima of ~283 and ~136 cm-1; the deeper one is located at R = 9 a0 towards the H end (He···HNCCN+). Using the computed PES, we calculated rotational cross-sections of HNCCN+ induced by collision with He for energies ranging up to 500 cm-1 with the exact quantum mechanical close coupling (CC) method. Downward rate coefficients were then obtained by thermally averaging the cross-sections at low temperature (T ≤ 100 K). The analysis of propensity rules showed that odd Δj transitions are favored. The results obtained in this work may be crucially needed to accurately model the abundance of cyanogen and its protonated form in space.

  1. COMPUTATIONAL METHODOLOGIES for REAL-SPACE STRUCTURAL REFINEMENT of LARGE MACROMOLECULAR COMPLEXES

    PubMed Central

    Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus

    2017-01-01

    The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875

  2. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    PubMed

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces.
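
    A minimal sketch of the two-stage idea, assuming NumPy, Euclidean distance, and a 1-NN query (the function names and parameters are illustrative, not the paper's implementation):

      import numpy as np

      def build(train, n_clusters=10, iters=20, seed=0):
          # Buildup stage: plain k-means clustering of the training set.
          rng = np.random.default_rng(seed)
          centers = train[rng.choice(len(train), n_clusters, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(train[:, None] - centers[None], axis=2)
              labels = d.argmin(axis=1)
              for j in range(n_clusters):
                  if np.any(labels == j):
                      centers[j] = train[labels == j].mean(axis=0)
          # Distance of every training point to its own centroid.
          pdist = np.linalg.norm(train - centers[labels], axis=1)
          return centers, labels, pdist

      def query_1nn(q, train, centers, labels, pdist):
          # Search stage: visit clusters from the nearest centroid outward
          # and prune with the triangle inequality
          #   d(q, x) >= | d(q, c) - d(x, c) |.
          cdist = np.linalg.norm(centers - q, axis=1)
          best, best_d = -1, np.inf
          for j in np.argsort(cdist):
              for i in np.where(labels == j)[0]:
                  if abs(cdist[j] - pdist[i]) >= best_d:
                      continue          # cannot beat the current best: skip
                  d = np.linalg.norm(train[i] - q)
                  if d < best_d:
                      best, best_d = i, d
          return best, best_d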

  3. KSC-99pp1225

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., excitedly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  4. KSC-99pp1224

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., eagerly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  5. KSC-99pp1222

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., look with curiosity at the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  6. KSC-99pp1223

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., "practice" using a computer keyboard, part of equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  7. A New Soft Computing Method for K-Harmonic Means Clustering.

    PubMed

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a clustering method that groups data such that the sum over all entities of the harmonic average of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. The computational results support the superiority of the proposed iSSO-KHM over previously developed algorithms across all experiments in the literature.
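
    For reference, a minimal sketch of the objective that KHM (and hence iSSO-KHM) minimizes, assuming NumPy and squared Euclidean distances (p = 2); the iSSO and VNS search layers described above are the paper's contribution and are omitted here:

      import numpy as np

      def khm_objective(X, centers, p=2.0, eps=1e-12):
          # Sum over points of the harmonic average of distances to all
          # K centroids: KHM(X, C) = sum_i K / sum_k d(x_i, c_k)^(-p).
          d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
          return float(np.sum(len(centers) / np.sum(d ** (-p), axis=1)))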

  8. Reviews.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1988

    1988-01-01

    Reviews two computer programs: "Molecular Graphics," which allows molecule manipulation in three-dimensional space (requiring IBM PC with 512K, EGA monitor, and math coprocessor); and "Periodic Law," a database which contains up to 20 items of information on each of the first 103 elements (Apple II or IBM PC). (MVL)

  9. Exact milestoning

    PubMed Central

    Bello-Rivas, Juan M.; Elber, Ron

    2015-01-01

    A new theory and an exact computer algorithm for calculating kinetics and thermodynamic properties of a particle system are described. The algorithm avoids trapping in metastable states, which are typical challenges for Molecular Dynamics (MD) simulations on rough energy landscapes. It is based on the division of the full space into Voronoi cells. Prior knowledge or coarse sampling of space points provides the centers of the Voronoi cells. Short time trajectories are computed between the boundaries of the cells that we call milestones and are used to determine fluxes at the milestones. The flux function, an essential component of the new theory, provides a complete description of the statistical mechanics of the system at the resolution of the milestones. We illustrate the accuracy and efficiency of the exact Milestoning approach by comparing numerical results obtained on a model system using exact Milestoning with the results of long trajectories and with a solution of the corresponding Fokker-Planck equation. The theory uses an equation that resembles the approximate Milestoning method that was introduced in 2004 [A. K. Faradjian and R. Elber, J. Chem. Phys. 120(23), 10880-10889 (2004)]. However, the current formulation is exact and is still significantly more efficient than straightforward MD simulations on the system studied. PMID:25747056
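
    As a hedged sketch of the geometric bookkeeping only (NumPy; the short-trajectory generation and the flux statistics themselves are outside its scope), one can assign trajectory frames to Voronoi cells and record milestone crossings like this:

      import numpy as np

      def milestone_crossings(traj, anchors):
          # Assign each frame (shape: frames x dims) to its nearest anchor,
          # i.e. its Voronoi cell, and record every cell-to-cell transition:
          # a crossing of the boundary ("milestone") shared by two cells.
          cells = np.linalg.norm(
              traj[:, None, :] - anchors[None, :, :], axis=2).argmin(axis=1)
          return [(cells[t], cells[t + 1])
                  for t in range(len(cells) - 1) if cells[t] != cells[t + 1]]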

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bello-Rivas, Juan M.; Elber, Ron; Department of Chemistry, University of Texas at Austin, Austin, Texas 78712

    A new theory and an exact computer algorithm for calculating kinetics and thermodynamic properties of a particle system are described. The algorithm avoids trapping in metastable states, which are typical challenges for Molecular Dynamics (MD) simulations on rough energy landscapes. It is based on the division of the full space into Voronoi cells. Prior knowledge or coarse sampling of space points provides the centers of the Voronoi cells. Short time trajectories are computed between the boundaries of the cells that we call milestones and are used to determine fluxes at the milestones. The flux function, an essential component of the new theory, provides a complete description of the statistical mechanics of the system at the resolution of the milestones. We illustrate the accuracy and efficiency of the exact Milestoning approach by comparing numerical results obtained on a model system using exact Milestoning with the results of long trajectories and with a solution of the corresponding Fokker-Planck equation. The theory uses an equation that resembles the approximate Milestoning method that was introduced in 2004 [A. K. Faradjian and R. Elber, J. Chem. Phys. 120(23), 10880-10889 (2004)]. However, the current formulation is exact and is still significantly more efficient than straightforward MD simulations on the system studied.

  11. Identification of Predictive Cis-Regulatory Elements Using a Discriminative Objective Function and a Dynamic Search Space

    PubMed Central

    Karnik, Rahul; Beer, Michael A.

    2015-01-01

    The generation of genomic binding or accessibility data from massively parallel sequencing technologies such as ChIP-seq and DNase-seq continues to accelerate. Yet state-of-the-art computational approaches for the identification of DNA binding motifs often yield motifs of weak predictive power. Here we present a novel computational algorithm called MotifSpec, designed to find predictive motifs, in contrast to over-represented sequence elements. The key distinguishing feature of this algorithm is that it uses a dynamic search space and a learned threshold to find discriminative motifs in combination with the modeling of motifs using a full PWM (position weight matrix) rather than k-mer words or regular expressions. We demonstrate that our approach finds motifs corresponding to known binding specificities in several mammalian ChIP-seq datasets, and that our PWMs classify the ChIP-seq signals with accuracy comparable to, or marginally better than, motifs from the best existing algorithms. In other datasets, our algorithm identifies novel motifs where other methods fail. Finally, we apply this algorithm to detect motifs from expression datasets in C. elegans using a dynamic expression similarity metric rather than fixed expression clusters, and find novel predictive motifs. PMID:26465884

  12. Identification of Predictive Cis-Regulatory Elements Using a Discriminative Objective Function and a Dynamic Search Space.

    PubMed

    Karnik, Rahul; Beer, Michael A

    2015-01-01

    The generation of genomic binding or accessibility data from massively parallel sequencing technologies such as ChIP-seq and DNase-seq continues to accelerate. Yet state-of-the-art computational approaches for the identification of DNA binding motifs often yield motifs of weak predictive power. Here we present a novel computational algorithm called MotifSpec, designed to find predictive motifs, in contrast to over-represented sequence elements. The key distinguishing feature of this algorithm is that it uses a dynamic search space and a learned threshold to find discriminative motifs in combination with the modeling of motifs using a full PWM (position weight matrix) rather than k-mer words or regular expressions. We demonstrate that our approach finds motifs corresponding to known binding specificities in several mammalian ChIP-seq datasets, and that our PWMs classify the ChIP-seq signals with accuracy comparable to, or marginally better than, motifs from the best existing algorithms. In other datasets, our algorithm identifies novel motifs where other methods fail. Finally, we apply this algorithm to detect motifs from expression datasets in C. elegans using a dynamic expression similarity metric rather than fixed expression clusters, and find novel predictive motifs.
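
    A minimal sketch of the PWM-scanning step underlying such classifiers, assuming NumPy and a log-odds PWM of shape 4 x width (the learned threshold and the dynamic search space are the paper's contribution and are not reproduced here):

      import numpy as np

      BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

      def pwm_score(seq, pwm):
          # Best log-odds score over all windows; a sequence is called
          # "bound" when this score exceeds a learned threshold.
          w = pwm.shape[1]
          return max(sum(pwm[BASES[b], j] for j, b in enumerate(seq[i:i + w]))
                     for i in range(len(seq) - w + 1))

      # Toy 3-bp motif against a uniform background:
      pwm = np.log2(np.array([[0.80, 0.10, 0.70],
                              [0.10, 0.10, 0.10],
                              [0.05, 0.70, 0.10],
                              [0.05, 0.10, 0.10]]) / 0.25)
      print(pwm_score("TTAGATT", pwm))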

  13. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by a DTB is smaller and more comprehensible than the hypothesis computed by AdaBoost and Naïve Bayes.

  14. 2D Mesh Manipulation

    DTIC Science & Technology

    2011-11-01

    ... the Poisson form of the equations can also be generated by manipulating the computational space, so forcing functions become superfluous. ... Unstructured methods for region discretization have become common in computational fluid dynamics (CFD) analysis because of certain benefits ... application of Winslow elliptic smoothing equations to unstructured meshes. It has been shown that it is not necessary for the computational space of ...

  15. Exact Dispersion Study of an Asymmetric Thin Planar Slab Dielectric Waveguide without Computing d²β/dk² Numerically

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Sanjeev Kumar; Palodiya, Vikram

    2017-08-01

    Waveguide dispersion can be tailored, but material dispersion cannot. Hence, the total dispersion can be shifted to any desired band by adjusting the waveguide dispersion. Waveguide dispersion is proportional to d²β/dk², which ordinarily must be computed numerically to an accuracy of about 10^-5 decimal places; this constraint sometimes introduces errors into the calculation of the waveguide dispersion. In this paper, we have derived an analytical expression for d²β/dk², avoiding the purely numerical evaluation. To formulate the problem we use a graphical method. Our study reveals that the waveguide dispersion can be computed accurately enough for various modes by knowing β only.

  16. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the beam-quality correction factors that arise from the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving the resulting system of equations on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors k_{Qmsr,Q}^{fsmf,fref} were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², k_{Qmsr,Q}^{fsmf,fref} was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the k_{Qmsr,Q}^{fsmf,fref} values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the k_{Qmsr,Q}^{fsmf,fref} values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We devised a method for determining k_{Qmsr,Q}^{fsmf,fref} from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in clinical settings.

  17. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    PubMed

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
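
    The equivalence the method rests on is easy to verify numerically; a short sketch assuming NumPy (for z-scored vectors of length n, the squared standardized Euclidean distance equals 2n(1 - r), so both distances rank centroids identically):

      import numpy as np

      rng = np.random.default_rng(1)
      x, y = rng.normal(size=50), rng.normal(size=50)

      def zscore(v):
          return (v - v.mean()) / v.std()

      r = np.corrcoef(x, y)[0, 1]                  # Pearson correlation
      lhs = np.sum((zscore(x) - zscore(y)) ** 2)   # standardized Euclidean^2
      rhs = 2 * len(x) * (1.0 - r)                 # 2n * Pearson distance
      assert np.isclose(lhs, rhs)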

  18. Molecular-dynamics simulations of self-assembled monolayers (SAM) on parallel computers

    NASA Astrophysics Data System (ADS)

    Vemparala, Satyavani

    The purpose of this dissertation is to investigate the properties of self-assembled monolayers, particularly alkanethiols and poly(ethylene glycol)-terminated alkanethiols. These simulations are based on realistic interatomic potentials and require scalable and portable multiresolution algorithms implemented on parallel computers. Large-scale molecular dynamics simulations of self-assembled alkanethiol monolayer systems have been carried out using an all-atom model involving a million atoms to investigate their structural properties as a function of temperature, lattice spacing, and molecular chain length. Results show that the alkanethiol chains tilt from the surface normal by a collective angle of 25° along the next-nearest-neighbor direction at 300 K. At 350 K the system transforms to a disordered phase characterized by a small tilt angle, flexible tilt direction, and random distribution of backbone planes. With increasing lattice spacing, a, the tilt angle increases rapidly from a nearly zero value at a = 4.7 Å to as high as 34° at a = 5.3 Å at 300 K. We also studied the effect of end groups on the tilt structure of SAM films, characterizing the system with respect to temperature, alkane chain length, lattice spacing, and the length of the end group. We found that gauche defects were predominant only in the tails, and that the gauche defects increased with the temperature and the number of EG units. The effect of an electric field on the structure of a poly(ethylene glycol) (PEG)-terminated alkanethiol self-assembled monolayer (SAM) on gold has been studied using a parallel molecular dynamics method. An applied electric field triggers a conformational transition from all-trans to a mostly gauche conformation. The polarity of the electric field has a significant effect on the surface structure of PEG, leading to a profound effect on the hydrophilicity of the surface. An electric field applied anti-parallel to the surface normal causes a reversible transition to an ordered state in which the oxygen atoms are exposed. On the other hand, an electric field applied parallel to the surface normal introduces considerable disorder in the system and the oxygen atoms are buried inside.

  19. On the rate of convergence of the alternating projection method in finite dimensional spaces

    NASA Astrophysics Data System (ADS)

    Galántai, A.

    2005-10-01

    Using the results of Smith, Solmon, and Wagner [K. Smith, D. Solmon, S. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977) 1227-1270] and Nelson and Neumann [S. Nelson, M. Neumann, Generalizations of the projection method with application to SOR theory for Hermitian positive semidefinite linear systems, Numer. Math. 51 (1987) 123-141], we derive new estimates for the speed of the alternating projection method and its relaxed version in finite dimensional spaces. These estimates can be computed in at most O(m³) arithmetic operations, unlike the estimates in the papers mentioned above, which require spectral information. The new and old estimates are equivalent in many practical cases. In cases when the new estimates are weaker, numerical testing indicates that they approximate the original bounds quite well.

  20. Computer acquired performance data from an etched-rhenium, molybdenum planar diode

    NASA Technical Reports Server (NTRS)

    Manista, E. J.

    1972-01-01

    Performance data from an etched-rhenium, molybdenum thermionic converter are presented. The planar converter has a guard-ringed collector and a fixed spacing of 0.254 mm (10 mils). The data were acquired by using a computer and are available on microfiche as individual or composite parametric current-voltage curves. The parameters are the temperatures of the emitter (T_E), collector (T_C), and cesium reservoir (T_R). The composite plots have constant T_E and varying T_C or T_R, or both. The envelope and composite plots having constant I_E are presented. The diode was tested at increments between 1500 and 2000 K for the emitter, 750 and 1100 K for the collector, and 540 and 640 K for the reservoir. In all, 774 individual current-voltage curves were obtained.

  1. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…

  2. Navigator Accuracy Requirements for Prospective Motion Correction

    PubMed Central

    Maclaren, Julian; Speck, Oliver; Stucht, Daniel; Schulze, Peter; Hennig, Jürgen; Zaitsev, Maxim

    2010-01-01

    Prospective motion correction in MR imaging is becoming increasingly popular to prevent the image artefacts that result from subject motion. Navigator information is used to update the position of the imaging volume before every spin excitation so that lines of acquired k-space data are consistent. Errors in the navigator information, however, result in residual errors in each k-space line. This paper presents an analysis linking noise in the tracking system to the power of the resulting image artefacts. An expression is formulated for the required navigator accuracy based on the properties of the imaged object and the desired resolution. Analytical results are compared with computer simulations and experimental data. PMID:19918892
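
    The paper's result is analytical, but its premise is easy to reproduce numerically. A toy sketch assuming NumPy (a simple 2D phantom, one excitation per ky line, and Gaussian tracking noise acting as a residual in-plane shift; all parameters are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      N = 128
      obj = np.zeros((N, N))
      obj[40:90, 30:100] = 1.0                  # rectangular phantom

      k = np.fft.fft2(obj)
      kx = np.fft.fftfreq(N)                    # cycles per pixel
      sigma = 0.2                               # tracking noise, pixels
      for line in range(N):                     # one shift error per ky line
          dx = rng.normal(0.0, sigma)
          k[line, :] *= np.exp(-2j * np.pi * kx * dx)

      corrupted = np.fft.ifft2(k)
      power = np.mean(np.abs(corrupted - obj) ** 2) / np.mean(obj ** 2)
      print(f"relative artefact power: {power:.3e}")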

  3. RAXBOD- INVISCID TRANSONIC FLOW OVER AXISYMMETRIC BODIES

    NASA Technical Reports Server (NTRS)

    Keller, J. D.

    1994-01-01

    The problem of axisymmetric transonic flow is of interest not only because of the practical application to missile and launch vehicle aerodynamics, but also because of its relation to fully three-dimensional flow in terms of the area rule. The RAXBOD computer program was developed for the analysis of steady, inviscid, irrotational, transonic flow over axisymmetric bodies in free air. RAXBOD uses a finite-difference relaxation method to numerically solve the exact formulation of the disturbance velocity potential with exact surface boundary conditions. Agreement with available experimental results has been good in cases where viscous effects and wind-tunnel wall interference are not important. The governing second-order partial differential equation describing the flow potential is replaced by a system of finite difference equations, including Jameson's "rotated" difference scheme at supersonic points. A stretching is applied to both the normal and tangential coordinates such that the infinite physical space is mapped onto a finite computational space. The boundary condition at infinity can be applied directly and there is no need for an asymptotic far-field solution. The system of finite difference equations is solved by a column relaxation method. In order to obtain both rapid convergence and any desired resolution, the relaxation is performed iteratively on successively refined grids. Input to RAXBOD consists of a description of the body geometry, the free stream conditions, and the desired resolution control parameters. Output from RAXBOD includes computed geometric parameters in the normal and tangential directions, iteration history information, drag coefficients, flow field data in the computational plane, and coordinates of the sonic line. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6600 computer with an overlayed central memory requirement of approximately 40K (octal) of 60 bit words. Optional plotted output can be generated for the Calcomp plotting system. The RAXBOD program was developed in 1976.
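
    One common algebraic stretching of the kind described (not necessarily the exact transformation RAXBOD uses; the scale constant L is a free parameter) maps the semi-infinite physical coordinate onto a finite computational interval so the far-field condition can be imposed directly:

      import numpy as np

      L = 1.0                                   # assumed scale constant

      def to_computational(r):                  # r in [0, inf) -> s in [0, 1)
          return r / (r + L)

      def to_physical(s):                       # inverse mapping
          return L * s / (1.0 - s)

      s = np.linspace(0.0, 0.99, 100)           # uniform computational grid
      r = to_physical(s)                        # stretched physical grid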

  4. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
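
    A minimal sketch of the underlying range-search idea, assuming NumPy (the published technique indexes against a slightly sloped line and handles ties more carefully; the names here are illustrative):

      import numpy as np

      def build_k_vector(data, m=None):
          # Sort once; for m equally spaced levels spanning the data, record
          # where each level falls in the sorted array (the "k-vector").
          s = np.sort(np.asarray(data, dtype=float))
          m = m or len(s)
          levels = np.linspace(s[0], s[-1], m)
          kl = np.searchsorted(s, levels, side="left")
          kr = np.searchsorted(s, levels, side="right")
          return s, levels, kl, kr

      def range_search(s, levels, kl, kr, lo, hi):
          # O(1) index arithmetic replaces a binary search; only the two
          # boundary bins need trimming, independent of the database size.
          step = levels[1] - levels[0]
          i = int(np.clip(np.floor((lo - levels[0]) / step), 0, len(levels) - 1))
          j = int(np.clip(np.ceil((hi - levels[0]) / step), 0, len(levels) - 1))
          cand = s[kl[i]:kr[j]]
          return cand[(cand >= lo) & (cand <= hi)]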

  5. Structural Analysis Methods for Structural Health Management of Future Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander

    2007-01-01

    Two finite element based computational methods, Smoothing Element Analysis (SEA) and the inverse Finite Element Method (iFEM), are reviewed, and examples of their use for structural health monitoring are discussed. Due to their versatility, robustness, and computational efficiency, the methods are well suited for real-time structural health monitoring of future space vehicles, large space structures, and habitats. The methods may be effectively employed to enable real-time processing of sensing information, specifically for identifying three-dimensional deformed structural shapes as well as the internal loads. In addition, they may be used in conjunction with evolutionary algorithms to design optimally distributed sensors. These computational tools have demonstrated substantial promise for utilization in future Structural Health Management (SHM) systems.

  6. Numerical-experimental analysis of a carbon-phenolic composite via plasma jet ablation test

    NASA Astrophysics Data System (ADS)

    Guilherme Silva Pesci, Pedro; Araújo Machado, Humberto; Silva, Homero de Paula e.; Cley Paterniani Rita, Cristian; Petraconi Filho, Gilberto; Cocchieri Botelho, Edson

    2018-06-01

    Materials used in space vehicle components are subjected to thermally aggressive environments when exposed to atmospheric reentry. In order to protect the payload and the vehicle itself, ablative composites are employed as the TPS (Thermal Protection System). The development of TPS materials generally goes through phases of material production, atmospheric reentry testing, and comparison with a mathematical model. The state of the art presents some reentry tests in subsonic or supersonic arc-jet facilities and complex mathematical models that normally require a large computational cost. This work presents a reliable method for estimating the performance of ablative composites, combining empirical and experimental data. Composite materials used in thermal protection systems were tested through exposure to a plasma jet, with heat fluxes emulating those present during atmospheric reentry of space vehicle components. The carbon/phenolic samples were tested in the hypersonic plasma tunnel of the Plasma and Processes Laboratory of the Aeronautics Institute of Technology (ITA), using a plasma torch with a 50 kW DC power source. The plasma tunnel parameters were optimized to reproduce conditions close to the critical reentry point of the space vehicle payloads developed by the Aeronautics and Space Institute (IAE). The specimens under study were developed and manufactured in Brazil. Mass loss and specific mass loss rates of the samples, and the back surface temperatures as a function of the exposure time to the thermal flow, were determined. A computational simulation based on a two-front ablation model was performed in order to compare the test and simulation results. The results allowed us to estimate the ablative behavior of the tested material and to validate the theoretical model used in the computational simulation for its use in geometries close to the thermal protection systems used in Brazilian space and suborbital vehicles.

  7. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, B; Gao, H

    Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed with improved image quality over the L1-sparsity method.
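
    As a hedged sketch of the low-rank-plus-sparse split at the heart of such models (NumPy; the k-t data-consistency step of the actual reconstruction is omitted, and the parameters are illustrative):

      import numpy as np

      def svt(M, tau):
          # Singular-value thresholding: proximal step for the rank term.
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def soft(M, lam):
          # Soft thresholding: proximal step for the L1 term.
          return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

      def low_rank_plus_sparse(Y, tau=1.0, lam=0.1, iters=50):
          # Split a real-valued dynamic series Y (pixels x frames) into a
          # low-rank background L and a sparse residual S by alternating
          # proximal steps.
          L = np.zeros_like(Y)
          S = np.zeros_like(Y)
          for _ in range(iters):
              L = svt(Y - S, tau)
              S = soft(Y - L, lam)
          return L, S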

  8. Computation of p-units in ray class fields of real quadratic number fields

    NASA Astrophysics Data System (ADS)

    Chapdelaine, Hugo

    2009-12-01

    Let K be a real quadratic field, let p be a prime number which is inert in K, and let K_p be the completion of K at p. As part of a Ph.D. thesis, we constructed a certain p-adic invariant u ∈ K_p^×, and conjectured that u is, in fact, a p-unit in a suitable narrow ray class field of K. In this paper we give numerical evidence in support of that conjecture. Our method of computation is similar to the one developed by Dasgupta and relies on partial modular symbols attached to Eisenstein series.

  9. Ways of achieving continuous service from computers

    NASA Technical Reports Server (NTRS)

    Quinn, M. J., Jr.

    1974-01-01

    This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.

  10. Exhaustive Search for Sparse Variable Selection in Linear Regression

    NASA Astrophysics Data System (ADS)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of the exhaustive ES-K computation, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be reconstructed effectively by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand; however, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by data shortage.
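
    A minimal sketch of the exhaustive ES-K evaluation, assuming NumPy and ordinary least squares (the density-of-states bookkeeping and the replica-exchange machinery of AES-K are omitted):

      import itertools
      import numpy as np

      def es_k(X, y, K):
          # Fit every K-sparse combination of columns by least squares and
          # record its mean residual sum of squares; the histogram of these
          # values is the "density of states" referred to above.
          n, p = X.shape
          out = []
          for combo in itertools.combinations(range(p), K):
              Xs = X[:, list(combo)]
              beta, res, *_ = np.linalg.lstsq(Xs, y, rcond=None)
              rss = res[0] if res.size else np.sum((y - Xs @ beta) ** 2)
              out.append((rss / n, combo))
          return sorted(out)                    # best K-sparse model first

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 10))
      y = X[:, 2] - 0.5 * X[:, 7] + 0.05 * rng.normal(size=60)
      print(es_k(X, y, K=2)[0])                 # recovers columns (2, 7)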

  11. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    PubMed

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multidimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
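
    A hedged sketch of the exchange criterion such a scheme applies between neighboring umbrella windows (NumPy; a 1D order parameter and harmonic biases stand in for the 2D production setup):

      import numpy as np

      rng = np.random.default_rng(0)

      def umbrella(center, k=10.0):
          # Harmonic bias restraining the order parameter near `center`.
          return lambda x: 0.5 * k * (x - center) ** 2

      def try_exchange(x_i, x_j, u_i, u_j, beta=1.0):
          # Metropolis criterion for swapping configurations between two
          # neighboring windows: accept with probability min(1, exp(-beta*D)),
          # D = [u_i(x_j) + u_j(x_i)] - [u_i(x_i) + u_j(x_j)].
          d = (u_i(x_j) + u_j(x_i)) - (u_i(x_i) + u_j(x_j))
          return d <= 0.0 or rng.random() < np.exp(-beta * d)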

  12. The probabilistic convolution tree: efficient exact Bayesian inference for faster LC-MS/MS protein inference.

    PubMed

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log²(k)) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions.
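
    A minimal sketch of the pairwise-combination idea, assuming NumPy (np.convolve is the direct O(nm) convolution; substituting FFT-based convolution yields the stated O(k log²(k)) runtime):

      import numpy as np

      def convolution_tree(pmfs):
          # Combine the k input distributions pairwise, halving the list on
          # each round, so every value takes part in O(log k) convolutions.
          layer = [np.asarray(p, dtype=float) for p in pmfs]
          while len(layer) > 1:
              nxt = [np.convolve(layer[i], layer[i + 1])
                     for i in range(0, len(layer) - 1, 2)]
              if len(layer) % 2:
                  nxt.append(layer[-1])
              layer = nxt
          return layer[0]                       # exact PMF of the sum

      coin = [0.7, 0.3]                         # P(0), P(1) for one trial
      print(convolution_tree([coin] * 8))       # exact PMF of an 8-coin sum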

  13. A novel harmony search-K means hybrid algorithm for clustering gene expression data

    PubMed Central

    Nazeer, KA Abdul; Sebastian, MP; Kumar, SD Madhu

    2013-01-01

    Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. The DNA microarray technology makes it possible to simultaneously analyze large numbers of genes across different samples. Clustering of microarray data can reveal the hidden gene expression patterns from large quantities of expression data that in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications. But the original k-means algorithm has several drawbacks. It is computationally expensive and generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find out near-global optimal solutions by searching the entire solution space. Low clustering accuracy of the existing algorithms limits their use in many crucial applications of life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering the gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms. PMID:23390351

  14. A novel harmony search-K means hybrid algorithm for clustering gene expression data.

    PubMed

    Nazeer, Ka Abdul; Sebastian, Mp; Kumar, Sd Madhu

    2013-01-01

    Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. The DNA microarray technology makes it possible to simultaneously analyze large numbers of genes across different samples. Clustering of microarray data can reveal the hidden gene expression patterns from large quantities of expression data that in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications. But the original k-means algorithm has several drawbacks. It is computationally expensive and generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find out near-global optimal solutions by searching the entire solution space. Low clustering accuracy of the existing algorithms limits their use in many crucial applications of life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering the gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms.

  15. Collective Properties of Neural Systems and Their Relation to Other Physical Models

    DTIC Science & Technology

    1988-08-05

    been computed explicitly. This has been achieved algorithmically by utilizing methods introduced earlier. It should be emphasized that in addition to ... Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606, Japan, and E. BAROUCH, Department of Mathematics and Computer Science, Clarkson ... Mathematics and Computer Science, Clarkson University, where this work was collaborated. References: 1. Babu, S. V. and Barouch, E., An exact solution for the

  16. A finite area scheme for shallow granular flows on three-dimensional surfaces

    NASA Astrophysics Data System (ADS)

    Rauter, Matthias

    2017-04-01

    Shallow granular flow models have become a popular tool for the estimation of natural hazards, such as landslides, debris flows and avalanches. The shallowness of the flow makes it possible to reduce the three-dimensional governing equations to a quasi two-dimensional system. Three-dimensional flow fields are replaced by their depth-integrated two-dimensional counterparts, which yields a robust and fast method [1]. A solution for a simple shallow granular flow model, based on the so-called finite area method [3], is presented. The finite area method is an adaptation of the finite volume method [2] to two-dimensional curved surfaces in three-dimensional space. This method handles the three-dimensional basal topography in a simple way, making the model suitable for arbitrary (but mildly curved) topography, such as natural terrain. Furthermore, the implementation in the open source software OpenFOAM [4] is shown. OpenFOAM is a popular computational fluid dynamics application, designed so that the top-level code mimics the mathematical governing equations. This makes the code easy to read and extendable to more sophisticated models. Finally, some hints on how to get started with the code and how to extend the basic model are given. I gratefully acknowledge the financial support by the OEAW project "beyond dense flow avalanches". [1] Savage, S. B. & Hutter, K. 1989 The motion of a finite mass of granular material down a rough incline. Journal of Fluid Mechanics 199, 177-215. [2] Ferziger, J. & Peric, M. 2002 Computational Methods for Fluid Dynamics, 3rd edn. Springer. [3] Tukovic, Z. & Jasak, H. 2012 A moving mesh finite volume interface tracking method for surface tension dominated interfacial fluid flow. Computers & Fluids 55, 70-84. [4] Weller, H. G., Tabor, G., Jasak, H. & Fureby, C. 1998 A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in Physics 12(6), 620-631.

  17. "A space-time ensemble Kalman filter for state and parameter estimation of groundwater transport models"

    NASA Astrophysics Data System (ADS)

    Briseño, Jessica; Herrera, Graciela S.

    2010-05-01

    Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To get the estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to manually achieve a good match between them. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate the hydraulic conductivity, together with the hydraulic head and contaminant concentration, and to apply it to a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: Gaussian simulation (SGSim) and the Latin hypercube sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant (C) realizations for each one of the conductivity realizations. With these realizations the means of ln K, h and C are obtained; for h and C, the mean is calculated in space and time, as is the cross-covariance matrix of h-ln K-C in space and time. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data for the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered; analogously, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean of each variable is used as the prior space-time estimate for the Kalman filter, and the space-time cross-covariance matrix of h-ln K-C as the prior estimate-error covariance matrix. The synthetic example has a modeling area of 700 x 700 square meters; a triangular mesh model with 702 nodes and 1306 elements is used. A pumping well located in the central part of the study area is considered. For the contaminant transport model, a contaminant source area is present in the western part of the study area. The estimation points for hydraulic conductivity, hydraulic head and contaminant concentration are located on a submesh of the model mesh (the same locations for h, ln K and C), composed of 48 nodes spread throughout the study area with a separation of approximately 90 meters between nodes. The analysis of results was done through the mean error, the root mean square error, initial and final estimation maps of h, ln K and C at each time, and initial and final variance maps of h, ln K and C. To obtain model convergence, 3000 realizations of ln K were required using SGSim, and only 1000 with LHC. The results show that, for both alternatives, the Kalman filter estimates of h, ln K and C using h and C data have errors whose magnitudes decrease as data are added. References: Herrera, G. S. (1998), Cost Effective Groundwater Quality Sampling Network Design, Ph.D. thesis, University of Vermont, Burlington, Vermont, 172 pp. Herrera, G., Guarnaccia, J., Pinder, G. and Simuta, R. (2001), "Diseño de redes de monitoreo de la calidad del agua subterránea eficientes" [Design of efficient groundwater quality monitoring networks], Proceedings of the 2001 International Symposium on Environmental Hydraulics, Arizona, U.S.A. Herrera, G. S. and Pinder, G. F. (2005), Space-time optimization of groundwater quality sampling networks, Water Resour. Res., Vol. 41, No. 12, W12407, 10.1029/2004WR003626.

  18. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

    In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented, which transforms the computation procedure of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and the experimental results on gene expression data demonstrate the effectiveness of the method.
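
    A minimal sketch of the sample-space trick for the PCA case, assuming NumPy and X of shape samples x genes with far more genes than samples (LDA and CPP admit the same reformulation):

      import numpy as np

      def sample_space_pca(X, n_components=2):
          # Eigendecompose the small n x n Gram matrix instead of the huge
          # gene-space covariance; each principal axis is recovered as a
          # weighted sum of the centered samples.
          Xc = X - X.mean(axis=0)
          G = Xc @ Xc.T                              # n x n, cheap
          vals, vecs = np.linalg.eigh(G)
          order = np.argsort(vals)[::-1][:n_components]
          weights = vecs[:, order] / np.sqrt(vals[order])  # sample weights
          axes = Xc.T @ weights                      # unit gene-space axes
          return Xc @ axes                           # projected samples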

  19. The use of wavenumber normalization in computing spatially averaged coherencies (KRSPAC) of microtremor data from asymmetric arrays

    USGS Publications Warehouse

    Asten, M.W.; Stephenson, William J.; Hartzell, Stephen

    2015-01-01

    The SPAC method of processing microtremor noise observations for estimation of Vs profiles has the limitation that the array must have circular or triangular symmetry in order to allow spatial (azimuthal) averaging of inter-station coherencies over a constant station separation. Common processing methods allow station separations to vary by typically ±10% in the azimuthal averaging before degradation of the SPAC spectrum is excessive. A limitation on the use of high wavenumbers in inversions of SPAC spectra for Vs profiles has been the requirement for exact array symmetry to avoid loss of information in the azimuthal averaging step. In this paper we develop a new wavenumber-normalized SPAC method (KRSPAC) where, instead of averaging sets of coherency-versus-frequency spectra and then fitting a model SPAC spectrum, we interpolate each spectrum to coherency versus k·r, where k and r are wavenumber and station separation, respectively, and r may be different for each pair of stations. For fundamental-mode Rayleigh-wave energy the model SPAC spectrum to be fitted reduces to J0(kr). The normalization changes with each iteration, since k is a function of frequency and phase velocity and hence is updated each iteration. The method proves robust and is demonstrated on data acquired in the Santa Clara Valley, CA (site STGA), where an asymmetric array having station separations varying by a factor of 2 is compared with a conventional triangular array; a 300-m-deep borehole with a downhole Vs log provides nearby ground truth. The method is also demonstrated on data from the Pleasanton array, CA, where station spacings are irregular and vary from 400 to 1200 m. The KRSPAC method allows inversion of data using kr (unitless) values routinely up to 30, and occasionally up to 60. Thus, despite the large and irregular station spacings, this array permits resolution of Vs as fine as 15 m for the near-surface sediments, and down to a maximum depth of 2.5 km.
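
    A hedged sketch of the fitting step, assuming NumPy/SciPy and a single phase slowness as the model parameter (the actual inversion targets a layered Vs profile, with kr updated on each iteration):

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.special import j0

      def krspac_residuals(slowness, freqs, coherencies, separations):
          # Model each pair's coherency as J0(k r), k = 2*pi*f*slowness;
          # separations r may differ per pair because the fit is done on
          # the kr axis rather than on azimuthally averaged rings.
          k = 2.0 * np.pi * freqs * slowness
          return np.concatenate([coh - j0(k * r)
                                 for r, coh in zip(separations, coherencies)])

      # freqs: (nf,); coherencies: one (nf,) array per station pair.
      # fit = least_squares(krspac_residuals, x0=[2e-3],
      #                     args=(freqs, coherencies, separations))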

  20. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total computational complexity increases severely with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that of the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  1. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
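
    For orientation, a basic relaxed subgradient projection step — the building block whose relaxation parameter the paper's strategy controls self-adaptively — looks like this (NumPy; generic sketch, not the authors' algorithm):

    ```python
    import numpy as np

    def subgradient_projection_step(x, g, subgrad, lam):
        """One relaxed subgradient projection onto {x : g(x) <= 0}.

        g       : convex function defining one set of the feasibility problem
        subgrad : returns a subgradient of g at x
        lam     : relaxation parameter in (0, 2); the paper's contribution
                  is a self-adapting control of this parameter
        """
        gx = g(x)
        if gx <= 0.0:
            return x                               # already in this set
        s = subgrad(x)
        return x - lam * gx * s / np.dot(s, s)     # relaxed projection

    def cyclic_feasibility(x, sets, lam=1.0, n_iter=200):
        """Cycle through (g, subgrad) pairs, one projection per set."""
        for _ in range(n_iter):
            for g, sg in sets:
                x = subgradient_projection_step(x, g, sg, lam)
        return x
    ```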

  2. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces.

    PubMed

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2008-07-03

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.

  3. A Method for Measuring Collection Expansion Rates and Shelf Space Capacities.

    ERIC Educational Resources Information Center

    Sapp, Gregg; Suttle, George

    1994-01-01

    Describes an effort to quantify annual collection expansion and shelf space capacities with a computer spreadsheet program. Methods used to quantify the space taken at the beginning of the project; to estimate annual rate of collection growth; and to plot stack space and usage, volume equivalents and usage, and growth capacity are covered.…

  4. A unified radiative magnetohydrodynamics code for lightning-like discharge simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qiang, E-mail: cq0405@126.com; Chen, Bin, E-mail: emcchen@163.com; Xiong, Run

    2014-03-15

    A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which enables high-accuracy shock-capturing schemes to be applied to the lightning channel configuration naturally. In this code, the 5th-order weighted essentially non-oscillatory scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. A 3rd-order total variation diminishing Runge-Kutta integrator is also used to maintain consistent time-space accuracy. The numerical algorithms for non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. This code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and the flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. The code is systematically validated against the Sedov blast solutions and then used for lightning return stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA, respectively. The results show that this numerical model is consistent with observations and previous numerical results. The population distribution evolution and energy conservation problems are also discussed.

  5. TU-F-CAMPUS-I-02: Validation of a CT X-Ray Source Characterization Technique for Dose Computation Using An Anthropomorphic Thorax Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommerville, M; Tambasco, M; Poirier, Y

    2015-06-15

    Purpose: To experimentally validate a rotational kV x-ray source characterization technique by computing CT dose in an anthropomorphic thorax phantom using an in-house dose computation algorithm (kVDoseCalc). Methods: The lateral variation in incident energy spectra of a GE Optima big bore CT scanner was found by measuring the HVL along the internal, full bow-tie filter axis. The HVL and kVp were used to generate the x-ray spectra using Spektr software, while beam fluence was derived by dividing the integral product of the spectra and in-air mass-energy absorption coefficients by in-air dose measurements along the bow-tie filter axis. Beams produced by the GE Optima scanner were modeled at 80 and 140 kVp tube settings. kVDoseCalc calculates dose by solving the linear Boltzmann transport equation using a combination of deterministic and stochastic methods. Relative doses in an anthropomorphic thorax phantom (E2E SBRT Phantom) irradiated by the GE Optima scanner were measured using a (0.015 cc) PTW Freiburg ionization chamber and compared to computations from kVDoseCalc. Results: The agreement in relative dose between dose computation and measurement for points of interest (POIs) within the primary path of the beam was within experimental uncertainty for both energies; however, points outside the primary beam were not. The average absolute percent difference for POIs within the primary path of the beam was 1.37% and 5.16% for 80 and 140 kVp, respectively. The minimum and maximum absolute percent differences for both energies and all POIs within the primary path of the beam were 0.151% and 6.41%, respectively. Conclusion: The CT x-ray source characterization technique based on HVL measurements and kVp can be used to accurately compute CT dose in an anthropomorphic thorax phantom.

  6. An Adynamical, Graphical Approach to Quantum Gravity and Unification

    NASA Astrophysics Data System (ADS)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    We use graphical field gradients in an adynamical, background independent fashion to propose a new approach to quantum gravity (QG) and unification. Our proposed reconciliation of general relativity (GR) and quantum field theory (QFT) is based on a modification of their graphical instantiations, i.e. Regge calculus and lattice gauge theory (LGT), respectively, which we assume are fundamental to their continuum counterparts. Accordingly, the fundamental structure is a graphical amalgam of space, time, and sources (in the parlance of QFT) called a "space-time source element". These are fundamental elements of space, time, and sources, not source elements in space and time. The transition amplitude for a space-time source element is computed using a path integral with discrete graphical action. The action for a space-time source element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint (AGC) between sources, the space-time metric, and the energy-momentum content of the element, rather than a dynamical law for time-evolved entities. In this view, one manifestation of quantum gravity becomes evident when, for example, a single space-time source element spans adjoining simplices of the Regge calculus graph. Thus, energy conservation for the space-time source element includes contributions to the deficit angles between simplices. This idea is used to correct proper distance in the Einstein-de Sitter (EdS) cosmology model, yielding a fit of the Union2 Compilation supernova data that matches ΛCDM without having to invoke accelerating expansion or dark energy. A similar modification to LGT results in an adynamical account of quantum interference.

  7. The effects of navigator distortion and noise level on interleaved EPI DWI reconstruction: a comparison between image- and k-space-based method.

    PubMed

    Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua

    2018-03-23

    To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm, under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels than the original IRIS algorithm. SPIRiT may be hindered by low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion or noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.

  8. Stellar model chromospheres. VIII - 70 Ophiuchi A /K0 V/ and Epsilon Eridani /K2 V/

    NASA Technical Reports Server (NTRS)

    Kelch, W. L.

    1978-01-01

    Model atmospheres for the late-type active-chromosphere dwarf stars 70 Oph A and Epsilon Eri are computed from high-resolution Ca II K line profiles as well as Mg II h and k line fluxes. A method is used which determines a plane-parallel homogeneous hydrostatic-equilibrium model of the upper photosphere and chromosphere which differs from theoretical models by lacking the constraint of radiative equilibrium (RE). The determinations of surface gravities, metallicities, and effective temperatures are discussed, and the computational methods, model atoms, atomic data, and observations are described. Temperature distributions for the two stars are plotted and compared with RE models for the adopted effective temperatures and gravities. The previously investigated Tmin/Teff vs. Teff relation is extended to Epsilon Eri and 70 Oph A, observed and computed Ca II K and Mg II h and k integrated emission fluxes are compared, and full tabulations are given for the proposed models. It is suggested that if less than half the observed Mg II flux for the two stars is lost in noise, the difference between an active-chromosphere star and a quiet-chromosphere star lies in the lower-chromospheric temperature gradient.

  9. Overview of NASA Lewis Research Center free-piston Stirling engine activities

    NASA Technical Reports Server (NTRS)

    Slaby, J. G.

    1984-01-01

    A generic free-piston Stirling technology project is being conducted to develop technologies generic to both space power and terrestrial heat pump applications in a cooperative, cost-shared effort. The generic technology effort includes extensive parametric testing of a 1 kW free-piston Stirling engine (RE-1000), development of a free-piston Stirling performance computer code, design and fabrication under contract of a hydraulic output modification for RE-1000 engine tests, and a 1000-hour endurance test, under contract, of a 3 kWe free-piston Stirling/alternator engine. A newly initiated space power technology feasibility demonstration effort addresses the capability of scaling a free-piston Stirling/alternator system to about 25 kWe; developing thermodynamic cycle efficiency greater than or equal to 70 percent of Carnot at temperature ratios on the order of 1.5 to 2.0; achieving a power conversion unit specific weight of 6 kg/kWe; operating with noncontacting gas bearings; and dynamically balancing the system. Planned engine and component design and test efforts are described.

  10. Illumination estimation via thin-plate spline interpolation.

    PubMed

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
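
    With SciPy, the core of such an estimator fits in a few lines. The sketch below (feature dimensions and data are placeholders, and the thumbnail-to-feature mapping is left abstract) interpolates illuminant chromaticity over a training set with a thin-plate spline kernel:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    feats = rng.random((200, 48))     # hypothetical 48-dim thumbnail features
    chroma = rng.random((200, 2))     # measured illuminant (r, g) per image

    # Thin-plate spline interpolation over the nonuniformly sampled space.
    tps = RBFInterpolator(feats, chroma, kernel='thin_plate_spline')

    # For a new image, the interpolated function gives the illuminant estimate.
    est_rg = tps(rng.random((1, 48)))[0]
    ```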

  11. FPGA-based real-time swept-source OCT systems for B-scan live-streaming or volumetric imaging

    NASA Astrophysics Data System (ADS)

    Bandi, Vinzenz; Goette, Josef; Jacomet, Marcel; von Niederhäusern, Tim; Bachmann, Adrian H.; Duelk, Marcus

    2013-03-01

    We have developed a Swept-Source Optical Coherence Tomography (Ss-OCT) system with high-speed, real-time signal processing on a commercially available Data-Acquisition (DAQ) board with a Field-Programmable Gate Array (FPGA). The Ss-OCT system simultaneously acquires OCT and k-clock reference signals at 500MS/s. From the k-clock signal of each A-scan we extract a remap vector for the k-space linearization of the OCT signal. The linear but oversampled interpolation is followed by a 2048-point FFT, additional auxiliary computations, and a data transfer to a host computer for real-time, live-streaming of B-scan or volumetric C-scan OCT visualization. We achieve a 100 kHz A-scan rate by parallelization of our hardware algorithms, which run on standard and affordable, commercially available DAQ boards. Our main development tool for signal analysis as well as for hardware synthesis is MATLAB® with add-on toolboxes and 3rd-party tools.
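
    The k-space linearization stage is simple to express offline in NumPy (names hypothetical; the FPGA pipeline itself is of course fixed-point and parallelized): build a uniform remap vector from the unwrapped k-clock phase, interpolate, then FFT:

    ```python
    import numpy as np

    def linearize_and_transform(oct_sig, kclock_phase, n_fft=2048):
        """Resample one A-scan to uniform k and transform to depth.

        oct_sig      : raw OCT fringe samples, uniform in time
        kclock_phase : unwrapped k-clock phase (monotonically increasing,
                       proportional to wavenumber), sampled with the signal
        """
        k_uniform = np.linspace(kclock_phase[0], kclock_phase[-1],
                                len(oct_sig))                 # remap vector
        linear = np.interp(k_uniform, kclock_phase, oct_sig)  # k-linearization
        return np.abs(np.fft.fft(linear, n_fft))[:n_fft // 2] # depth profile
    ```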

  12. DVD-COOP: Innovative Conjunction Prediction Using Voronoi-filter based on the Dynamic Voronoi Diagram of 3D Spheres

    NASA Astrophysics Data System (ADS)

    Cha, J.; Ryu, J.; Lee, M.; Song, C.; Cho, Y.; Schumacher, P.; Mah, M.; Kim, D.

    Conjunction prediction is one of the critical operations in space situational awareness (SSA). For geospace objects, common algorithms for conjunction prediction are usually based on all-pairwise checks, spatial hashing, or kd-trees. The computational load is usually reduced through filters; however, these leave a good chance of missing potential collisions between space objects. We present a novel algorithm that both guarantees no missed conjunctions and is efficient enough to answer a variety of spatial queries, including pairwise conjunction prediction. The algorithm takes only O(k log N) time for N objects in the worst case to answer conjunction queries, where k is a constant that grows linearly with the prediction time length. The proposed algorithm, named DVD-COOP (Dynamic Voronoi Diagram-based Conjunctive Orbital Object Predictor), is based on the dynamic Voronoi diagram of moving spherical balls in 3D space. The algorithm has a preprocessing phase consisting of two steps: the construction of an initial Voronoi diagram (taking O(N) time on average) and the construction of a priority queue for the events of topology changes in the Voronoi diagram (taking O(N log N) time in the worst case). The scalability of the proposed algorithm is also discussed. We hope that the proposed Voronoi approach will change the computational paradigm in spatial reasoning among space objects.

  13. Computer-Aided Engineering (CAE) Tool Assessment/Development

    DTIC Science & Technology

    1990-09-01

    Fragmentary OCR text; the recoverable content notes that gas is ionized, generating glow as well as acting as a plasma source, as represented by the SOCRATES code (Shuttle Orbiter Contamination Representation Accounting). The remainder of the scanned page is a distribution list of contractor and NASA addresses.

  14. Limb and gravity-darkening coefficients for the TESS satellite at several metallicities, surface gravities, and microturbulent velocities

    NASA Astrophysics Data System (ADS)

    Claret, A.

    2017-04-01

    Aims: We present new gravity- and limb-darkening coefficients for a wide range of effective temperatures, gravities, metallicities, and microturbulent velocities. These coefficients can be used in many different fields of stellar physics, such as synthetic light curves of eclipsing binaries and planetary transits, stellar diameters, line profiles in rotating stars, and others. Methods: The limb-darkening coefficients were computed specifically for the photometric system of the space mission TESS by adopting the least-squares method. In addition, linear and bi-parametric coefficients computed with the flux conservation method are also available. To take into account the effects of tidal and rotational distortions, we computed the passband gravity-darkening coefficients y(λ) using a general differential equation in which we consider the effects of convection and of the partial derivative (∂ln I(λ)/∂ln g)_Teff. Results: To generate the limb-darkening coefficients we adopt two stellar atmosphere models: ATLAS (plane-parallel) and PHOENIX (spherical, quasi-spherical, and r-method). The specific intensity distribution was fitted using five approaches: linear, quadratic, square root, logarithmic, and a more general one with four terms. Together these grids cover 19 metallicities ranging from 10^-5 up to 10 times the solar abundance, 0 ≤ log g ≤ 6.0, and 1500 K ≤ Teff ≤ 50 000 K. The calculations of the gravity-darkening coefficients were performed for all plane-parallel ATLAS models. Tables 2-29 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/600/A30
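
    The least-squares fitting of a limb-darkening law is straightforward to illustrate. The sketch below (NumPy; synthetic intensities) fits the quadratic law I(μ)/I(1) = 1 − a(1−μ) − b(1−μ)², one of the bi-parametric forms of the kind tabulated in such grids:

    ```python
    import numpy as np

    def fit_quadratic_law(mu, intensity):
        """Least-squares fit of I(mu)/I(1) = 1 - a(1-mu) - b(1-mu)^2.

        mu        : cosine of the emergent angle from the model atmosphere
        intensity : specific intensity at each mu, normalized to I(mu=1) = 1
        """
        x = 1.0 - mu
        A = np.column_stack([x, x ** 2])     # design matrix for (a, b)
        rhs = 1.0 - intensity                # 1 - I(mu) = a*x + b*x^2
        (a, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return a, b

    mu = np.linspace(0.05, 1.0, 20)
    a, b = fit_quadratic_law(mu, 1.0 - 0.45 * (1.0 - mu))   # recovers a≈0.45, b≈0
    ```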

  15. Vitamin D endocrine system after short-term space flight

    NASA Technical Reports Server (NTRS)

    Rhoten, William B. (Principal Investigator); Sergeev, Igor N. (Principal Investigator)

    1996-01-01

    The exposure of the body to microgravity during space flight causes a series of well-documented changes in Ca(2+) metabolism, yet the cellular/molecular mechanisms leading to these changes are poorly understood. There is some evidence for microgravity-induced alterations in the vitamin D endocrine system, which is known to be primarily involved in the regulation of Ca(2+) metabolism. Vitamin D-dependent Ca(2+) binding proteins, or calbindins, are believed to have a significant role in maintaining cellular Ca(2+) homeostasis. We used immunocytochemical, biochemical and molecular approaches to analyze the expression of calbindin-D28k and calbindin-D9k in kidneys and intestines of rats flown for 9 days aboard the Spacelab 3 mission. The effects of microgravity on calbindins in rats in space vs. 'grounded' animals (synchronous Animal Enclosure Module controls and tail suspension controls) were compared. Exposure to microgravity resulted in a significant decrease in calbindin-D28k content in the kidneys and calbindin-D9k in the intestine of flight and suspended animals, as measured by enzyme-linked immunosorbent assay (ELISA). Immunocytochemistry (ICC) in combination with quantitative computer image analysis was used to measure in situ the expression of calbindins in kidneys and intestine, and insulin in pancreas. There was a large decrease in the distal tubular cell-associated calbindin-D28k and absorptive cell-associated calbindin-D9k immunoreactivity in the space and suspension kidneys and intestine, as compared with matched ground controls. No consistent differences in pancreatic insulin immunoreactivity between space, suspension and ground controls were observed. There were significant correlations between results by quantitative ICC and ELISA. Western blot analysis showed no consistent changes in the low levels of intestinal and renal vitamin D receptors. These findings suggest that a decreased expression of calbindins after short-term exposure to microgravity and modelled weightlessness may affect cellular Ca(2+) homeostasis and contribute to the Ca(2+) and bone metabolism disorders induced by space flight.

  16. USSR and Eastern Europe Scientific Abstracts, Geophysics, Astronomy and Space, Number 384

    DTIC Science & Technology

    1976-11-15

    Fragmentary OCR text; the recoverable passages mention an instrument solemnly turned over to astronomers by its creators at the Leningrad Optical-Mechanical Combine, the prospect of purposeful regulation of climate by learning to evaluate precisely the dependence of climate on different factors, and measurements which, in combination with standard measurements of temperature and salinity, make it possible to compute the partial pressure of CO2 at the ocean surface.

  17. Characterization of process-induced damage in Cu/low-k interconnect structure by microscopic infrared spectroscopy with polarized infrared light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seki, Hirofumi, E-mail: Hirofumi-Seki@trc.toray.co.jp; Hashimoto, Hideki; Ozaki, Yukihiro

    Microscopic Fourier-transform infrared (FT-IR) spectra are measured for a Cu/low-k interconnect structure using polarized IR light for different widths of low-k spaces and Cu lines, and for different heights of Cu lines, on Si substrates. Although the widths of the Cu line and the low-k space are 70 nm each, considerably smaller than the wavelength of the IR light, FT-IR spectra of the low-k film were obtained for the Cu/low-k interconnect structure. A suitable method was thus established for measuring, with microscopic polarized IR light, the process-induced damage in a low-k film that is not detected by TEM-EELS (Transmission Electron Microscope-Electron Energy-Loss Spectroscopy). Based on the IR results, it was presumed that the FT-IR spectra mainly reflect structural changes in the sidewalls of the low-k films for Cu/low-k interconnect structures, and that the mechanism generating process-induced damage involves the formation of Si-OH groups in the low-k film when Si-CH3 bonds break during the fabrication processes. The Si-OH groups attract moisture and the OH peak intensity increases. It was concluded that the increase in OH groups in the low-k film is a sensitive indicator of low-k damage. We achieved characterization of process-induced damage that is not detected by TEM-EELS, and we expect the proposed method to be applicable to interconnects with line and space widths of 70 nm/70 nm and to the shorter scales of leading-edge devices. The location of process-induced damage and its mechanism for the Cu/low-k interconnect structure were revealed via the measurement method.

  18. KSC-99pp1226

    NASA Image and Video Library

    1999-10-06

    Nancy Nichols, principal of South Lake Elementary School, Titusville, Fla., joins students in teacher Michelle Butler's sixth grade class who are unwrapping computer equipment donated by Kennedy Space Center. South Lake is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  19. CC2 oscillator strengths within the local framework for calculating excitation energies (LoFEx).

    PubMed

    Baudin, Pablo; Kjærgaard, Thomas; Kristensen, Kasper

    2017-04-14

    In a recent work [P. Baudin and K. Kristensen, J. Chem. Phys. 144, 224106 (2016)], we introduced a local framework for calculating excitation energies (LoFEx), based on second-order approximated coupled cluster (CC2) linear-response theory. LoFEx is a black-box method in which a reduced excitation orbital space (XOS) is optimized to provide coupled cluster (CC) excitation energies at a reduced computational cost. In this article, we present an extension of the LoFEx algorithm to the calculation of CC2 oscillator strengths. Two different strategies are suggested, in which the size of the XOS is determined based on the excitation energy or the oscillator strength of the targeted transitions. The two strategies are applied to a set of medium-sized organic molecules in order to assess both the accuracy and the computational cost of the methods. The results show that CC2 excitation energies and oscillator strengths can be calculated at a reduced computational cost, provided that the targeted transitions are local compared to the size of the molecule. To illustrate the potential of LoFEx for large molecules, both strategies have been successfully applied to the lowest transition of the bivalirudin molecule (4255 basis functions) and compared with time-dependent density functional theory.

  20. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scannapieco, Anthony J.

    2016-06-21

    The purpose of Adaptive Mesh Refinement (AMR) is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional benefit of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch, and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists, at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and to coarsen zones in regions where the physics is modally sparse.
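
    A crude stand-in for the "modally dense" indicator is a local-variation flag, shown below for a 1D field (NumPy; this is an illustrative refinement criterion, not the mesh-potential algorithm described above):

    ```python
    import numpy as np

    def flag_for_refinement(u, dx, rel_threshold=0.1):
        """Flag zones whose local variation is a large fraction of the
        global range; such zones would be refined, others coarsened."""
        du_per_zone = np.abs(np.gradient(u, dx)) * dx
        return du_per_zone > rel_threshold * (u.max() - u.min())

    x = np.linspace(0.0, 1.0, 200)
    u = np.tanh((x - 0.5) / 0.02)                     # sharp front at x = 0.5
    refine = flag_for_refinement(u, dx=x[1] - x[0])   # True only near the front
    ```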

  1. Lessons learned in creating spacecraft computer systems: Implications for using Ada (R) for the space station

    NASA Technical Reports Server (NTRS)

    Tomayko, James E.

    1986-01-01

    Twenty-five years of spacecraft onboard computer development have resulted in a better understanding of the requirements for effective, efficient, and fault tolerant flight computer systems. Lessons from eight flight programs (Gemini, Apollo, Skylab, Shuttle, Mariner, Voyager, and Galileo) and three research programs (digital fly-by-wire, STAR, and the Unified Data System) are useful in projecting the computer hardware configuration of the Space Station and the ways in which the Ada programming language will enhance the development of the necessary software. The evolution of hardware technology, fault protection methods, and software architectures used in space flight is reviewed in order to provide insight into the pending development of such items for the Space Station.

  2. Dynamic Deployment Simulations of Inflatable Space Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2005-01-01

    The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LSDYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployments of three folded tube configurations. The CV method was found to be a simple and computationally efficient method that may be adequate for modeling slow inflation deployment, since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive, since it involves solving the three conservation equations of the fluid as well as handling complex fluid-structure interactions.

  3. A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2001-01-01

    An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.

  4. South Lake Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Nancy Nichols, principal of South Lake Elementary School, Titusville, Fla., joins students in teacher Michelle Butler's sixth grade class who are unwrapping computer equipment donated by Kennedy Space Center. South Lake is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  5. Cambridge Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Cambridge Elementary School, Cocoa, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Cambridge is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. Behind the children is Jim Thurston, a school volunteer and retired employee of USBI, who shared in the project. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  6. Joint 6D k-q Space Compressed Sensing for Accelerated High Angular Resolution Diffusion MRI.

    PubMed

    Cheng, Jian; Shen, Dinggang; Basser, Peter J; Yap, Pew-Thian

    2015-01-01

    High Angular Resolution Diffusion Imaging (HARDI) avoids the Gaussian diffusion assumption that is inherent in Diffusion Tensor Imaging (DTI), and is capable of characterizing complex white matter micro-structure with greater precision. However, HARDI methods such as Diffusion Spectrum Imaging (DSI) typically require significantly more signal measurements than DTI, resulting in prohibitively long scanning times. One of the goals in HARDI research is therefore to improve estimation of quantities such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF) with a limited number of diffusion-weighted measurements. A popular approach to this problem, Compressed Sensing (CS), affords highly accurate signal reconstruction using significantly fewer (sub-Nyquist) data points than required traditionally. Existing approaches to CS diffusion MRI (CS-dMRI) mainly focus on applying CS in the q-space of diffusion signal measurements and fail to take into consideration information redundancy in k-space. In this paper, we propose a framework, called 6-Dimensional Compressed Sensing diffusion MRI (6D-CS-dMRI), for reconstruction of the diffusion signal and the EAP from data sub-sampled in both 3D k-space and 3D q-space. To our knowledge, 6D-CS-dMRI is the first work that applies compressed sensing in the full 6D k-q space and reconstructs the diffusion signal in the full continuous q-space and the EAP in continuous displacement space. Experimental results on synthetic and real data demonstrate that, compared with full DSI sampling in k-q space, 6D-CS-dMRI yields excellent diffusion signal and EAP reconstruction with low root-mean-square error (RMSE) using 11 times fewer samples (3-fold reduction in k-space and 3.7-fold reduction in q-space).

  7. The VASIMR® VF-200-1 ISS Experiment as a Laboratory for Astrophysics

    NASA Technical Reports Server (NTRS)

    Glover, Tim W.; Squire, Jared P.; Longmier, Benjamin; Cassady, Leonard; Ilin, Andrew; Carter, Mark; Olsen, Chris S.; McCaskill, Greg; Diaz, Franklin Chang; Girimaji, Sharath; et al.

    2010-01-01

    The VASIMR® Flight Experiment (VF-200-1) will be tested in space aboard the International Space Station (ISS) in about four years. It will consist of two 100 kW parallel plasma engines with opposite magnetic dipoles, resulting in a near zero-torque magnetic system. Electrical energy will come from ISS at low power level, be stored in batteries and used to fire the engine at 200 kW. The VF-200-1 project will provide a unique opportunity on the ISS National Laboratory for astrophysicists and space physicists to study the dynamic evolution of an expanding and reconnecting plasma loop. Here, we review the status of the project and discuss our current plans for computational modeling and in situ observation of a dynamic plasma loop on an experimental platform in low-Earth orbit. The VF-200-1 project is still in the early stages of development and we welcome new collaborators.

  8. Results from Testing Crew-Controlled Surface Telerobotics on the International Space Station

    NASA Technical Reports Server (NTRS)

    Bualat, Maria; Schreckenghost, Debra; Pacis, Estrellina; Fong, Terrence; Kalar, Donald; Beutter, Brent

    2014-01-01

    During Summer 2013, the Intelligent Robotics Group at NASA Ames Research Center conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover. The tests simulated portions of a proposed lunar mission, in which an astronaut in lunar orbit would remotely operate a planetary rover to deploy a radio telescope on the lunar far side. Over the course of Expedition 36, three ISS astronauts remotely operated the NASA "K10" planetary rover in an analogue lunar terrain located at the NASA Ames Research Center in California. The astronauts used a "Space Station Computer" (crew laptop), a combination of supervisory control (command sequencing) and manual control (discrete commanding), and Ku-band data communications to command and monitor K10 for 11 hours. In this paper, we present and analyze test results, summarize user feedback, and describe directions for future research.

  9. The VASIMR® VF-200-1 ISS Experiment as a Laboratory for Astrophysics

    NASA Astrophysics Data System (ADS)

    Glover, T.; Squire, J. P.; Longmier, B. W.; Carter, M. D.; Ilin, A. V.; Cassady, L. D.; Olsen, C. S.; Chang Díaz, F.; McCaskill, G. E.; Bering, E. A.; Garrison, D.; Girimaji, S.; Araya, D.; Morin, L.; Shebalin, J. V.

    2010-12-01

    The VASIMR® Flight Experiment (VF-200-1) will be tested in space aboard the International Space Station (ISS) in about four years. It will consist of two 100 kW parallel plasma engines with opposite magnetic dipoles, resulting in a near zero-torque magnetic system. Electrical energy will come from ISS at low power level, be stored in batteries and used to fire the engine at 200 kW. The VF-200-1 project will provide a unique opportunity on the ISS National Laboratory for astrophysicists and space physicists to study the dynamic evolution of an expanding and reconnecting plasma loop. Here, we review the status of the project and discuss our current plans for computational modeling and in situ observation of a dynamic plasma loop on an experimental platform in low-Earth orbit. The VF-200-1 project is still in the early stages of development and we welcome new collaborators.

  10. K-->pipi amplitudes from lattice QCD with a light charm quark.

    PubMed

    Giusti, L; Hernández, P; Laine, M; Pena, C; Wennekers, J; Wittig, H

    2007-02-23

    We compute the leading-order low-energy constants of the DeltaS=1 effective weak Hamiltonian in the quenched approximation of QCD with up, down, strange, and charm quarks degenerate and light. They are extracted by comparing the predictions of finite-volume chiral perturbation theory with lattice QCD computations of suitable correlation functions carried out with quark masses ranging from a few MeV up to half of the physical strange mass. We observe a DeltaI=1/2 enhancement in this corner of the parameter space of the theory. Although matching with the experimental result is not observed for the DeltaI=1/2 amplitude, our computation suggests large QCD contributions to the physical DeltaI=1/2 rule in the GIM limit, and represents the first step to quantify the role of the charm-quark mass in K-->pipi amplitudes. The use of fermions with an exact chiral symmetry is an essential ingredient in our computation.

  11. Computation on collisionless steady-state plasma flow past a charged disk

    NASA Technical Reports Server (NTRS)

    Parker, L. W.

    1976-01-01

    A computer method is presented using the 'inside-out' approach, for predicting the structure of the disturbed zone near a moving body in space. The approach uses fewer simplifying assumptions than other available methods, and is applicable to large ranges of the values of body and plasma parameters. Two major advances concerning 3-dimensional bodies are that thermal motions of ions as well as of electrons are treated realistically by following their trajectories in the electric field, and the technique for achieving self-consistency is promising for very large bodies. Three sample solutions were obtained for a disk-shaped body, charged negatively to a potential 4kT/e. With ion Mach number 4, and equal ion and electron temperatures, the wakes of a relatively small body (radius 5 Debye lengths) and a relatively large body (radius 100 Debye lengths) both begin to fill up between 2 and 3 body radii downstream. For the large body there is in addition a potential well (about 6kT/e deep) behind the body. Increasing the ion Mach number to 8 for the large body causes the potential well to become wider and longer but not deeper. For the large body, the quasineutrality assumption is validated outside of a cone-shaped region in the very near wake. For the large as well as the small body, the disturbed zone behind the body extends transversely no more than 2 or 3 body radii, a result of significance for the design of spacecraft boom instrumentation.

  12. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
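
    The two sampling schemes contrasted here can be stated in a few lines of Python (illustrative sketch, not the study's evaluation code): fixed sampling keeps every w-th k-mer starting position, while minimizer sampling keeps, in each window of w consecutive k-mers, the position of the lexicographically smallest one:

    ```python
    def fixed_sampling(seq, k, w):
        """Positions of every w-th k-mer."""
        return set(range(0, len(seq) - k + 1, w))

    def minimizer_sampling(seq, k, w):
        """Positions of the lexicographically smallest k-mer
        in each window of w consecutive k-mers."""
        kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
        picked = set()
        for start in range(len(kmers) - w + 1):
            window = kmers[start:start + w]
            picked.add(start + min(range(w), key=window.__getitem__))
        return picked

    seq = "ACGTACGTGGTACCA"
    print(sorted(fixed_sampling(seq, k=4, w=3)))      # [0, 3, 6, 9]
    print(sorted(minimizer_sampling(seq, k=4, w=3)))
    ```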

  13. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  14. Numerical computation of diffusion on a surface.

    PubMed

    Schwartz, Peter; Adalsteinsson, David; Colella, Phillip; Arkin, Adam Paul; Onsum, Matthew

    2005-08-09

    We present a numerical method for computing diffusive transport on a surface derived from image data. Our underlying discretization method uses a Cartesian grid embedded boundary method for computing the volume transport in a region consisting of all points a small distance from the surface. We obtain a representation of this region from image data by using a front propagation computation based on level set methods for solving the Hamilton-Jacobi and eikonal equations. We demonstrate that the method is second-order accurate in space and time and is capable of computing solutions on complex surface geometries obtained from image data of cells.

  15. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important for determining the phase behavior of a mixture. This work proposes a reliable and accurate method to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or, alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly concern the derivatives of the Jacobian matrix, the convergence criteria, and the damping coefficient. All equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, the average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above-mentioned average absolute errors are 129.32 kPa and 2.45 K, while PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
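
    A generic damped Newton-Raphson iteration of the kind being improved here — all variables updated simultaneously, with a damping coefficient and an explicit convergence criterion — can be sketched as follows (NumPy, finite-difference Jacobian; illustrative only, not the paper's EoS-specific solver):

    ```python
    import numpy as np

    def damped_newton(F, x0, damping=0.5, tol=1e-10, max_iter=100, eps=1e-8):
        """Solve F(x) = 0 by damped Newton-Raphson."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            f = F(x)
            if np.linalg.norm(f) < tol:              # convergence criterion
                return x
            J = np.empty((len(f), len(x)))           # finite-difference Jacobian
            for j in range(len(x)):
                xp = x.copy()
                xp[j] += eps
                J[:, j] = (F(xp) - f) / eps
            x = x - damping * np.linalg.solve(J, f)  # damped simultaneous update
        return x

    # e.g. intersection of a circle with the line x = y
    sol = damped_newton(lambda v: np.array([v[0]**2 + v[1]**2 - 2.0,
                                            v[0] - v[1]]), [2.0, 0.5])
    ```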

  16. Three pillars for achieving quantum mechanical molecular dynamics simulations of huge systems: Divide-and-conquer, density-functional tight-binding, and massively parallel computation.

    PubMed

    Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi

    2016-08-05

    The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program; a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
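
    The Fermi-level determination that the paper parallelizes is, at its core, a one-dimensional root-finding problem: choose the chemical potential so that the summed Fermi-Dirac occupations match the electron count. A simple bisection sketch follows (NumPy; the actual DC-DFTB-K algorithm is interpolation-based and distributed):

    ```python
    import numpy as np

    def occupations(eps, mu, beta):
        """Fermi-Dirac occupations (factor 2 for closed-shell spin)."""
        x = np.clip(beta * (eps - mu), -500.0, 500.0)   # avoid exp overflow
        return 2.0 / (1.0 + np.exp(x))

    def find_fermi_level(eps, n_elec, beta=1000.0, tol=1e-10):
        """Bisect for mu such that the occupations sum to n_elec."""
        lo, hi = eps.min() - 1.0, eps.max() + 1.0
        while hi - lo > tol:
            mu = 0.5 * (lo + hi)
            if occupations(eps, mu, beta).sum() > n_elec:
                hi = mu              # too many electrons: lower mu
            else:
                lo = mu
        return 0.5 * (lo + hi)

    mu = find_fermi_level(np.linspace(-1.0, 1.0, 50), n_elec=40)
    ```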

  17. An urban area minority outreach program for K-6 children in space science

    NASA Astrophysics Data System (ADS)

    Morris, P.; Garza, O.; Lindstrom, M.; Allen, J.; Wooten, J.; Sumners, C.; Obot, V.

    The Houston area has minority populations with significant school dropout rates. This is similar to other major cities in the United States and elsewhere in the world where there are significant minority populations from rural areas. The student dropout rates are associated in many instances with the absence of educational support opportunities either from the school and/or from the family. This is exacerbated if the student has poor English language skills. To address this issue, a NASA minority university initiative enabled us to develop a broad-based outreach program that includes younger children and their parents at a primarily Hispanic inner city charter school. The program at the charter school was initiated by teaching computer skills to the older children, who in turn taught parents. The older children were subsequently asked to help teach a computer literacy class for mothers with 4-5 year old children. The computers initially intimidated the mothers as most had limited educational backgrounds and English language skills. To practice their newly acquired computer skills and learn about space science, the mothers and their children were asked to pick a space project and investigate it using their computer skills. The mothers and their children decided to learn about black holes. The project included designing space suits for their children so that they could travel through space and observe black holes from a closer proximity. The children and their mothers learned about computers and how to use them for educational purposes. In addition, they learned about black holes and the importance of space suits in protecting astronauts as they investigated space. The parents are proud of their children and their achievements. By including the parents in the program, they have a greater understanding of the importance of their children staying in school and the opportunities for careers in space science and technology. For more information on our overall program, the charter school and their other space science related activities, visit their web site, http://www.tccc-ryss.org/solarsys/solarmingrant.htm

  18. Radical O-O coupling reaction in diferrate-mediated water oxidation studied using multireference wave function theory.

    PubMed

    Kurashige, Yuki; Saitow, Masaaki; Chalupský, Jakub; Yanai, Takeshi

    2014-06-28

    The O-O (oxygen-oxygen) bond formation is widely recognized as a key step of the catalytic reaction of dioxygen evolution from water. Recently, the water oxidation catalyzed by potassium ferrate (K2FeO4) was investigated on the basis of experimental kinetic isotope effect analysis assisted by density functional calculations, revealing the intramolecular oxo-coupling mechanism within a di-iron(VI) intermediate, or diferrate [Sarma et al., J. Am. Chem. Soc., 2012, 134, 15371]. Here, we report a detailed examination of this diferrate-mediated O-O bond formation using scalable multireference electronic structure theory. High-dimensional correlated many-electron wave functions beyond the one-electron picture were computed using the ab initio density matrix renormalization group (DMRG) method along the O-O bond formation pathway. The necessity of using a large active space arises from the description of complex electronic interactions and varying redox states, both associated with two-center antiferromagnetic multivalent iron-oxo coupling. Dynamic correlation effects on top of the active-space DMRG wave functions were accounted for additively by complete active space second-order perturbation (CASPT2) and multireference configuration interaction (MRCI) based methods, which were recently introduced by our group. These multireference methods were capable of handling the double-shell effects in the extended active-space treatment. The calculations with an active space of 36 electrons in 32 orbitals, which is far beyond the conventional limit, provide a quantitatively reliable prediction of potential energy profiles and confirmed the viability of the direct oxo coupling. The bonding nature of Fe-O and the dual bonding character of O-O are discussed using natural orbitals.

  19. Respiratory motion resolved, self-gated 4D-MRI using Rotating Cartesian K-space (ROCK)

    PubMed Central

    Han, Fei; Zhou, Ziwu; Cao, Minsong; Yang, Yingli; Sheng, Ke; Hu, Peng

    2017-01-01

    Purpose To propose and validate a respiratory motion resolved, self-gated (SG) 4D-MRI technique to assess patient-specific breathing motion of abdominal organs for radiation treatment planning. Methods The proposed 4D-MRI technique was based on the balanced steady-state free-precession (bSSFP) technique and 3D k-space encoding. A novel ROtating Cartesian K-space (ROCK) reordering method was designed that incorporates a repeatedly sampled k-space centerline as the SG motion surrogate and allows for retrospective k-space data binning into different respiratory positions based on the amplitude of the surrogate. The multiple respiratory-resolved 3D k-space data were subsequently reconstructed using a joint parallel imaging and compressed sensing method with spatial and temporal regularization. The proposed 4D-MRI technique was validated using a custom-made dynamic motion phantom and was tested in 6 healthy volunteers, in whom quantitative diaphragm and kidney motion measurements based on 4D-MRI images were compared with those based on 2D-CINE images. Results The 5-minute 4D-MRI scan offers high-quality volumetric images with 1.2 × 1.2 × 1.6 mm3 resolution and 8 respiratory positions, with good soft-tissue contrast. In phantom experiments with a triangular motion waveform, the motion amplitude measurements based on 4D-MRI were 11.89% smaller than the ground truth, whereas a −12.5% difference was expected due to data binning effects. In healthy volunteers, the differences between the measurements based on 4D-MRI and those based on 2D-CINE were 6.2±4.5% for the diaphragm, and 8.2±4.9% and 8.9±5.1% for the right and left kidney, respectively. Conclusion The proposed 4D-MRI technique provides high-resolution, high-quality, respiratory motion resolved 4D images with good soft-tissue contrast that are free of the "stitching" artifacts usually seen in 4D-CT and in 4D-MRI based on resorting 2D-CINE. It could be used to visualize and quantify abdominal organ motion for MRI-based radiation treatment planning. PMID:28133752
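
    The retrospective binning step can be sketched directly (NumPy; surrogate extraction and reconstruction are omitted, and amplitude-quantile binning is one plausible choice of the amplitude-based assignment described above):

    ```python
    import numpy as np

    def bin_by_amplitude(surrogate, n_bins=8):
        """Assign each readout to a respiratory bin by surrogate amplitude.

        surrogate : (n_readouts,) self-gating amplitude from the repeatedly
                    sampled k-space centerline
        Quantile edges give each bin an equal share of the acquired data.
        """
        edges = np.quantile(surrogate, np.linspace(0.0, 1.0, n_bins + 1))
        return np.searchsorted(edges[1:-1], surrogate, side='right')

    t = np.linspace(0.0, 300.0, 5000)                # 5-minute scan
    surrogate = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.randn(t.size)
    labels = bin_by_amplitude(surrogate)   # k-space lines then gridded per bin
    ```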

  20. Nonlinear power flow feedback control for improved stability and performance of airfoil sections

    DOEpatents

    Wilson, David G.; Robinett, III, Rush D.

    2013-09-03

    A computer-implemented method of determining the pitch stability of an airfoil system, comprising using a computer to numerically integrate a differential equation of motion that includes terms describing PID controller action. In one model, the differential equation characterizes the time-dependent response of the airfoil's pitch angle, α. The computer model calculates limit cycles of the model, which represent the stability boundaries of the airfoil system. Once the stability boundary is known, feedback control can be implemented by using, for example, a PID controller to control a feedback actuator. The method allows the PID controller gain constants, K_I, K_p, and K_d, to be optimized. This permits operation closer to the stability boundaries, while preventing the physical apparatus from unintentionally crossing them. Operating closer to the stability boundaries permits greater power efficiencies to be extracted from the airfoil system.
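
    The following toy sketch, with entirely hypothetical gains and coefficients, shows the kind of computation the patent describes: explicit numerical integration of a pitch equation whose right-hand side includes PID action, so one can watch whether the closed loop settles or limit-cycles.

      # All gains and aeroelastic coefficients below are invented for illustration.
      Kp, Ki, Kd = 4.0, 1.0, 0.8                 # PID gain constants
      dt, alpha, alpha_dot, integ = 1e-3, 0.2, 0.0, 0.0

      for _ in range(10_000):
          u = -(Kp * alpha + Ki * integ + Kd * alpha_dot)   # PID action
          # cubic term mimics a nonlinear aerodynamic restoring force
          alpha_ddot = -0.5 * alpha_dot - 2.0 * alpha - 5.0 * alpha**3 + u
          integ += alpha * dt
          alpha_dot += alpha_ddot * dt
          alpha += alpha_dot * dt

      print(alpha)    # decays toward 0 when the closed loop is stable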

  1. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE PAGES

    Huang, Hongying; Chen, Zheng; Li, Jin; ...

    2016-08-23

    In this study, we examine the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):675–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically, we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
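
    For orientation, the defining ingredient of the DDG family is the numerical flux for the solution derivative at a cell interface; as best recalled from Liu and Yan (2009), with [.] the jump, {.} the average, and Δx the mesh size, it has roughly the form below (the admissibility conditions on β0, β1 and the variants' correction terms are given in the cited papers):

      \widehat{u_x} = \beta_0 \frac{[u]}{\Delta x} + \{u_x\} + \beta_1\,\Delta x\,[u_{xx}]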

  3. Grassmann phase space methods for fermions. I. Mode theory

    NASA Astrophysics Data System (ADS)

    Dalton, B. J.; Jeffers, J.; Barnett, S. M.

    2016-07-01

    In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. The theory of Grassmann phase space methods for fermions based on separate modes is developed, showing how the distribution function is defined and used to determine quantum correlation functions, Fock state populations and coherences via Grassmann phase space integrals, and how the Fokker-Planck equations are obtained and then converted into equivalent Ito equations for stochastic Grassmann variables. The fermion distribution function is an even Grassmann function, and is unique. The number of c-number Wiener increments involved is 2n² if there are n modes; the situation is somewhat different from the bosonic c-number case, where only 2n Wiener increments are involved. In addition, the sign of the drift term in the Ito equation is reversed, and the diffusion matrix in the Fokker-Planck equation is anti-symmetric rather than symmetric. The un-normalised B distribution is of particular importance for determining Fock state populations and coherences, and as pointed out by Plimak, Collett and Olsen, the drift vector in its Fokker-Planck equation depends only linearly on the Grassmann variables. Using this key feature we show how the Ito stochastic equations can be solved numerically for finite times in terms of c-number stochastic quantities. Averages of products of Grassmann stochastic variables at the initial time are also involved, but these are determined from the initial conditions for the quantum state. The detailed approach to the numerics is outlined, showing that (apart from standard issues in such numerics) numerical calculations for Grassmann phase space theories of fermion systems could be carried out without needing to represent Grassmann phase space variables on the computer, involving only processes using c-numbers. We compare our approach to that of Plimak, Collett and Olsen and show that the two approaches differ. As a simple test case we apply the B distribution theory and solve the Ito stochastic equations to demonstrate coupling between degenerate Cooper pairs in a four-mode fermionic system involving spin-conserving interactions between the spin-1/2 fermions, where modes with momenta −k, +k, each associated with spin-up and spin-down states, are involved.

  4. New horizons in mouse immunoinformatics: reliable in silico prediction of mouse class I histocompatibility major complex peptide binding affinity.

    PubMed

    Hattotuwagama, Channa K; Guan, Pingping; Doytchinova, Irini A; Flower, Darren R

    2004-11-21

    Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatic disciplines. Predictive computational models, based on QSAR technology, of peptide-major histocompatibility complex (MHC) binding affinity have now become a vital component of modern computational immunovaccinology. Historically, such approaches have been built around semi-qualitative classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide-protein affinity, was used here to identify the sequence dependence of peptide binding specificity for three mouse class I MHC alleles: H2-D(b), H2-K(b) and H2-K(k). As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online (http://www.jenner.ac.uk/MHCPred).
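
    A minimal sketch of the additive idea: binding affinity is modeled as a sum of position-specific amino acid contributions fitted from training peptides. The peptides and pIC50 values below are invented, and ordinary least squares stands in for the partial-least-squares fit normally used with this method.

      import numpy as np

      AA = "ACDEFGHIKLMNPQRSTVWY"

      def encode(pep):            # one-hot: 9 positions x 20 amino acids
          x = np.zeros(9 * 20)
          for i, a in enumerate(pep):
              x[i * 20 + AA.index(a)] = 1.0
          return x

      peps = ["ASNENMETM", "FAPGNYPAL", "SDYEGRLIQ"]   # hypothetical 9-mers
      y = np.array([7.2, 8.1, 5.9])                    # hypothetical pIC50s
      X = np.array([encode(p) for p in peps])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # per-position terms
      print(X[0] @ coef)                               # additive prediction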

  5. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    NASA Astrophysics Data System (ADS)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specularly reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
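
    For a patchwork of diffusely reflecting surfaces, the radiosity method named above reduces to a linear energy balance B = E + R F B. The toy sketch below solves it for three patches with invented form factors and reflectances.

      import numpy as np

      F = np.array([[0.0, 0.6, 0.4],      # F[i, j]: fraction of energy
                    [0.5, 0.0, 0.5],      # leaving patch i that reaches j
                    [0.4, 0.6, 0.0]])
      R = np.diag([0.8, 0.7, 0.9])        # diffuse reflection coefficients
      E = np.array([1.0, 0.0, 0.0])       # only patch 0 is driven directly

      B = np.linalg.solve(np.eye(3) - R @ F, E)
      print(B)                            # steady-state energy flux per patch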

  6. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and a partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data are first recovered from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the PS model is used to obtain partial k-t data. The parallel imaging method is then used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method is shown to achieve high-quality reconstructions with reduction factors of up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
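
    A bare-bones sketch of the partial-separability idea: the dynamic series, stacked as a voxels-by-frames Casorati matrix, is well approximated by a low-rank spatial-temporal factorization. The sizes and rank below are arbitrary toy values, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      C = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 40))
      U, s, Vt = np.linalg.svd(C, full_matrices=False)

      L = 8                                 # assumed model order
      C_ps = (U[:, :L] * s[:L]) @ Vt[:L]    # spatial x temporal factors
      print(np.linalg.norm(C - C_ps) / np.linalg.norm(C))  # ~0 at true rank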

  7. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.

  8. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  9. Calculation of Free-Atom Fractions in Hydrocarbon-Fueled Rocket Engine Plume

    NASA Technical Reports Server (NTRS)

    Verma, Satyajit

    2006-01-01

    Free-atom fractions (Beta) of nine elements are calculated in the exhaust plumes of CH4-oxygen and RP-1-oxygen fueled rocket engines using the free energy minimization method. The Chemical Equilibrium and Applications (CEA) computer program developed by the Glenn Research Center, NASA, is used for this purpose. Data on the variation of Beta in both fuels as a function of temperature (1600 K to 3100 K) and oxygen-to-fuel ratio (1.75 to 2.25 by weight) are presented in both tabular and graphical forms. A recommendation is made for the Beta value of a tenth element, palladium. The CEA computer code was also run to compare with experimentally determined Beta values reported in the literature for some of these elements. A reasonable agreement, within a factor of three, between the calculated and reported values is observed. Values reported in this work will be used as a first approximation for pilot rocket engine testing studies at the Stennis Space Center for at least six elements (Al, Ca, Cr, Cu, Fe, and Ni) until experimental values are generated. The current estimates will be improved when more complete thermodynamic data on the remaining four elements (Ag, Co, Mn, and Pd) are added to the database. A critique of the CEA code is also included.

  10. Development of numerical methods for overset grids with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1995-01-01

    Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for any general complex configurations. The computer codes developed were tested on different complex configurations with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient and user-friendly codes were produced for grid generation, flow solution and force and moment computation.

  11. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    PubMed

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water, energy, space, and cost efficient system for growing plants in constrained spaces or land exhausted areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plants growth in order to assure optimal nutritional uptake and tomato production. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Two orders of magnitude speedup over a serial CPU implementation is shown, and speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
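
    The counting kernel at the heart of such a Monte Carlo estimate is simple, as the sketch below shows for a single epoch with isotropic Gaussian position errors (all numbers invented); the expensive part that the GPU parallelizes is propagating every sample through the dynamics over the whole window.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1_000_000
      mu_a, mu_b = np.zeros(3), np.array([0.05, 0.0, 0.0])   # km
      sig_a, sig_b, r_hbr = 0.02, 0.03, 0.01                 # km

      xa = mu_a + sig_a * rng.standard_normal((n, 3))
      xb = mu_b + sig_b * rng.standard_normal((n, 3))
      hits = np.linalg.norm(xa - xb, axis=1) < r_hbr   # combined radius test
      print(hits.mean())                   # collision probability estimate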

  13. Reactive collisions for NO(2Π) + N(4S) at temperatures relevant to the hypersonic flight regime.

    PubMed

    Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus

    2017-01-18

    The NO(X2Π) + N(4S) reaction, which occurs entirely in the triplet manifold of N2O, is investigated using quasiclassical trajectories and quantum simulations. Fully dimensional potential energy surfaces for the 3A' and 3A'' states are computed at the MRCI+Q level of theory and are represented using a reproducing kernel Hilbert space. The N-exchange and N2-formation channels are followed using the multi-state adiabatic reactive molecular dynamics method. Up to 5000 K these reactions occur predominantly on the N2O 3A'' surface. However, for higher temperatures the contributions of the 3A' and 3A'' states are comparable and the final state distributions are far from thermal equilibrium. From the trajectory simulations a new set of thermal rate coefficients up to 20 000 K is determined. Comparison of the quasiclassical trajectory and quantum simulations shows that a classical description is a good approximation, as determined from the final state analysis.

  14. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are the two main sources that degrade radial imaging quality. For a given fixed number of k-space projections, the data distributions along the radial and angular directions influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory is proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts the projections alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. These two trajectories were also compared at different sampling frequencies. An SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons with different white noise levels show that the two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a way toward undersampled reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.
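
    A small sketch of how such a trajectory might be generated: alternate spokes are shifted along their own direction through the k-space center, staggering samples azimuthally between neighboring spokes. The shift fraction here is a guess for illustration, not the paper's value.

      import numpy as np

      n_spokes, n_read, delta = 64, 128, 0.25
      m = np.arange(n_read) - n_read // 2          # radial sample index
      spokes = []
      for s in range(n_spokes):
          theta = np.pi * s / n_spokes             # uniform angular spacing
          r = m + (delta if s % 2 else -delta)     # alternating radial shift
          spokes.append(np.stack([r * np.cos(theta), r * np.sin(theta)], 1))
      traj = np.array(spokes)                      # (spoke, readout, kx/ky)
      print(traj.shape)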

  15. Numerical studies of the Bethe-Salpeter equation for a two-fermion bound state

    NASA Astrophysics Data System (ADS)

    de Paula, W.; Frederico, T.; Salmè, G.; Viviani, M.

    2018-03-01

    Some recent advances on the solution of the Bethe-Salpeter equation (BSE) for a two-fermion bound system directly in Minkowski space are presented. The calculations are based on the expression of the Bethe-Salpeter amplitude in terms of the so-called Nakanishi integral representation and on the light-front projection (i.e. the integration over the light-front variable k⁻ = k⁰ − k³). The latter technique allows for the analytically exact treatment of the singularities plaguing the two-fermion BSE in Minkowski space. The good agreement observed between our results and those obtained using other existing numerical methods, based on both Minkowski and Euclidean space techniques, fully corroborates our analytical treatment.

  16. Computing the scalar field couplings in 6D supergravity

    NASA Astrophysics Data System (ADS)

    Saidi, El Hassan

    2008-11-01

    Using non-chiral supersymmetry in 6D space-time, we compute the explicit expression of the metric of the scalar manifold SO(1,1)×SO(4,20)/(SO(4)×SO(20)) of ten-dimensional type IIA superstring theory on a generic K3. We consider as well the scalar field self-couplings in the general case where the non-chiral 6D supergravity multiplet is coupled to generic n vector supermultiplets with moduli space SO(1,1)×SO(4,n)/(SO(4)×SO(n)). We also work out a dictionary giving a correspondence between hyper-Kähler geometry and the Kähler geometry of the Coulomb branch of 10D type IIA on Calabi-Yau threefolds. Other features are also discussed.

  17. SU-G-IeP3-04: Effective Dose Measurements in Fast Kvp Switch Dual Energy Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raudabaugh, J; Moore, B; Nguyen, G

    2016-06-15

    Purpose: The objective of this study was two-fold: (a) to test a new approach to approximating organ dose by using the effective energy of the combined 80kV/140kV beam in dual-energy (DE) computed tomography (CT), and (b) to derive the effective dose (ED) of the abdomen-pelvis protocol in DECT. Methods: A commercial dual-energy CT scanner was employed using a fast-kV-switch abdomen/pelvis protocol alternating between 80 kV and 140 kV. MOSFET detectors were used for organ dose measurements. First, an experimental validation of the dose equivalency between MOSFET and ion chamber (as a gold standard) was performed using a CTDI phantom. Second, the ED of DECT scans was measured using MOSFET detectors and an anthropomorphic phantom; ED was calculated for an abdomen/pelvis scan using ICRP 103 tissue weighting factors, and was also computed using the AAPM dose-length product (DLP) method for comparison with the MOSFET value. Results: The effective energy of the combined beam was determined as 42.9 kV from half-value layer (HVL) measurement. ED for the dual-energy scan was calculated as 16.49 ± 0.04 mSv by the MOSFET method and 14.62 mSv by the DLP method. Conclusion: Tissue dose in the center of the CTDI body phantom was 1.71 ± 0.01 cGy (ion chamber) and 1.71 ± 0.06 cGy (MOSFET), which validated the use of the effective energy method for organ dose estimation. The ED of 16.49 ± 0.04 mSv by MOSFET versus 14.62 mSv by DLP suggests that the DLP method provides a reasonable approximation to the ED.
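
    The two effective-dose routes compared above are easy to mirror numerically. In the sketch below the DLP and conversion coefficient are chosen so that the DLP route reproduces the quoted 14.62 mSv, while the organ doses and the (deliberately truncated) ICRP 103 weight list are hypothetical, so the weighted sum is only a partial illustration.

      w = {"colon": 0.12, "stomach": 0.12, "bladder": 0.04, "gonads": 0.08}
      h = {"colon": 17.0, "stomach": 16.5, "bladder": 15.0, "gonads": 14.0}

      ed_weighted = sum(w[t] * h[t] for t in w)   # partial ICRP 103 sum, mSv
      dlp, k_abd = 975.0, 0.015                   # mGy*cm; mSv/(mGy*cm)
      print(round(ed_weighted, 2), round(dlp * k_abd, 2))   # -> 5.74 14.62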

  18. An Efficient Algorithm for TUCKALS3 on Data with Large Numbers of Observation Units.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1992-01-01

    A modification of the TUCKALS3 algorithm is proposed that handles three-way arrays of order I x J x K for any I. The reduced work space needed for storing data and increased execution speed make the modified algorithm very suitable for use on personal computers. (SLD)

  19. Visual Analytics of integrated Data Systems for Space Weather Purposes

    NASA Astrophysics Data System (ADS)

    Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo

    Analysis of information from multiple data sources obtained through high-resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and large measurement extension, is key to studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time-series generalization introducing the concept of a generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this representation, each generalized numerical lattice carries post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size, and a post-analytical measure (e.g., the autocorrelation, the Hurst exponent, etc.) [1]. From this generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application to space science data, highlighting the possibility of a real-time expert analysis system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of an automatic radio-burst monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent while scanning the time series with a short window before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The prototype for visual analytics is implemented in the Compute Unified Device Architecture (CUDA), using Nvidia K20 graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al., doi:10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al., doi:10.1016/j.jastp.2010.09.030, 2011.
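
    Since DFA carries the analytical load here, a minimal implementation helps fix ideas: integrate the series, detrend it in windows, and read the scaling exponent off a log-log fit. The test signal is synthetic white noise, for which the exponent should come out near 0.5.

      import numpy as np

      def dfa_exponent(x, sizes):
          y = np.cumsum(x - np.mean(x))            # integrated profile
          F = []
          for n in sizes:
              rms = []
              for w in range(len(y) // n):
                  seg = y[w * n:(w + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)
                  rms.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.polyfit(np.log(sizes), np.log(F), 1)[0]

      x = np.random.default_rng(2).standard_normal(4096)
      print(dfa_exponent(x, [16, 32, 64, 128, 256]))   # ~0.5 for white noise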

  20. Transport properties of N2 gas at cryogenic temperatures. [computation of viscosity and thermal conductivity

    NASA Technical Reports Server (NTRS)

    Pearson, W. E.

    1974-01-01

    The viscosity and thermal conductivity of nitrogen gas for the temperature range 5 K to 135 K have been computed from the second Chapman-Enskog approximation. Quantum effects, which become appreciable at the lower temperatures, are included by utilizing collision integrals based on quantum theory. A Lennard-Jones (12-6) potential was assumed. The computations yield viscosities about 20 percent lower than those predicted for the high end of this temperature range by the method of corresponding states, but the agreement is excellent when the computed values are compared with existing experimental data.
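
    For scale, the familiar engineering form of the first Chapman-Enskog viscosity approximation can be evaluated in a few lines. The Lennard-Jones parameters are textbook-style values and the collision integral is hard-coded near unity, so this is only a classical sanity check, not the quantum-corrected low-temperature computation of the paper.

      import math

      M, sigma = 28.013, 3.667       # g/mol; LJ diameter in Angstrom
      T, omega_22 = 300.0, 1.0       # K; reduced collision integral (approx.)

      mu = 2.6693e-6 * math.sqrt(M * T) / (sigma**2 * omega_22)   # Pa*s
      print(mu)                      # ~1.8e-5 Pa*s, near the measured value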

  1. Computational Exploration of a Protein Receptor Binding Space with Student Proposed Peptide Ligands

    ERIC Educational Resources Information Center

    King, Matthew D.; Phillips, Paul; Turner, Matthew W.; Katz, Michael; Lew, Sarah; Bradburn, Sarah; Andersen, Tim; McDougal, Owen M.

    2016-01-01

    Computational molecular docking is a fast and effective "in silico" method for the analysis of binding between a protein receptor model and a ligand. The visualization and manipulation of protein to ligand binding in three-dimensional space represents a powerful tool in the biochemistry curriculum to enhance student learning. The…

  2. Computational methods in the exploration of the classical and statistical mechanics of celestial scale strings: Rotating Space Elevators

    NASA Astrophysics Data System (ADS)

    Knudsen, Steven; Golubovic, Leonardo

    2015-04-01

    With the advent of ultra-strong materials, the Space Elevator has changed from science fiction to real science. We discuss computational and theoretical methods we developed to explore the classical and statistical mechanics of Rotating Space Elevators (RSE). An RSE is a loopy string reaching deep into outer space. The floppy RSE loop executes a motion which is nearly a superposition of two rotations: a geosynchronous rotation around the Earth, and a faster rotation of the string itself about a line perpendicular to the Earth at its equator. Strikingly, objects sliding along the RSE loop spontaneously oscillate between two turning points, one of which is close to the Earth (the starting point) whereas the other is deep in outer space. The RSE concept thus solves a major problem in space elevator science: how to supply energy to the climbers moving along space elevator strings. The exploration of the dynamics of a floppy string interacting with objects sliding along it has required the development of the novel finite element algorithms described in this presentation. We thank Prof. Duncan Lorimer of WVU for kindly providing us access to his computational facility.

  3. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse of dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
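
    To make the LMCMC ingredient concrete, here is a minimal unadjusted Langevin sampler on a 2-D Gaussian stand-in for the (KPCA-reduced) posterior; a Metropolis correction and a real adjoint-model gradient would replace the pieces marked below.

      import numpy as np

      rng = np.random.default_rng(6)
      Sinv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))  # precision

      def grad_log_p(x):             # stand-in for an adjoint-model gradient
          return -Sinv @ x

      eps, x, samples = 0.1, np.zeros(2), []
      for _ in range(5000):          # Langevin step, no M-H correction
          x = x + 0.5 * eps**2 * grad_log_p(x) + eps * rng.standard_normal(2)
          samples.append(x.copy())
      print(np.cov(np.array(samples).T))    # approaches target covariance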

  4. A first-order k-space model for elastic wave propagation in heterogeneous media.

    PubMed

    Firouzi, K; Cox, B T; Treeby, B E; Saffari, N

    2012-09-01

    A pseudospectral model of linear elastic wave propagation is described based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure that the solution is exact for homogeneous wave propagation for time steps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model stems from the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k-space, and the larger time steps made possible by the k-space adjustments.
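
    The k-space adjustment mentioned above amounts to scaling the spectral derivative by a sinc of the time step, which is what makes the homogeneous update exact. A 1-D toy version, with arbitrary c0 and dt:

      import numpy as np

      n, dx, c0, dt = 256, 1e-3, 1500.0, 2e-7
      k = 2 * np.pi * np.fft.fftfreq(n, dx)        # angular wavenumbers
      kappa = np.sinc(c0 * k * dt / (2 * np.pi))   # = sin(c0*k*dt/2)/(c0*k*dt/2)

      x = np.arange(n) * dx
      u = np.exp(-((x - x.mean()) ** 2) / (2 * (5 * dx) ** 2))  # smooth pulse
      dudx = np.real(np.fft.ifft(1j * k * kappa * np.fft.fft(u)))
      print(np.abs(dudx).max())      # k-space-corrected spectral gradient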

  5. SU-F-J-158: Respiratory Motion Resolved, Self-Gated 4D-MRI Using Rotating Cartesian K-Space Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, F; Zhou, Z; Yang, Y

    Purpose: Dynamic MRI has been used to quantify respiratory motion of abdominal organs in radiation treatment planning. Many existing 4D-MRI methods based on 2D acquisitions suffer from limited slice resolution and additional stitching artifacts when evaluated in 3D [1]. To address these issues, we developed a 4D-MRI (3D dynamic) technique with true 3D k-space encoding and respiratory motion self-gating. Methods: The 3D k-space was acquired using a Rotating Cartesian K-space (ROCK) pattern, where the Cartesian grid was reordered in a quasi-spiral fashion with each spiral arm rotated by a golden angle [2]. Each quasi-spiral arm started with the k-space centerline, which was used as the self-gating [3] signal for respiratory motion estimation. The acquired k-space data were then binned into 8 respiratory phases, with the golden angle ensuring near-uniform k-space sampling in each phase. Finally, dynamic 3D images were reconstructed using the ESPIRiT technique [4]. 4D-MRI was performed on 6 healthy volunteers, using the following parameters (bSSFP, Fat-Sat, TE/TR=2ms/4ms, matrix size=500×350×120, resolution=1×1×1.2mm, TA=5min, 8 respiratory phases). Supplemental 2D real-time images were acquired in 9 different planes. Dynamic locations of the diaphragm dome and left kidney were measured from both 4D and 2D images. The same protocol was also performed on an MRI-compatible motion phantom where the motion was programmed with different amplitudes (10–30mm) and frequencies (3–10/min). Results: High-resolution 4D-MRI images were obtained successfully in 5 minutes. Quantitative motion measurements from 4D-MRI agree with those from 2D CINE (<5% error). The 4D images are free of the stitching artifacts, and their near-isotropic resolution facilitates 3D visualization and segmentation of abdominal organs such as the liver, kidney and pancreas. Conclusion: Our preliminary studies demonstrated a novel ROCK 4D-MRI technique with true 3D k-space encoding and respiratory motion self-gating. The technique leads to high-resolution and artifact-free 4D images for improved abdominal organ motion studies. K.S. acknowledges funding support from NIH R01CA188300.

  6. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the Earth and a point in the sun-earth-moon system. It presents methods for generating fuel-optimal two-impulse trajectories, which may originate at the Earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraints. Davidon's variance method is used both in the accelerated gradient projection method and in the outer loop of the 3-impulse trajectory optimization problem.

  7. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically the subset that can only express bounded until properties, or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present prototype implementations. We empirically show the practical applicability of our method by considering different case studies, including a simple infinite-state model and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
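
    The two-phase scheme is easy to mimic on a toy discrete-time Markov chain: double the bound k until the bounded-until estimate stabilizes (phase 1), then report the estimate at that k0 (phase 2). The chain, the stability threshold, and the sample counts below are all illustrative choices, not those of the paper.

      import random

      P = {0: [(0.6, 0), (0.3, 1), (0.1, 2)],   # state: [(prob, next), ...]
           1: [(0.5, 1), (0.5, 2)],
           2: [(1.0, 2)]}                       # state 2 = goal, absorbing

      def reach_within(k, runs=20000):
          hits = 0
          for _ in range(runs):
              s = 0
              for _ in range(k):
                  if s == 2:
                      break
                  r, acc = random.random(), 0.0
                  for p, t in P[s]:
                      acc += p
                      if r < acc:
                          s = t
                          break
              hits += (s == 2)
          return hits / runs

      prev, k = 0.0, 4
      while True:                               # phase 1: find a stable k0
          cur = reach_within(k)
          if abs(cur - prev) < 1e-3:
              break
          prev, k = cur, 2 * k
      print(k, cur)                             # phase 2 estimate at bound k0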

  8. Computational fluid dynamics investigation of turbulence models for non-newtonian fluid flow in anaerobic digesters.

    PubMed

    Wu, Binxin

    2010-12-01

    In this paper, 12 turbulence models for single-phase non-Newtonian fluid flow in a pipe are evaluated by comparing the frictional pressure drops obtained from computational fluid dynamics (CFD) with those from three friction factor correlations. The turbulence models studied are (1) three high-Reynolds-number k-ε models, (2) six low-Reynolds-number k-ε models, (3) two k-ω models, and (4) the Reynolds stress model. The simulation results indicate that the Chang-Hsieh-Chen version of the low-Reynolds-number k-ε model performs better than the other models in predicting the frictional pressure drops, while the standard k-ω model has acceptable accuracy and a low computing cost. In the model applications, CFD simulation of mixing in a full-scale anaerobic digester with pumped circulation is performed to propose an improvement in the effective mixing standards recommended by the U.S. EPA, based on the effect of rheology on the flow fields. Characterization of the velocity gradient is conducted to quantify the growth or breakage of an assumed floc size. Placement of two discharge nozzles in the digester is analyzed to show that spacing two nozzles 180° apart, with each one discharging at an angle of 45° off the wall, is the most efficient. Moreover, the similarity rules of geometry and mixing energy are checked for scaling up the digester.

  9. Comparison of liquid rocket engine base region heat flux computations using three turbulence models

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Griffith, Dwaine O., II; Prendergast, Maurice J.; Seaford, C. M.

    1993-01-01

    The flow in the base region of launch vehicles is characterized by flow separation, flow reversals, and reattachment. Computation of the convective heat flux in the base region and on the nozzle external surface of the Space Shuttle Main Engine and the Space Transportation Main Engine (STME) is an important part of defining base region thermal environments. Several turbulence models were incorporated in a CFD code and validated for flow and heat transfer computations in the separated and reattaching regions associated with subsonic and supersonic flows over backward-facing steps. Heat flux computations in the base region of a single STME engine and a single S1C engine were performed using three different wall functions as well as a renormalization-group-based k-epsilon model. With the very limited data available, the computed values are seen to be of the right order of magnitude. Based on the validation comparisons, it is concluded that all the turbulence models studied predicted the reattachment location and the velocity profiles at various axial stations downstream of the step very well.

  10. Introduction to the Space Physics Analysis Network (SPAN)

    NASA Technical Reports Server (NTRS)

    Green, J. L. (Editor); Peters, D. J. (Editor)

    1985-01-01

    The Space Physics Analysis Network, or SPAN, is emerging as a viable method for solving an immediate communication problem for the space scientist. SPAN provides low-rate communication capability with co-investigators and colleagues, and access to space science data bases and computational facilities. SPAN utilizes up-to-date hardware and software for computer-to-computer communications, allowing binary file transfer and remote log-on capability to over 25 nationwide space science computer systems. SPAN is not discipline- or mission-dependent, with participation from scientists in such fields as magnetospheric, ionospheric, planetary, and solar physics. Basic information on the network and its use is provided. It is anticipated that SPAN will grow rapidly over the next few years, not only in the number of network nodes but also in capability, as scientists become more proficient in the use of telescience and place greater demands on the network.

  11. Space station thermal control surfaces. [space radiators

    NASA Technical Reports Server (NTRS)

    Maag, C. R.; Millard, J. M.; Jeffery, J. A.; Scott, R. R.

    1979-01-01

    Mission planning documents were used to analyze the radiator design and thermal control surface requirements for both the space station and the 25-kW power module, to analyze the missions, and to determine the thermal control technology needed to satisfy both sets of requirements. Parameters such as thermal control coating degradation, vehicle attitude, self-eclipsing, variation in the solar constant, albedo, and Earth emission are considered. Four computer programs were developed which provide a preliminary design and evaluation tool for active radiator systems in LEO and GEO. Two programs were developed as general programs for space station analysis. Both types of programs find the radiator-flow solution and evaluate external heat loads in the same way. Fortran listings are included.

  12. Zero-fringe demodulation method based on location-dependent birefringence dispersion in polarized low-coherence interferometry.

    PubMed

    Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Qin, Zunqi; Zou, Shengliang

    2014-04-01

    We present a high-precision, fast demodulation method for a polarized low-coherence interferometer with location-dependent birefringence dispersion. Based on the characteristics of location-dependent birefringence dispersion and five-step phase-shifting technology, the method accurately retrieves the peak position of the zero fringe at the central wavelength, which avoids fringe-order ambiguity. The method processes data only in the spatial domain, greatly reducing the computational load. We successfully demonstrated the effectiveness of the proposed method in an optical fiber Fabry-Perot barometric pressure sensing experiment. A measurement precision of 0.091 kPa was achieved over a pressure range of 160 kPa, and the computation time was reduced by a factor of 10 compared with the traditional phase-based method, which requires a Fourier transform operation.
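
    As a pocket illustration of five-step phase retrieval, the standard Hariharan-type formula below recovers the phase from five intensity samples shifted by pi/2; the single synthetic pixel and fringe model are invented, and the paper's method adds the location-dependent dispersion handling on top of this kind of step.

      import math

      phi_true, a, b = 0.7, 1.0, 0.5       # toy phase, background, visibility
      I1, I2, I3, I4, I5 = (a + b * math.cos(phi_true + i * math.pi / 2)
                            for i in range(-2, 3))

      phi = math.atan2(2 * (I2 - I4), 2 * I3 - I1 - I5)
      print(phi)                           # recovers phi_true = 0.7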

  13. KSC-99pp1228

    NASA Image and Video Library

    1999-10-06

    Children at Cambridge Elementary School, Cocoa, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Cambridge is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. Behind the children is Jim Thurston, a school volunteer and retired employee of USBI, who shared in the project. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  14. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    NASA Astrophysics Data System (ADS)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside the solid body becomes a fluid point as the body moves. The addition of a mass source/sink together with momentum forcing, proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150], reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid as the body moves. The magnitude of the velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the time step size.

  15. Analysis and Down Select of Flow Passages for Thermal Hydraulic Testing of a SNAP Derived Reactor

    NASA Technical Reports Server (NTRS)

    Godfroy, T. J.; Sadasivan, P.; Masterson, S.

    2007-01-01

    As part of the Vision for Space Exploration, man will return to the moon. Safe and productive time on the lunar surface will require adequate power resources. A nuclear reactor provides the most options for delivering the needed power while giving mission planners all landing-site possibilities, including permanently dark craters. Designed for 100 kWt and providing approximately 25 kWe, such a power plant would be very effective in delivering dependable, site-nonspecific power to crews or robotic missions on the lunar surface. An affordable reference reactor that meets this requirement, based upon the successful SNAP program of the 1960s and early 1970s, has been designed by Los Alamos National Laboratory. Considering current funding, environmental, and schedule limitations, this lunar surface power reactor will be tested using non-nuclear simulators to provide the heat that fission reactions would generate. Currently, an approximately 25 kWe surface power SNAP-derivative reactor is in the early stages of design and testing, with collaboration between Los Alamos National Laboratory, Idaho National Laboratory, Glenn Research Center, Marshall Space Flight Center, and Sandia National Laboratory, to ensure that the new design is affordable and can be tested using the non-nuclear methods that have proven so effective in the past. This paper discusses the study and down-selection of a flow passage concept for an approximately 25 kWe lunar surface power reactor. Several different flow passage designs were evaluated using computational fluid dynamics to determine pressure drop, together with a structural assessment of thermal and stress conditions in the passage walls. The reactor design basis conditions are discussed, followed by the passage problem setup and results for each concept. A recommendation for a passage design is made, with the rationale for its selection.

  16. Allosteric activation via kinetic control: Potassium accelerates a conformational change in IMP dehydrogenase

    PubMed Central

    Riera, Thomas V.; Zheng, Lianqing; Josephine, Helen R.; Min, Donghong; Yang, Wei; Hedstrom, Lizbeth

    2011-01-01

    Allosteric activators are generally believed to shift the equilibrium distribution of enzyme conformations to favor a catalytically productive structure; the kinetics of conformational exchange is seldom addressed. Several observations suggested that the usual allosteric mechanism might not apply to the activation of IMP dehydrogenase (IMPDH) by monovalent cations. Therefore we investigated the mechanism of K+ activation in IMPDH by delineating the kinetic mechanism in the absence of monovalent cations. Surprisingly, the K+-dependence of kcat derives from the rate of flap closure, which increases by ≥65-fold in the presence of K+. We performed both alchemical free energy simulations and potential of mean force calculations using the orthogonal space random walk strategy to computationally analyze how K+ accelerates this conformational change. The simulations recapitulate the preference of IMPDH for K+, validating the computational models. When K+ is replaced with a dummy ion, the residues of the K+ binding site relax into ordered secondary structure, creating a barrier to conformational exchange. K+ mobilizes these residues by providing alternate interactions for the main chain carbonyls. Potential of mean force calculations indicate that K+ changes the shape of the energy well, shrinking the reaction coordinate by shifting the closed conformation toward the open state. This work suggests that allosteric regulation can be under kinetic as well as thermodynamic control. PMID:21870820

  17. White blood cell segmentation by color-space-based k-means clustering.

    PubMed

    Zhang, Congcong; Xiao, Xiaoyan; Li, Xiaomei; Chen, Ying-Jie; Zhen, Wu; Chang, Jun; Zheng, Chengyun; Liu, Zhi

    2014-09-01

    White blood cell (WBC) segmentation, which is important for cytometry, is a challenging issue because of the morphological diversity of WBCs and the complex and uncertain backgrounds of blood smear images. This paper proposes a novel method for the nucleus and cytoplasm segmentation of WBCs for cytometry. A color adjustment step was also introduced before segmentation. Color space decomposition and k-means clustering were combined for segmentation. A database of 300 microscopic blood smear images was used to evaluate the performance of our method. The proposed segmentation method achieves 95.7% and 91.3% overall accuracy for nucleus segmentation and cytoplasm segmentation, respectively. Experimental results demonstrate that the proposed method can segment WBCs effectively with high accuracy.
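
    A compact sketch of the clustering core (plain Lloyd iterations on per-pixel color features of a random stand-in image); the paper's contribution lies in the color adjustment and the choice of color-space decomposition, which are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.random((64, 64, 3)).reshape(-1, 3)   # per-pixel color features

      k = 3                             # nucleus / cytoplasm / background
      centers = X[rng.choice(len(X), k, replace=False)]
      for _ in range(20):
          d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
          labels = d2.argmin(1)
          centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centers[j] for j in range(k)])
      print(np.bincount(labels, minlength=k))      # cluster sizes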

  18. Information Flow Between Resting-State Networks.

    PubMed

    Diez, Ibai; Erramuzpe, Asier; Escudero, Iñaki; Mateos, Beatriz; Cabrera, Alberto; Marinazzo, Daniele; Sanz-Arigita, Ernesto J; Stramaglia, Sebastiano; Cortes Diaz, Jesus M

    2015-11-01

    The resting brain dynamics self-organize into a finite number of correlated patterns known as resting-state networks (RSNs). It is well known that techniques such as independent component analysis can separate the brain activity at rest into such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose a novel method to compute the information flow (IF) between different RSNs from resting-state magnetic resonance imaging. After blind deconvolution of the hemodynamic response function from all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce dimensionality in each RSN and then computes the IF (estimated here in terms of transfer entropy) between the different RSNs, systematically increasing k (the number of principal components used in the calculation). When k=1, this method is equivalent to computing IF using the average of all voxel activities in each RSN. For k≥1, our method calculates the k-multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension dependent, increasing from k=1 (i.e., the average voxel activity) up to a maximum occurring at k=5, and finally decaying to zero for k≥10. This suggests that a small number of components (close to five) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for generic data, can be used for group comparison in health or disease. To illustrate this, we calculated the inter-RSN IF in a data set of Alzheimer's disease (AD) and found that the most significant differences between AD and controls occurred for k=2, with AD showing increased IF with respect to controls. The spatial localization of the k=2 component within RSNs allows the characterization of IF differences between AD and controls.

  19. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    The blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames," which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generating robot-manipulator trajectories in real time.

  20. Needle position estimation from sub-sampled k-space data for MRI-guided interventions

    NASA Astrophysics Data System (ADS)

    Schmitt, Sebastian; Choli, Morwan; Overhoff, Heinrich M.

    2015-03-01

    MRI-guided interventions have gained much interest. They profit from intervention-synchronous data acquisition and image visualization. Due to long data acquisition durations, however, ergonomic limitations may occur. For a trueFISP MRI data acquisition sequence, a time-saving sub-sampling strategy has been developed that is adapted to the detection of an amagnetic needle. A symmetrical, contrast-rich susceptibility artifact of the needle, i.e. an approximately rectangular gray-scale profile, is assumed. The 1-D Fourier transform of a rectangular function is a sinc function. Its periodicity is exploited by sampling only along a few orthogonal trajectories in k-space. Because a needle moves during the intervention, its tip region resembles a rectangle in a time-difference image reconstructed from such sub-sampled k-spaces acquired at different time stamps. In different phantom experiments, a needle was pushed forward along a reference trajectory, which was determined from the needle holder's geometric parameters. In addition, the trajectory of the needle tip was estimated by the method described above. Only ca. 4 to 5% of the entire k-space data was used for needle tip estimation. The misalignment of needle orientation and needle tip position, i.e. the differences between reference and estimated values, is small: less than 2 mm even in the worst case. The results show that the method is applicable under nearly real conditions. Next steps address the validation of the method on clinical data.

  1. Groupwise registration of cardiac perfusion MRI sequences using normalized mutual information in high dimension

    NASA Astrophysics Data System (ADS)

    Hamrouni, Sameh; Rougon, Nicolas; Prêteux, Françoise

    2011-03-01

    In perfusion MRI (p-MRI) exams, short-axis (SA) image sequences are captured at multiple slice levels along the long-axis of the heart during the transit of a vascular contrast agent (Gd-DTPA) through the cardiac chambers and muscle. Compensating cardio-thoracic motions is a requirement for enabling computer-aided quantitative assessment of myocardial ischaemia from contrast-enhanced p-MRI sequences. The classical paradigm consists of registering each sequence frame on a reference image using some intensity-based matching criterion. In this paper, we introduce a novel unsupervised method for the spatio-temporal groupwise registration of cardiac p-MRI exams based on normalized mutual information (NMI) between high-dimensional feature distributions. Here, local contrast enhancement curves are used as a dense set of spatio-temporal features, and statistically matched through variational optimization to a target feature distribution derived from a registered reference template. The hard issue of probability density estimation in high-dimensional state spaces is bypassed by using consistent geometric entropy estimators, allowing NMI to be computed directly from feature samples. Specifically, a computationally efficient kth-nearest neighbor (kNN) estimation framework is retained, leading to closed-form expressions for the gradient flow of NMI over finite- and infinite-dimensional motion spaces. This approach is applied to the groupwise alignment of cardiac p-MRI exams using a free-form Deformation (FFD) model for cardio-thoracic motions. Experiments on simulated and natural datasets suggest its accuracy and robustness for registering p-MRI exams comprising more than 30 frames.
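
    The geometric entropy estimation that makes this tractable can be sketched with a Kozachenko-Leonenko-style kNN estimator, checked here against the exact entropy of a 2-D standard Gaussian; the NMI computation then combines such entropy estimates of the marginal and joint feature samples.

      import numpy as np
      from scipy.special import digamma, gammaln

      def knn_entropy(X, k=4):
          n, d = X.shape
          D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))  # all pairs
          eps = 2.0 * np.sort(D, axis=1)[:, k]     # 2 x k-th NN distance
          log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit ball
          return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))

      X = np.random.default_rng(4).standard_normal((1000, 2))
      print(knn_entropy(X), np.log(2 * np.pi * np.e))  # estimate vs exact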

  2. Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee; Vasta, Veer N.; White, Jeffery

    2015-01-01

    In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-ω turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good comparisons with data were achieved.

  3. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
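
    For concreteness, the classical technique (method 1) amounts to the following (Python/NumPy sketch; the statistic and the permutation null below are illustrative placeholders, not the density-based empirical likelihood ratio computed by vxdbel):

      import numpy as np

      def mc_p_value(t_obs, simulate_stat, n_sim=9999, rng=None):
          """Monte Carlo p-value for a statistic whose large values reject
          the null: p = (1 + #{T_b >= T_obs}) / (B + 1)."""
          rng = rng if rng is not None else np.random.default_rng()
          sims = np.array([simulate_stat(rng) for _ in range(n_sim)])
          return (1 + np.sum(sims >= t_obs)) / (n_sim + 1)

      # Toy two-sample comparison using the absolute mean difference,
      # with the null distribution simulated by permutation.
      rng = np.random.default_rng(1)
      x, y = rng.normal(0.0, 1.0, 40), rng.normal(0.5, 1.0, 40)
      pooled = np.concatenate([x, y])

      def null_stat(r):
          p = r.permutation(pooled)
          return abs(p[:40].mean() - p[40:].mean())

      print(mc_p_value(abs(x.mean() - y.mean()), null_stat))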

  4. The Autoregressive Method: A Method of Approximating and Estimating Positive Functions

    DTIC Science & Technology

    1976-08-01

    in drawing the curves, thanks to computer graphics. A few people have very imaginatively proposed and developed new ways of visualizing the data... it turns out that Σ_{k=-∞}^{∞} |φ_k| < ∞ is a sufficient condition for all our operations to be valid... We will

  5. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    NASA Astrophysics Data System (ADS)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, simulation methods for earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law on the fault plane with the boundary integral method based on Green's functions in an elastic half-space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, the stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost of obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), designed for supercomputers, to solve the problem within a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response functions as in the previous approach. In the stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results for a normative three-dimensional problem, where a circular velocity-weakening area is set in a square fault plane. Results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. Acknowledgment: The results were obtained using the K computer at RIKEN (Proposal number hp160221).
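
    A minimal sketch of the fault constitutive relation referred to above, in Python; the aging form of the state evolution law and all parameter values are illustrative assumptions, not those of the study.

      import numpy as np

      mu0, a, b = 0.6, 0.010, 0.015  # reference friction, direct and evolution effects
      v0, d_c = 1e-6, 0.01           # reference slip rate (m/s), critical slip distance (m)

      def friction(v, theta):
          # Rate- and state-dependent friction coefficient.
          return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c)

      # Evolve the state variable through a step in slip rate (aging law).
      dt, theta, v = 1.0, d_c / v0, v0
      for step in range(200000):
          if step == 100000:
              v = 10 * v0                        # impose a velocity step
          theta += dt * (1.0 - v * theta / d_c)  # d(theta)/dt = 1 - v*theta/d_c
      print("steady-state friction after the step:", friction(v, theta))

    With a - b < 0, as assumed here, steady-state friction decreases with slip rate; this is the velocity-weakening behaviour assigned to the circular patch in the abstract.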

  6. Fluctuating ideal-gas lattice Boltzmann method with fluctuation dissipation theorem for nonvanishing velocities.

    PubMed

    Kaehler, G; Wagner, A J

    2013-06-01

    Current implementations of fluctuating ideal-gas descriptions with lattice Boltzmann methods are based on a fluctuation dissipation theorem which, while greatly simplifying the implementation, strictly holds only for zero mean velocity and small fluctuations. We show how to derive the fluctuation dissipation theorem for all k, which was done only for k=0 in previous derivations. The consistent derivation requires, in principle, locally velocity-dependent multirelaxation time transforms. Such an implementation is computationally prohibitively expensive, but with a small computational trick it is feasible to reproduce the correct fluctuation dissipation theorem without overhead in computation time. It is then shown that the previous standard implementations perform poorly for nonvanishing mean velocity, as indicated by violations of Galilean invariance of measured structure factors. Results obtained with the method introduced here show a significant reduction of the Galilean invariance violations.

  7. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, B.A.

    1999-07-27

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity. 12 figs.
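
    One standard way to realize this construction (similarity matrix, eigendecomposition, coordinates) is classical multidimensional scaling; the sketch below (Python/NumPy) illustrates the idea under that assumption and is not the patent's exact procedure.

      import numpy as np

      def spectral_coordinates(s, dim=2):
          """Coordinates for items from a symmetric pairwise similarity
          matrix s, so that distance reflects relatedness."""
          s = 0.5 * (s + s.T)                    # enforce symmetry
          n = s.shape[0]
          j = np.eye(n) - np.ones((n, n)) / n    # centering matrix
          b = j @ s @ j
          w, v = np.linalg.eigh(b)               # eigenvalues in ascending order
          idx = np.argsort(w)[::-1][:dim]        # take the leading eigenpairs
          return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

      # Toy citation-similarity matrix for four documents: two related pairs.
      s = np.array([[1.0, 0.9, 0.1, 0.0],
                    [0.9, 1.0, 0.2, 0.1],
                    [0.1, 0.2, 1.0, 0.8],
                    [0.0, 0.1, 0.8, 1.0]])
      print(spectral_coordinates(s))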

  8. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, Bruce A.

    1999-01-01

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity.

  9. Hodge numbers for all CICY quotients

    NASA Astrophysics Data System (ADS)

    Constantin, Andrei; Gray, James; Lukas, Andre

    2017-01-01

    We present a general method for computing Hodge numbers for Calabi-Yau manifolds realised as discrete quotients of complete intersections in products of projective spaces. The method relies on the computation of equivariant cohomologies and is illustrated for several explicit examples. In this way, we compute the Hodge numbers for all discrete quotients obtained in Braun's classification [1].

  10. Pen-based computers: Computers without keys

    NASA Technical Reports Server (NTRS)

    Conklin, Cheryl L.

    1994-01-01

    The National Space Transportation System (NSTS) comprises many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging due to the many separate processing locations where data is generated. This presents a significant problem when the timely collection, transfer, collation, and storage of data is required. This paper describes how new technology, referred to as pen-based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.

  11. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing, using waveform modeling techniques and high-performance computing facilities, has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We can now use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 sec accuracy for a realistic 3D Earth model, with a performance of 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms improve as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation, to obtain seismic structure at basin scale, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  12. The role of atomic lines in radiation heating of the experimental space vehicle Fire-II

    NASA Astrophysics Data System (ADS)

    Surzhikov, S. T.

    2015-10-01

    The results of calculating the convective and radiative heating of the Fire-II experimental space vehicle, allowing for the atomic lines of atoms and ions, using the NERAT-ASTEROID computer platform are presented. This computer platform solves the complete set of equations of radiation gas dynamics for a viscous, heat-conducting, physically and chemically nonequilibrium gas, as well as radiation transfer. The spectral optical properties of high-temperature gases are calculated using ab initio quasi-classical and quantum-mechanical methods. The transfer of selective thermal radiation is computed with a line-by-line method on specially generated computational grids over the radiation wavelengths, which makes it possible to economize noticeably on computational resources.

  13. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    NASA Astrophysics Data System (ADS)

    1995-10-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  14. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  15. NLO cross sections in 4 dimensions without DREG

    NASA Astrophysics Data System (ADS)

    Hernández-Pinto, R. J.; Driencourt-Mangin, F.; Rodrigo, G.; Sborlini, G. F. R.

    2016-10-01

    In this review, we present a new method for computing physical cross sections at NLO accuracy in QCD without using standard Dimensional Regularisation. The algorithm is based on the Loop-Tree Duality theorem, which allows us to obtain loop integrals as a sum of phase-space integrals; in this way, transforming loop integrals into phase-space integrals, we propose a method to merge virtual and real contributions in order to find observables at NLO in d = 4 space-time dimensions. In addition, the strategy described is used for computing the γ* → qq̅(g) process. A more detailed discussion of this topic can be found in Ref. [1].

  16. Controlled Ecological Life Support System: Regenerative Life Support Systems in Space

    NASA Technical Reports Server (NTRS)

    Macelroy, Robert D.; Smernoff, David T.

    1987-01-01

    A wide range of topics related to the extended support of humans in space are covered. Overviews of research conducted in Japan, Europe, and the U.S. are presented. The methods and technologies required to recycle materials, especially respiratory gases, within a closed system are examined. Also presented are issues related to plant and algal productivity, efficiency, and processing methods. Computer simulation of closed systems, discussions of radiation effects on systems stability, and modeling of a computer bioregenerative system are also covered.

  17. En Route Spacing System and Method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations, by establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  18. En route spacing system and method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations, by establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  19. Computer aided flexible envelope designs

    NASA Technical Reports Server (NTRS)

    Resch, R. D.

    1975-01-01

    Computer-aided design methods are presented for the design and construction of strong, lightweight structures which require complex and precise geometric definition. The first, flexible structures, is a unique system for modeling folded plate structures and space frames. It is possible to continuously vary the geometry of a space frame to produce large, clear spans with curvature. The second method deals with developable surfaces, where both folding and bending are explored within the constraints of available building materials, and where minimal distortion results in maximum design capability. Alternative inexpensive fabrication techniques are being developed to achieve computer-defined enclosures which are extremely lightweight and mathematically highly precise.

  20. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
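
    The spectral-clipping step can be sketched as follows (Python/NumPy; the pulse parameters and the 1% magnitude threshold are illustrative assumptions): the temporal surface velocity is transformed to the frequency domain, and only components above the threshold are retained, each of which would then be handled as a separable term by the fast nearfield method.

      import numpy as np

      fs, f0 = 100e6, 5e6                     # sample rate and center frequency (Hz)
      t = np.arange(0, 2e-6, 1 / fs)
      v = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)  # pulsed surface velocity

      spectrum = np.fft.rfft(v)
      keep = np.abs(spectrum) > 0.01 * np.abs(spectrum).max()
      print(f"terms kept: {keep.sum()} of {keep.size}")

      # The clipped signal closely matches the original pulse.
      v_rec = np.fft.irfft(np.where(keep, spectrum, 0.0), n=t.size)
      print("max clipping error:", np.abs(v - v_rec).max())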

  1. Computational Aspects of Heat Transfer in Structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M. (Compiler)

    1982-01-01

    Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.

  2. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of the proposed method (4) was significantly higher than that achieved by k-t SENSE (2.29). The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved to achieve both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality of the proposed method were improved compared to the standard k-t SENSE method.

  3. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.
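
    The PCA ingredient can be sketched as follows (Python/NumPy; the profiles are synthetic low-rank surrogates for layer optical properties, and the 99.9% variance target is an illustrative choice): spectral points are projected onto a few principal components so that expensive radiative transfer solves are needed only for a handful of representative profiles.

      import numpy as np

      rng = np.random.default_rng(5)
      n_spectral, n_layers = 2000, 40
      # Synthetic optical-property profiles dominated by three modes.
      modes = rng.standard_normal((3, n_layers))
      coeffs = rng.standard_normal((n_spectral, 3))
      profiles = coeffs @ modes + 0.01 * rng.standard_normal((n_spectral, n_layers))

      mean = profiles.mean(axis=0)
      u, s, vt = np.linalg.svd(profiles - mean, full_matrices=False)
      var = s**2 / np.sum(s**2)
      n_pc = int(np.searchsorted(np.cumsum(var), 0.999)) + 1
      print(f"{n_pc} principal components capture 99.9% of the variance")

      # Each spectral point is now described by n_pc scores rather than
      # n_layers values; RT is solved only for the representative profiles.
      scores = u[:, :n_pc] * s[:n_pc]
      print("per-point representation:", scores.shape)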

  4. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes

    PubMed Central

    2017-01-01

    To optimize the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual socket workstation and a speedup between 3.4 and 3.8 for adding a NVIDIA K80 to a dual socket workstation. PMID:28582389

  5. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes.

    PubMed

    Einkemmer, Lukas

    2017-01-01

    To optimize the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual socket workstation and a speedup between 3.4 and 3.8 for adding a NVIDIA K80 to a dual socket workstation.

  6. Elastic-wave-mode separation in TTI media with inverse-distance weighted interpolation involving position shading

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu

    2017-10-01

    Elastic-wave reverse-time migration of inhomogeneous anisotropic media has become a focus of current research. To ensure the accuracy of the migration, it is necessary to separate the wavefield into P-wave and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational complexity, wave-mode separation in the mixed domain can be realized on the basis of a set of reference models in the wave-number domain. But conventional interpolation methods and reference-model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and uses a random-points scheme for reference-model selection. This method adds to the conventional IDW algorithm a spatial weight coefficient K that reflects the orientation of the reference points, so that the interpolation takes into account the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method can separate the wave modes more accurately using fewer reference models, and has good practical value.
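
    A minimal sketch of inverse-distance weighting with an added positional coefficient (Python/NumPy): the exact form of the coefficient K is not given in the abstract, so a simple azimuth-dependent factor is assumed purely for illustration.

      import numpy as np

      def idw_directional(pts, vals, q, power=2.0, k_dir=None):
          d = np.linalg.norm(pts - q, axis=1)
          if np.any(d < 1e-12):              # query coincides with a sample
              return vals[np.argmin(d)]
          w = 1.0 / d**power                 # conventional IDW weights
          if k_dir is not None:
              w = w * k_dir                  # position-shading coefficients K
          return np.sum(w * vals) / np.sum(w)

      pts = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -2.0]])
      vals = np.array([1.0, 2.0, 2.0, 3.0])
      q = np.zeros(2)
      # Assumed directional factor: de-emphasize points clustered along one axis.
      az = np.arctan2(pts[:, 1] - q[1], pts[:, 0] - q[0])
      k_dir = 1.0 / (1.0 + np.abs(np.sin(az)))  # hypothetical form, demonstration only
      print(idw_directional(pts, vals, q), idw_directional(pts, vals, q, k_dir=k_dir))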

  7. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  8. On the validity of the arithmetic-geometric mean method to locate the optimal solution in a supply chain system

    NASA Astrophysics Data System (ADS)

    Chung, Kun-Jen

    2012-08-01

    Cardenas-Barron [Cardenas-Barron, L.E. (2010), 'A Simple Method to Compute Economic Order Quantities: Some Observations', Applied Mathematical Modelling, 34, 1684-1688] indicates that there are several functions for which the arithmetic-geometric mean method (AGM) does not give the minimum. This article presents another situation revealing that the AGM inequality may be invalid for locating the optimal solution in Teng, Chen, and Goyal [Teng, J.T., Chen, J., and Goyal S.K. (2009), 'A Comprehensive Note on: An Inventory Model under Two Levels of Trade Credit and Limited Storage Space Derived without Derivatives', Applied Mathematical Modelling, 33, 4388-4396], Teng and Goyal [Teng, J.T., and Goyal S.K. (2009), 'Comment on 'Optimal Inventory Replenishment Policy for the EPQ Model under Trade Credit Derived without Derivatives', International Journal of Systems Science, 40, 1095-1098] and Hsieh, Chang, Weng, and Dye [Hsieh, T.P., Chang, H.J., Weng, M.W., and Dye, C.Y. (2008), 'A Simple Approach to an Integrated Single-vendor Single-buyer Inventory System with Shortage', Production Planning and Control, 19, 601-604]. The main purpose of this article is therefore to adopt the calculus approach, not only to overcome the shortcomings of the arithmetic-geometric mean method of Teng et al. (2009), Teng and Goyal (2009) and Hsieh et al. (2008), but also to develop complete solution procedures for them.

  9. An automated method for tracking clouds in planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Luz, D.; Berry, D. L.; Roos-Serote, M.

    2008-05-01

    We present an automated method for cloud tracking which can be applied to planetary images. The method is based on a digital correlator which compares two or more consecutive images and identifies patterns by maximizing correlations between image blocks. This approach bypasses the problem of feature detection. Four variations of the algorithm are tested on real cloud images of Jupiter's white ovals from the Galileo mission, previously analyzed in Vasavada et al. [Vasavada, A.R., Ingersoll, A.P., Banfield, D., Bell, M., Gierasch, P.J., Belton, M.J.S., Orton, G.S., Klaasen, K.P., Dejong, E., Breneman, H.H., Jones, T.J., Kaufman, J.M., Magee, K.P., Senske, D.A. 1998. Galileo imaging of Jupiter's atmosphere: the great red spot, equatorial region, and white ovals. Icarus, 135, 265, doi:10.1006/icar.1998.5984]. Direct correlation, using the sum of squared differences between image radiances as a distance estimator (baseline case), yields displacement vectors very similar to this previous analysis. Combining this distance estimator with the method of order ranks results in a technique which is more robust in the presence of outliers and noise and of better quality. Finally, we introduce a distance metric which, combined with order ranks, provides results of similar quality to the baseline case and is faster. The new approach can be applied to data from a number of space-based imaging instruments with a non-negligible gain in computing time.
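
    The baseline correlator can be sketched as follows (Python/NumPy; block and search-window sizes are illustrative): for a block of the first image, the displacement minimizing the sum of squared radiance differences (SSD) within a search window of the second image is selected. The rank-order and alternative-metric variants described above replace only the distance computation.

      import numpy as np

      def match_block(f1, f2, top, left, bsize=16, search=8):
          """Return the (dy, dx) displacement of an image block by SSD matching."""
          block = f1[top:top + bsize, left:left + bsize]
          best, best_dv = np.inf, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = top + dy, left + dx
                  if y < 0 or x < 0 or y + bsize > f2.shape[0] or x + bsize > f2.shape[1]:
                      continue
                  ssd = np.sum((block - f2[y:y + bsize, x:x + bsize]) ** 2)
                  if ssd < best:
                      best, best_dv = ssd, (dy, dx)
          return best_dv

      rng = np.random.default_rng(2)
      f1 = rng.random((64, 64))
      f2 = np.roll(f1, (3, -2), axis=(0, 1))  # cloud field advected by (3, -2)
      print(match_block(f1, f2, 24, 24))      # recovers the displacement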

  10. Soft computing methods in design of superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1995-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  11. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  12. An empirical method for computing leeside centerline heating on the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Helms, V. T., III

    1981-01-01

    An empirical method is presented for computing top centerline heating on the Space Shuttle Orbiter at simulated reentry conditions. It is shown that the Shuttle's top centerline can be thought of as being under the influence of a swept cylinder flow field. The effective geometry of the flow field, as well as top centerline heating, are directly related to oil-flow patterns on the upper surface of the fuselage. An empirical turbulent swept cylinder heating method was developed based on these considerations. The method takes into account the effects of the vortex-dominated leeside flow field without actually having to compute the detailed properties of such a complex flow. The heating method closely predicts experimental heat-transfer values on the top centerline of a Shuttle model at Mach numbers of 6 and 10 over a wide range in Reynolds number and angle of attack.

  13. Probabilistic Structural Analysis Theory Development

    NASA Technical Reports Server (NTRS)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. The goal of the approximate methods effort is to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer-intensive than the finite element approach.

  14. Modeling of space environment impact on nanostructured materials. General principles

    NASA Astrophysics Data System (ADS)

    Voronina, Ekaterina; Novikov, Lev

    2016-07-01

    In accordance with the resolution of the ISO TC20/SC14 WG4/WG6 joint meeting, a Technical Specification (TS), 'Modeling of space environment impact on nanostructured materials. General principles', which describes computer simulation methods for the impact of the space environment on nanostructured materials, is being prepared. Nanomaterials surpass traditional materials for space applications in many respects due to the unique properties associated with the nanoscale size of their constituents. This superiority in mechanical, thermal, electrical and optical properties will evidently inspire a wide range of applications in the next generation of spacecraft intended for long-term (~15-20 years) operation in near-Earth orbits and for automatic and manned interplanetary missions. Currently, ISO activity on developing standards concerning different issues of nanomaterials manufacturing and applications is considerable. Most such standards are related to the production and characterization of nanostructures; however, there are no ISO documents concerning the behavior of nanomaterials under different environmental conditions, including the space environment. The given TS deals with the peculiarities of the space environment impact on nanostructured materials (i.e. materials with structured objects whose size in at least one dimension lies within 1-100 nm). The basic purpose of the document is a general description of the methodology of applying computer simulation methods, relating to different space and time scales, to modeling processes occurring in nanostructured materials under the impact of the space environment. The document emphasizes the necessity of applying a multiscale simulation approach and presents recommendations for choosing the most appropriate methods (or groups of methods) for computer modeling of the various processes that can occur in nanostructured materials under the influence of different space environment components. In addition, the TS includes a description of the possible approximations and limitations of the proposed simulation methods, as well as of widely used software codes. This TS may be used as a basis for developing a new standard devoted to nanomaterials applications for spacecraft.

  15. Performance Benchmark for a Prismatic Flow Solver

    DTIC Science & Technology

    2007-03-26

    Gauss-Seidel (LU-SGS) implicit method is used for time integration to reduce the computational time. A one-equation turbulence model by Goldberg and...numerical flux computations. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit method [1] is used for time integration to reduce the...Sharov, D. and Nakahashi, K., “Reordering of Hybrid Unstructured Grids for Lower-Upper Symmetric Gauss-Seidel Computations,” AIAA Journal, Vol. 36

  16. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, Mafalda; Seery, David; Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.

  17. Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in (k, q)-Space.

    PubMed

    Sun, Jiaqi; Sakhaee, Elham; Entezari, Alireza; Vemuri, Baba C

    2015-01-01

    Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed-ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal S(q) in the q-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint (k, q)-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization. In this paper, we present a novel approach that uses partial Fourier sensing in the 6D space of (k, q) for the reconstruction of P(x, r). The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of P(x, r). Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial (k, q)-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectral imaging (DSI), which is commonly used as the benchmark for comparisons in the reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.

  18. Regularization method for large eddy simulations of shock-turbulence interactions

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2018-05-01

    The rapid change in scales over a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k^(-5/3), and is computed locally in physical space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post-shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.
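
    A one-dimensional illustration of the target behaviour (Python/NumPy; the cutoff wavenumber and the surrogate field are illustrative, and the paper's operator acts locally in physical space rather than globally in spectral space as done here): amplitudes above a cutoff are rescaled so that the energy spectrum decays as k^(-5/3).

      import numpy as np

      n = 1024
      rng = np.random.default_rng(3)
      u = rng.standard_normal(n)              # surrogate velocity signal

      uh = np.fft.rfft(u)
      k = np.arange(uh.size)
      kc = 64                                 # highest "trusted" wavenumber
      e = np.abs(uh) ** 2

      # Anchor a k**(-5/3) tail at the cutoff and rescale the modes above it.
      target = e[kc] * (k[kc:] / kc) ** (-5.0 / 3.0)
      uh[kc:] *= np.sqrt(target / np.maximum(e[kc:], 1e-30))
      u_reg = np.fft.irfft(uh, n=n)

      # Energy ratio across one octave beyond kc now equals 2**(5/3).
      e_reg = np.abs(np.fft.rfft(u_reg)) ** 2
      print(e_reg[kc] / e_reg[2 * kc], 2.0 ** (5.0 / 3.0))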

  19. Scale Space for Camera Invariant Features.

    PubMed

    Puig, Luis; Guerrero, José J; Daniilidis, Kostas

    2014-09-01

    In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.

  20. A spectral approach for discrete dislocation dynamics simulations of nanoindentation

    NASA Astrophysics Data System (ADS)

    Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei

    2018-07-01

    We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two-step approach. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both stages, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator, and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space under different types of indenter. An example of a dislocation dynamics nanoindentation simulation with a complex initial microstructure is presented.
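
    The first stage's half-space solve has a well-known spectral form, sketched below (Python/NumPy; the load, grid, and elastic constants are illustrative, and the FFT implies a periodic approximation): for a normal pressure p(x, y) on an isotropic elastic half-space, the surface normal displacement satisfies u_z(k) = 2(1 - nu^2) p(k) / (E |k|) in Fourier space.

      import numpy as np

      E, nu = 200e9, 0.3                      # illustrative elastic constants (Pa)
      n, L = 256, 1e-6                        # grid points and domain length (m)

      x = np.linspace(0, L, n, endpoint=False)
      xx, yy = np.meshgrid(x, x)
      # Uniform pressure over a circular patch, a flat-punch-like load.
      p = np.where((xx - L/2)**2 + (yy - L/2)**2 < (L/8)**2, 1e8, 0.0)

      kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
      kmag = np.hypot(*np.meshgrid(kx, kx))
      kmag[0, 0] = np.inf                     # suppress the zero (rigid-body) mode

      uz = np.fft.ifft2(2 * (1 - nu**2) / (E * kmag) * np.fft.fft2(p)).real
      print("peak surface displacement (m):", uz.max())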

  1. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch-and-bound search and a series of lower-bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  2. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu

    2016-06-21

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra, where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.

  3. The Development of New Atmospheric Models for K and M Dwarf Stars with Exoplanets

    NASA Astrophysics Data System (ADS)

    Linsky, Jeffrey L.

    2018-01-01

    The ultraviolet and X-ray emissions of host stars play critical roles in the survival and chemical composition of the atmospheres of their exoplanets. The need to measure and understand this radiative output, in particular for K and M dwarfs, is the main rationale for computing a new generation of stellar models that includes magnetically heated chromospheres and coronae in addition to their photospheres. We describe our method for computing semi-empirical models that includes solutions of the statistical equilibrium equations for 52 atoms and ions and of the non-LTE radiative transfer equations for all important spectral lines. The code is an offspring of the Solar Radiation Physical Modelling system (SRPM) developed by Fontenla et al. (2007-2015) to compute one-dimensional models in hydrostatic equilibrium to fit high-resolution stellar X-ray to IR spectra. Also included are 20 diatomic molecules and their more than 2 million spectral lines. Our proof-of-concept model is for the M1.5 V star GJ 832 (Fontenla et al., ApJ 830, 154 (2016)). We will fit the line fluxes and profiles of X-ray lines and continua observed by Chandra and XMM-Newton, UV lines observed by the COS and STIS instruments on HST (N V, C IV, Si IV, Si III, Mg II, C II, and O I), optical lines (including Hα, Ca II, Na I), and continua. These models will allow us to compute extreme-UV spectra, which are unobservable but required to predict the hydrodynamic mass-loss rate from exoplanet atmospheres, and to predict panchromatic spectra of new exoplanet host stars discovered after the end of the HST mission. This work is supported by grant HST-GO-15038 from the Space Telescope Science Institute to the Univ. of Colorado.

  4. Preface: phys. stat. sol. (b) 243/5

    NASA Astrophysics Data System (ADS)

    Artacho, Emilio; Beck, Thomas L.; Hernández, Eduardo

    Between 20 and 24 June 2005 the Centre Européen de Calcul Atomique et Moléculaire - or CECAM, as it is more widely known - hosted a workshop entitled State-of-the-art, developments and perspectives of real-space electronic structure methods in condensed-matter and chemical physics, organized with the support of CECAM itself and the Ψk network. The workshop was attended by some forty participants coming from fifteen countries, and about thirty presentations were given. The workshop provided a lively forum for the discussion of recent methodological developments in electronic structure calculations, ranging from linear-scaling methods, mesh techniques, time-dependent density functional methods, and a long etcetera, which had been our ultimate objective when undertaking its organization. The first-principles simulation of solids, liquids and complex matter in general has jumped in the last few years from the relatively confined niches in condensed matter and materials physics and in quantum chemistry, to cover most of the sciences, including nano, bio, geo, environmental sciences and engineering. This effect has been facilitated by the ability of simulation techniques to deal with an ever larger degree of complexity. Although this is partially to be attributed to the steady increase in computer power, the main factor behind this change has been the coming of age of the main theoretical framework for most of the simulations performed today, together with an extremely active development of the basic algorithms for its computer implementation. It is this latter aspect that is the topic of this special issue of physica status solidi. There is a relentless effort in the scientific community seeking to achieve not only higher accuracy, but also more efficient, cost-effective and if possible simpler computational methods in electronic structure calculations [1]. From the early 1990s onwards there has been a keen interest in the computational condensed matter and chemical physics communities in methods that had the potential to overcome the unfavourable scaling of the computational cost with the system size, implicit in the momentum-space formalism familiar to solid-state physicists and the quantum chemistry approaches more common in chemical physics and physical chemistry. This interest was sparked by the famous paper in which Weitao Yang [2] introduced the Divide and Conquer method. Soon afterwards several practical schemes aiming to achieve linear-scaling calculations, by exploiting what Walter Kohn called most aptly the near-sightedness of quantum mechanics [3], were proposed and explored (for a review on linear-scaling methods, see [4]). This search for novel, more efficient and better-scaling algorithms proved to be fruitful in more than one way. Not only was it the start of several packages which are well-known today (such as Siesta, Conquest, etc.), but it also led to new ways of representing electronic states and orbitals, such as grids [5, 6], wavelets [7], finite elements, etc. Also, the drive to exploit near-sightedness attracted computational solid state physicists to the type of atomic-like basis functions traditionally used in the quantum chemistry community.
At the same time computational chemists learnt about plane waves and density functional theory, and thus a fruitful dialogue was started between two communities that hitherto had not had much contact. Another interesting development that has begun to take place over the last decade or so is the convergence of several branches of science, notably physics, chemistry and biology, at the nanoscale. Experimentalists in all these different fields are now performing highly sophisticated measurements on systems of nanometer size, the kind of systems that we theoreticians can address with our computational methods, and this convergence of experiment and theory at this scale has also been very fruitful, particularly in the fields of electronic transport and STM image simulation. It is now quite common to find papers at the cutting edge of nanoscience and nanotechnology co-authored by experimentalists and theorists, and it can only be expected that this fruitful interplay between theory and experiment will increase in the future. It was considerations such as these that moved us to propose to CECAM and Ψk the celebration of a workshop devoted to the discussion of recent developments in electronic structure techniques, a proposal that was enthusiastically received, not just by CECAM and Ψk, but also by our invited speakers and participants. Interest in novel electronic structure methods is now as high as ever, and we are therefore very happy that physica status solidi has given us the opportunity to devote a special issue to the topics covered in the workshop. This special issue of physica status solidi gathers invited contributions from several attendants of the workshop, contributions that are representative of the range of topics and issues discussed then, including progress in linear-scaling methods, electronic transport, simulation of STM images, time-dependent DFT methods, etc. It remains for us to thank all the contributors to this special issue for their efforts, CECAM and Ψk for funding the workshop, physica status solidi for agreeing to devote this special issue to the workshop, and last but not least Emmanuelle and Emilie, the CECAM secretaries, for their invaluable practical help in putting this workshop together.

  5. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
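
    The reconstruction step described above is a generic iterative thresholding scheme. As a minimal illustration (not the authors' code), the sketch below alternates a soft-thresholding step with a k-space data-consistency step; the sampling mask, threshold level and iteration count are assumed inputs.

        import numpy as np

        def ista_recon(y, mask, lam=0.01, n_iter=50):
            """Minimal iterative soft-thresholding reconstruction.

            y    : measured k-space (zeros where not sampled)
            mask : boolean sampling mask (True where lines were acquired)
            lam  : soft-threshold level -- an illustrative sparsity weight
            """
            x = np.fft.ifft2(y)                      # zero-filled starting image
            for _ in range(n_iter):
                k = np.fft.fft2(x)
                k[mask] = y[mask]                    # keep the acquired samples
                x = np.fft.ifft2(k)
                mag = np.abs(x)                      # soft-threshold the magnitude
                x = np.maximum(mag - lam, 0.0) * np.exp(1j * np.angle(x))
            return x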

  6. X-band preamplifier filter

    NASA Technical Reports Server (NTRS)

    Manshadi, F.

    1986-01-01

    A low-loss bandstop filter designed and developed for the Deep Space Network's 34-meter high-efficiency antennas is described. The filter is used for protection of the X-band traveling wave masers from the 20-kW transmitter signal. A combination of empirical and theoretical techniques was employed as well as computer simulation to verify the design before fabrication.

  7. Various views of STS-95 Senator John Glenn during training

    NASA Image and Video Library

    1998-06-18

    S98-08742 (May 1998) --- Two mission specialists assigned to the STS-95 flight rehearse some of their duties for the scheduled late October launch of the Space Shuttle Discovery. Stephen K. Robinson inputs data on the laptop computer while Scott E. Parazynski looks on. The photo was taken by Joe McNally, National Geographic, for NASA.

  8. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  9. Space-time least-squares finite element method for convection-reaction system with transformed variables

    PubMed Central

    Nam, Jaewook

    2011-01-01

    We present a method to solve a convection-reaction system based on a least-squares finite element method (LSFEM). For steady-state computations, issues related to recirculation flow are stated and demonstrated with a simple example. The method can compute concentration profiles in open flow even when the generation term is small, which is the case when estimating hemolysis in blood. Time-dependent flows are computed with the space-time LSFEM discretization. We observe that the computed hemoglobin concentration can become negative in certain regions of the flow, which is a physically unacceptable result. To prevent this, we propose a quadratic transformation of variables. The transformed governing equation can be solved in a straightforward way by LSFEM with no sign of unphysical behavior. The effect of localized high shear on blood damage is shown in a circular Couette-flow-with-blade configuration, and a physiological condition is tested in an arterial graft flow. PMID:21709752
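
    One plausible reading of the quadratic transformation (the abstract does not spell out its form) is to write the concentration as the square of a new unknown, which makes nonnegativity automatic:

        \frac{\partial c}{\partial t} + \mathbf{v}\cdot\nabla c = R(c),
        \qquad c = u^{2} \;\Rightarrow\;
        2u\left(\frac{\partial u}{\partial t} + \mathbf{v}\cdot\nabla u\right) = R(u^{2}),

    so any real solution u yields c = u^2 >= 0.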

  10. Heating Augmentation for Short Hypersonic Protuberances

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Wood, William A.

    2008-01-01

    Computational aeroheating analyses of the Space Shuttle Orbiter plug repair models are validated against data collected in the Calspan University of Buffalo Research Center (CUBRC) 48 inch shock tunnel. The comparison shows that the average difference between computed heat transfer results and the data is about 9.5%. Using CFD and Wind Tunnel (WT) data, an empirical correlation for estimating heating augmentation on short hypersonic protuberances (k/delta < 0.33) is proposed. This proposed correlation is compared with several computed flight simulation cases and good agreement is achieved. Accordingly, this correlation is proposed for further investigation on other short hypersonic protuberances for estimating heating augmentation.

  11. Heating Augmentation for Short Hypersonic Protuberances

    NASA Technical Reports Server (NTRS)

    Mazaheri, Ali R.; Wood, William A.

    2008-01-01

    Computational aeroheating analyses of the Space Shuttle Orbiter plug repair models are validated against data collected in the Calspan University of Buffalo Research Center (CUBRC) 48 inch shock tunnel. The comparison shows that the average difference between computed heat transfer results and the data is about 9.5%. Using CFD and Wind Tunnel (WT) data, an empirical correlation for estimating heating augmentation on short hypersonic protuberances (k/delta less than 0.3) is proposed. This proposed correlation is compared with several computed flight simulation cases and good agreement is achieved. Accordingly, this correlation is proposed for further investigation on other short hypersonic protuberances for estimating heating augmentation.

  12. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    DOE PAGES

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; ...

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance, with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.
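
    For reference, protocols of this kind rest on the standard thermodynamic relation between the aqueous deprotonation free energy and the pKa (the paper's specific thermodynamic cycle and standard-state corrections are not reproduced here):

        pK_a = \frac{\Delta G^{*}_{aq}}{RT\ln 10},
        \qquad
        \Delta G^{*}_{aq} = G^{*}_{aq}(\mathrm{A}^{-}) + G^{*}_{aq}(\mathrm{H}^{+}) - G^{*}_{aq}(\mathrm{HA}).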

  13. The Diffusion of Computer-Based Technology in K-12 Schools: Teachers' Perspectives

    ERIC Educational Resources Information Center

    Colandrea, John Louis

    2012-01-01

    Because computer technology represents a major financial outlay for school districts and is an efficient method of preparing and delivering lessons, studying the process of teacher adoption of computer use is beneficial and adds to the current body of knowledge. Because the teacher is the ultimate user of computer technology for lesson preparation…

  14. Stabilization and discontinuity-capturing parameters for space-time flow computations with finite element and isogeometric discretizations

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto

    2018-04-01

    Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but are quite often also used for isogeometric discretization. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows, target isogeometric discretization, and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation. The key components of the derivation are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the proposed parameters result in good solution profiles.
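
    As one concrete instance of a direction-dependent element length of this general type (a widely used definition from the stabilized-methods literature, not necessarily the exact expression derived in this paper), for a unit direction vector s and element shape functions N_a one may take

        h(\mathbf{s}) = 2\left(\sum_{a=1}^{n_{en}} \left|\mathbf{s}\cdot\nabla N_a\right|\right)^{-1}.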

  15. [Optimal scan parameters for a method of k-space trajectory (radial scan method) in evaluation of carotid plaque characteristics].

    PubMed

    Nakamura, Manami; Makabe, Takeshi; Tezuka, Hideomi; Miura, Takahiro; Umemura, Takuma; Sugimori, Hiroyuki; Sakata, Motomichi

    2013-04-01

    The purpose of this study was to optimize scan parameters for evaluation of carotid plaque characteristics by k-space trajectory (radial scan method), using a custom-made carotid plaque phantom. The phantom was composed of simulated sternocleidomastoid muscle and four types of carotid plaque. The effect of chemical shift artifact was compared using T1 weighted images (T1WI) of the phantom obtained with and without fat suppression, and using two types of k-space trajectory (the radial scan method and the Cartesian method). The ratio of signal intensity of simulated sternocleidomastoid muscle to the signal intensity of hematoma, blood (including heparin), lard, and mayonnaise was compared among various repetition times (TR) using T1WI and T2 weighted imaging (T2WI). In terms of chemical shift artifacts, image quality was improved using fat suppression for both the radial scan and Cartesian methods. In terms of signal ratio, the highest values were obtained for the radial scan method with TR of 500 ms for T1WI, and TR of 3000 ms for T2WI. For evaluation of carotid plaque characteristics using the radial scan method, chemical shift artifacts were reduced with fat suppression. Signal ratio was improved by optimizing the TR settings for T1WI and T2WI. These results suggest the potential for using magnetic resonance imaging for detailed evaluation of carotid plaque.

  16. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    DTIC Science & Technology

    2015-09-13

    prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use... With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now... play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main

  17. On the computation and updating of the modified Cholesky decomposition of a covariance matrix

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Methods for obtaining and updating the modified Cholesky decomposition (MCD) for the particular case of a covariance matrix when one is given only the original data are described. These methods are: the standard method of forming the covariance matrix K and then solving for the MCD factors L and D (where K = LDL^T); a method based on Householder reflections; and lastly, a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior method despite the fact that it is the slowest one, since (1) the relative amount of time spent computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed and FORTRAN programs implementing these algorithms are listed.
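
    As a concrete illustration of the first ("standard") route, the sketch below forms K from data and factors it as K = LDL^T with unit-diagonal L; the use of numpy and the random data matrix are illustrative, not taken from the original FORTRAN programs.

        import numpy as np

        def mcd(K):
            """Modified Cholesky decomposition K = L D L^T, with L unit
            lower triangular and D diagonal (K symmetric positive definite)."""
            C = np.linalg.cholesky(K)    # K = C C^T
            d = np.diag(C)               # square roots of D's entries
            L = C / d                    # scale columns to a unit diagonal
            D = np.diag(d**2)
            return L, D

        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 4))    # illustrative data matrix
        K = np.cov(X, rowvar=False)          # standard route: form K first
        L, D = mcd(K)
        assert np.allclose(L @ D @ L.T, K)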

  18. Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.

    PubMed

    Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin

    2005-09-15

    We present here an algorithm for computing stable, well-defined localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by diagonalization of two matrices with dimension the size of the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that one can decompose any given atomic-orbital (AO) vector space as a minimal basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space and are almost as local (according to orbital variance), while our orbitals are orthonormal (rather than redundant and nonorthogonal). We expect this algorithm to find use in local-correlation methods.

  19. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.

    2011-07-01

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.

  20. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Moeckel, Marek; Wiedemann, Carsten; Flegel, Sven Kevin; Gelhaus, Johannes; Klinkrad, Heiner; Krag, Holger; Voersmann, Peter

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.

  1. The two-dimensional tunnel structures of K3Sb5O14 and K2Sb4O11

    NASA Technical Reports Server (NTRS)

    Hong, H. Y.-P.

    1974-01-01

    The structures of K3Sb5O14 and K2Sb4O11 have been solved by the single-crystal X-ray direct method and the heavy-atom method, respectively. The structure of K3Sb5O14 is orthorhombic, with space group Pbam and cell parameters a = 24.247 (4), b = 7.157 (2), c = 7.334 (2) A, Z = 4. The structure of K2Sb4O11 is monoclinic, with space group C2/m and cell parameters a = 19.473 (4), b = 7.542 (1), c = 7.198 (1) A, beta = 94.82 (2) deg, Z = 4. A full-matrix least-squares refinement gave R = 0.072 and R = 0.067, respectively. In both structures, oxygen atoms form an octahedron around each Sb atom and an irregular polyhedron around each K atom. By sharing corners and edges, the octahedra form a skeleton network having intersecting b-axis and c-axis tunnels. The K(+) ions, which have more than ten oxygen near neighbors, are located in these tunnels. Evidence for K(+)-ion transport within and between tunnels comes from ion exchange of the alkali ions in molten salts and anisotropic temperature factors that are anomalously large in the direction of the tunnels.

  2. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  3. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  4. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  5. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  6. Osmotic forces and gap junctions in spreading depression: a computational model

    NASA Technical Reports Server (NTRS)

    Shapiro, B. E.

    2001-01-01

    In a computational model of spreading depression (SD), ionic movement through a neuronal syncytium of cells connected by gap junctions is described electrodiffusively. Simulations predict that SD will not occur unless cells are allowed to expand in response to osmotic pressure gradients and K+ is allowed to move through gap junctions. SD waves of [K+]out approximately 25 to approximately 60 mM moving at approximately 2 to approximately 18 mm/min are predicted over the range of parametric values reported in gray matter, with extracellular space decreasing up to approximately 50%. Predicted waveform shape is qualitatively similar to laboratory reports. The delayed-rectifier, NMDA, BK, and Na+ currents are predicted to facilitate SD, while SK and A-type K+ currents and glial activity impede SD. These predictions are consonant with recent findings that gap junction poisons block SD and support the theories that cytosolic diffusion via gap junctions and osmotic forces are important mechanisms underlying SD.

  7. A Bright Future for Evolutionary Methods in Drug Design.

    PubMed

    Le, Tu C; Winkler, David A

    2015-08-01

    Most medicinal chemists understand that chemical space is extremely large, essentially infinite. Although high-throughput experimental methods allow exploration of drug-like space more rapidly, they are still insufficient to fully exploit the opportunities that such large chemical space offers. Evolutionary methods can synergistically blend automated synthesis and characterization methods with computational design to identify promising regions of chemical space more efficiently. We describe how evolutionary methods are implemented, and provide examples of published drug development research in which these methods have generated molecules with increased efficacy. We anticipate that evolutionary methods will play an important role in future drug discovery. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Systematic exploration of unsupervised methods for mapping behavior

    NASA Astrophysics Data System (ADS)

    Todd, Jeremy G.; Kain, Jamey S.; de Bivort, Benjamin L.

    2017-02-01

    To fully understand the mechanisms giving rise to behavior, we need to be able to precisely measure it. When coupled with large behavioral data sets, unsupervised clustering methods offer the potential of unbiased mapping of behavioral spaces. However, unsupervised techniques to map behavioral spaces are in their infancy, and there have been few systematic considerations of all the methodological options. We compared the performance of seven distinct mapping methods in clustering a wavelet-transformed data set consisting of the x- and y-positions of the six legs of individual flies. Legs were automatically tracked by small pieces of fluorescent dye, while the fly was tethered and walking on an air-suspended ball. We find that there is considerable variation in the performance of these mapping methods, and that better performance is attained when clustering is done in higher dimensional spaces (which are otherwise less preferable because they are hard to visualize). High dimensionality means that some algorithms, including the non-parametric watershed cluster assignment algorithm, cannot be used. We developed an alternative watershed algorithm which can be used in high-dimensional spaces when a probability density estimate can be computed directly. With these tools in hand, we examined the behavioral space of fly leg postural dynamics and locomotion. We find a striking division of behavior into modes involving the fore legs and modes involving the hind legs, with few direct transitions between them. By computing behavioral clusters using the data from all flies simultaneously, we show that this division appears to be common to all flies. We also identify individual-to-individual differences in behavior and behavioral transitions. Lastly, we suggest a computational pipeline that can achieve satisfactory levels of performance without the taxing computational demands of a systematic combinatorial approach.
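
    A grid-free stand-in for watershed-style cluster assignment, usable whenever the density can be evaluated directly at the sample points (the situation described above), is to link every point to its nearest neighbor of higher density and follow the links to a mode. The sketch below is our illustration of that idea, not the authors' algorithm; the Gaussian kernel bandwidth is an assumed parameter.

        import numpy as np
        from scipy.spatial.distance import cdist

        def density_modes(X, bandwidth=1.0):
            """Assign each sample to a density mode by repeatedly linking
            points to their nearest higher-density neighbor."""
            D = cdist(X, X)
            rho = np.exp(-(D / bandwidth) ** 2).sum(axis=1)  # KDE at samples
            parent = np.arange(len(X))
            for i in range(len(X)):
                higher = np.where(rho > rho[i])[0]
                if higher.size:                  # link to nearest denser point
                    parent[i] = higher[np.argmin(D[i, higher])]
            labels = parent.copy()               # chase links up to the modes
            for i in range(len(X)):
                while labels[i] != labels[labels[i]]:
                    labels[i] = labels[labels[i]]
            return labels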

  9. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, i.e., complex geometry, sinus-rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Lightning criteria relative to space shuttles: Currents and electric field intensity in Florida lightning

    NASA Technical Reports Server (NTRS)

    Uman, M. A.; Mclain, D. K.

    1972-01-01

    The measured electric field intensities of 161 lightning strokes in 39 flashes which occurred between 1 and 35 km from an observation point at Kennedy Space Center, Florida during June and July of 1971 have been analyzed to determine the lightning channel currents which produced the fields. In addition, typical channel currents are derived and from these typical electric fields at distances between 0.5 and 100 km are computed and presented. On the basis of the results recommendations are made for changes in the specification of lightning properties relative to space vehicle design as given in NASA TMX-64589 (Daniels, 1971). The small sample of lightning analyzed yielded several peak currents in the 100 kA range. Several current rise-times from zero to peak of 0.5 microsec or faster were found; and the fastest observed current rate-of-rise was near 200 kA/microsec. The various sources of error are discussed.

  11. Rapid sampling of stochastic displacements in Brownian dynamics simulations

    NASA Astrophysics Data System (ADS)

    Fiore, Andrew M.; Balboa Usabiaga, Florencio; Donev, Aleksandar; Swan, James W.

    2017-03-01

    We present a new method for sampling stochastic displacements in Brownian Dynamics (BD) simulations of colloidal scale particles. The method relies on a new formulation for Ewald summation of the Rotne-Prager-Yamakawa (RPY) tensor, which guarantees that the real-space and wave-space contributions to the tensor are independently symmetric and positive-definite for all possible particle configurations. Brownian displacements are drawn from a superposition of two independent samples: a wave-space (far-field or long-ranged) contribution, computed using techniques from fluctuating hydrodynamics and non-uniform fast Fourier transforms; and a real-space (near-field or short-ranged) correction, computed using a Krylov subspace method. The combined computational complexity of drawing these two independent samples scales linearly with the number of particles. The proposed method circumvents the super-linear scaling exhibited by all known iterative sampling methods applied directly to the RPY tensor, which results from the power-law growth of the condition number of the tensor with the number of particles. For geometrically dense microstructures (fractal dimension equal to three), the performance is independent of volume fraction, while for tenuous microstructures (fractal dimension less than three), such as gels and polymer solutions, the performance improves with decreasing volume fraction. This is in stark contrast with other related linear-scaling methods such as the force coupling method and the fluctuating immersed boundary method, for which performance degrades with decreasing volume fraction. Calculations for hard sphere dispersions and colloidal gels are illustrated and used to explore the role of microstructure on the performance of the algorithm. In practice, the logarithmic part of the predicted scaling is not observed and the algorithm scales linearly for up to 4 × 10^6 particles, obtaining speedups of over an order of magnitude over existing iterative methods, and making the cost of computing Brownian displacements comparable to the cost of computing deterministic displacements in BD simulations. A high-performance implementation employing non-uniform fast Fourier transforms implemented on graphics processing units and integrated with the software package HOOMD-blue is used for benchmarking.
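
    The superposition at the heart of the method can be illustrated in a few lines: if the mobility splits as M = M_wave + M_real with both parts symmetric positive definite, the sum of two independent Gaussian samples, one per part, has covariance M. The dense Cholesky factorizations below are purely conceptual stand-ins; the point of the paper is that each part admits a fast sampler (FFT-based for the wave-space part, Krylov-based for the real-space part).

        import numpy as np

        rng = np.random.default_rng(1)
        n = 30                                   # illustrative problem size

        A = rng.standard_normal((n, n))          # stand-in wave-space part
        M_wave = A @ A.T + n * np.eye(n)
        B = rng.standard_normal((n, n))          # stand-in real-space part
        M_real = B @ B.T + n * np.eye(n)

        # independent samples from each part; cov(u) = M_wave + M_real = M
        u = (np.linalg.cholesky(M_wave) @ rng.standard_normal(n)
             + np.linalg.cholesky(M_real) @ rng.standard_normal(n))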

  12. Application of galvanomagnetic measurements in temperature range 70-300 K to MBE GaAs layers characterization

    NASA Astrophysics Data System (ADS)

    Wolkenberg, Andrzej; Przeslawski, Tomasz

    1996-04-01

    Galvanomagnetic measurements were performed on square-shaped samples after Van der Pauw and on a Hall bar at low electric fields (approx. 1.5 V/cm) and magnetic induction (approx. 6 kG) in order to make a comparison between the theoretical and experimental results of the temperature dependence of mobility and resistivity from 70 K to 300 K. A method for calculating the drift mobility and the Hall mobility was developed in which the following scattering mechanisms are included: ionized impurities, polar optical phonons, acoustic phonons (deformation potential), acoustic phonons (piezoelectric potential), and dislocations. The method, implemented as a computer program, allows us to fit experimental values of the resistivity and the Hall mobility to those calculated. The fitting procedure makes it possible to characterize the quality of the n-type GaAs MBE layer, i.e. the net electron concentration, the total ionized impurity concentration, and the dislocation density according to the Read space-charge cylinder model. The calculations together with the measurements also allow us to obtain the compensation ratio of the layer. The influence of the epitaxial layer thickness on the accuracy of layer measurements in the case of the Van der Pauw square probe was investigated. It was found that in layers thinner than 3 micrometers the bulk properties are strongly influenced by both surfaces. The results of measurements of the same layer using the Van der Pauw and the Hall bar structures were compared. It was concluded that only the Hall bar structure could be used to obtain proper measurement results.

  13. Template-based procedures for neural network interpretation.

    PubMed

    Alexander, J. A.; Mozer, M. C.

    1999-04-01

    Although neural networks often achieve impressive learning and generalization performance, their internal workings are typically all but impossible to decipher. This characteristic of the networks, their opacity, is one of the disadvantages of connectionism compared to more traditional, rule-oriented approaches to artificial intelligence. Without a thorough understanding of the network behavior, confidence in a system's results is lowered, and the transfer of learned knowledge to other processing systems - including humans - is precluded. Methods that address the opacity problem by casting network weights in symbolic terms are commonly referred to as rule extraction techniques. This work describes a principled approach to symbolic rule extraction from standard multilayer feedforward networks based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to the actual weights of a unit. Depending on the requirements of the application domain, the approach can accommodate n-ary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k^2) complexity, or more general classes of recursive n-of-m expressions with O(k^(L+2)) complexity, where k is the number of inputs to a unit and L the recursion level of the expression class. Compared to other approaches in the literature, our method of rule extraction offers benefits in simplicity, computational performance, and overall flexibility. Simulation results on a variety of problems demonstrate the application of our procedures as well as the strengths and the weaknesses of our general approach.
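
    The flavor of template matching can be conveyed with a toy example: approximate a unit's incoming weights by the nearest "n-of-m"-style template, i.e., a vector whose selected entries share a single magnitude, then read the symbolic expression off the fit. This is our simplification, not the paper's parameterization.

        import numpy as np

        def fit_n_of_m(w, m):
            """Fit an m-input template with one shared magnitude to weights w:
            keep the m largest-|w| inputs, set each to sign(w_i) * t, where
            t is the least-squares optimal shared magnitude."""
            idx = np.argsort(-np.abs(w))[:m]         # m most influential inputs
            t = np.abs(w[idx]).mean()                # optimal shared magnitude
            template = np.zeros_like(w)
            template[idx] = np.sign(w[idx]) * t
            error = np.sum((w - template) ** 2)      # quality of the match
            return template, error

        w = np.array([2.1, -1.9, 0.1, 2.0, -0.2])
        template, err = fit_n_of_m(w, 3)             # a 3-of-5 style template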

  14. POCS-based reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE): a general algorithm for reducing motion-related artifacts

    PubMed Central

    Chu, Mei-Lan; Chang, Hing-Chiu; Chung, Hsiao-Wen; Truong, Trong-Kha; Bashir, Mustafa R.; Chen, Nan-kuei

    2014-01-01

    Purpose A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion weighted imaging (DWI). Theory Images with reduced artifacts are reconstructed with an iterative POCS procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. Methods The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved DWI data corresponding to different k-space trajectories and matrix condition numbers. Results Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. Conclusion POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods. PMID:25394325
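
    In outline, one iteration of a POCS loop with a coil-sensitivity constraint alternates two projections: combine the coil images into a single image using the sensitivity profiles, then re-impose the acquired k-space samples coil by coil. The Cartesian sketch below is a schematic of that loop, not the POCSMUSE implementation; kdata, mask and sens are assumed inputs.

        import numpy as np

        def pocs_sense(kdata, mask, sens, n_iter=30):
            """Schematic POCS reconstruction with a sensitivity constraint.

            kdata : (ncoil, ny, nx) measured k-space, zeros where unsampled
            mask  : (ny, nx) boolean sampling mask
            sens  : (ncoil, ny, nx) coil sensitivity profiles
            """
            imgs = np.fft.ifft2(kdata, axes=(-2, -1))
            for _ in range(n_iter):
                # projection 1: enforce one shared image across all coils
                x = (np.conj(sens) * imgs).sum(0) / ((np.abs(sens)**2).sum(0) + 1e-8)
                imgs = sens * x
                # projection 2: enforce consistency with acquired samples
                k = np.fft.fft2(imgs, axes=(-2, -1))
                k[:, mask] = kdata[:, mask]
                imgs = np.fft.ifft2(k, axes=(-2, -1))
            return x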

  15. Crew appliance computer program manual, volume 1

    NASA Technical Reports Server (NTRS)

    Russell, D. J.

    1975-01-01

    Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.

  16. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi-synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.

  17. PAB3D: Its History in the Use of Turbulence Models in the Simulation of Jet and Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Hunter, Craig A.; Deere, Karen A.; Massey, Steven J.; Elmiligui, Alaa

    2006-01-01

    This is a review paper for PAB3D's history in the implementation of turbulence models for simulating jet and nozzle flows. We describe different turbulence models used in the simulation of subsonic and supersonic jet and nozzle flows. The time-averaged simulations use modified linear or nonlinear two-equation models to account for supersonic flow as well as high temperature mixing. Two multiscale-type turbulence models are used for unsteady flow simulations. These models require modifications to the Reynolds Averaged Navier-Stokes (RANS) equations. The first scheme is a hybrid RANS/LES model utilizing the two-equation (k-epsilon) model with a RANS/LES transition function, dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes (PANS) formulation. All of these models are implemented in the three-dimensional Navier-Stokes code PAB3D. This paper discusses computational methods, code implementation, computed results for a wide range of nozzle configurations at various operating conditions, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions.

  18. Likelihood reconstruction method of real-space density and velocity power spectra from a redshift galaxy survey

    NASA Astrophysics Data System (ADS)

    Tang, Jiayu; Kayo, Issha; Takada, Masahiro

    2011-09-01

    We develop a maximum-likelihood-based method of reconstructing the band powers of the density and velocity power spectra at each wavenumber bin from the measured clustering features of galaxies in redshift space, including marginalization over uncertainties inherent in the small-scale, non-linear redshift distortion, the Fingers-of-God (FoG) effect. The reconstruction can be done assuming that the density and velocity power spectra enter the redshift-space power spectrum with different angular modulations μ^{2n} (n = 0, 1, 2) and that the model FoG effect is given as a multiplicative function in the redshift-space spectrum. By using N-body simulations and the halo catalogues, we test our method by comparing the reconstructed power spectra with the spectra directly measured from the simulations. For the μ^0 spectrum, or equivalently the density power spectrum Pδδ(k), our method recovers the amplitudes to an accuracy of a few per cent up to k ≃ 0.3 h Mpc^-1 for both dark matter and haloes. For the μ^2 power spectrum, which is equivalent to the density-velocity power spectrum Pδθ(k) in the linear regime, our method can recover, within the statistical errors, the input power spectrum for dark matter up to k ≃ 0.2 h Mpc^-1 at both redshifts z = 0 and 1, provided that an adequate FoG model is marginalized over. However, for the halo spectrum, which is least affected by the FoG effect, the reconstructed spectrum shows greater amplitudes than the spectrum Pδθ(k) inferred from the simulations over a range of wavenumbers 0.05 ≤ k ≤ 0.3 h Mpc^-1. We argue that the disagreement may be ascribed to a non-linearity effect that arises from the cross-bispectra of density and velocity perturbations. Using perturbation theory and assuming Einstein gravity as in the simulations, we derive the non-linear correction term to the redshift-space spectrum, and find that the leading-order correction term is proportional to μ^2 and increases the μ^2 power spectrum amplitudes more significantly at larger k, at lower redshifts and for more massive haloes. We find that adding the non-linearity correction term to the simulation Pδθ(k) can fairly well reproduce the reconstructed Pδθ(k) for haloes up to k ≃ 0.2 h Mpc^-1.
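
    Written out explicitly, the assumed structure of the redshift-space power spectrum is of the standard form (the paper's precise FoG parametrization may differ):

        P_s(k,\mu) = \left[P_{\delta\delta}(k) + 2\mu^{2}P_{\delta\theta}(k)
                     + \mu^{4}P_{\theta\theta}(k)\right] G_{\mathrm{FoG}}(k\mu\sigma_v),

    with the three band powers corresponding to the μ^0, μ^2 and μ^4 modulations.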

  19. Accelerating acquisition strategies for low-frequency conductivity imaging using MREIT

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Seo, Jin Keun; Chauhan, Munish; Indahlastari, Aprinda; Ashok Kumar, Neeta; Sadleir, Rosalind

    2018-02-01

    We sought to improve the efficiency of magnetic resonance electrical impedance tomography (MREIT) data acquisition so that fast conductivity changes or electric field variations could be monitored. Undersampling of k-space was used to decrease acquisition times in spin-echo-based sequences by a factor of two. Full MREIT data were reconstructed using continuity assumptions and preliminary scans gathered without current. We found that phase data were reconstructed faithfully from undersampled data. Conductivity reconstructions of phantom data were also possible. Therefore, undersampled k-space methods can potentially be used to accelerate MREIT acquisition. This method could be an advantage in imaging real-time conductivity changes with MREIT.

  20. Magnetic flux density reconstruction using interleaved partial Fourier acquisitions in MREIT.

    PubMed

    Park, Hee Myung; Nam, Hyun Soo; Kwon, Oh In

    2011-04-07

    Magnetic resonance electrical impedance tomography (MREIT) has been introduced as a non-invasive modality to visualize the internal conductivity and/or current density of an electrically conductive object by the injection of current. In order to measure a magnetic flux density signal in MREIT, the phase difference approach in an interleaved encoding scheme cancels the systematic artifacts accumulated in phase signals and also reduces the random noise effect. However, it is important to reduce scan duration while maintaining spatial resolution and sufficient contrast, in order to allow for practical in vivo implementation of MREIT. The purpose of this paper is to develop a coupled partial Fourier strategy in the interleaved sampling in order to reduce the total imaging time for an MREIT acquisition, whilst maintaining an SNR of the measured magnetic flux density comparable to what is achieved with complete k-space data. The proposed method uses two key steps: one is to update the magnetic flux density by updating the complex densities using the partially interleaved k-space data, and the other is to fill in the missing k-space data iteratively using the updated background field inhomogeneity and magnetic flux density data. Results from numerical simulations and animal experiments demonstrate that the proposed method considerably reduces the scanning time and provides resolution of the recovered B(z) comparable to what is obtained from complete k-space data.

  1. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    DOEpatents

    Schach Von Wittenau, Alexis E.

    2003-01-01

    A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
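
    The essence of a correlated-histogram source representation is to sample one variable from its marginal histogram and the next from the conditional histogram given the first bin. The numpy sketch below illustrates the idea with made-up energy/angle bins; it is not the patented implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        H = rng.integers(1, 100, size=(16, 32)).astype(float)  # toy joint counts

        def sample_correlated(H, n):
            """Draw (energy-bin, angle-bin) pairs: marginal, then conditional."""
            p_e = H.sum(axis=1) / H.sum()              # marginal over energy bins
            e = rng.choice(len(p_e), size=n, p=p_e)
            p_a = H / H.sum(axis=1, keepdims=True)     # conditional per energy bin
            a = np.array([rng.choice(H.shape[1], p=p_a[i]) for i in e])
            return e, a

        energies, angles = sample_correlated(H, 1000)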

  2. Space-time interface-tracking with topology change (ST-TC)

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Buscher, Austin; Asada, Shohei

    2014-10-01

    To address the computational challenges associated with contact between moving interfaces, such as those in cardiovascular fluid-structure interaction (FSI), parachute FSI, and flapping-wing aerodynamics, we introduce a space-time (ST) interface-tracking method that can deal with topology change (TC). In cardiovascular FSI, our primary target is heart valves. The method is a new version of the deforming-spatial-domain/stabilized space-time (DSD/SST) method, and we call it ST-TC. It includes a master-slave system that maintains the connectivity of the "parent" mesh when there is contact between the moving interfaces. It is an efficient, practical alternative to using unstructured ST meshes, but without giving up on the accurate representation of the interface or consistent representation of the interface motion. We explain the method with conceptual examples and present 2D test computations with models representative of the classes of problems we are targeting.

  3. Prediction of conformationally dependent atomic multipole moments in carbohydrates

    PubMed Central

    Cardamone, Salvatore

    2015-01-01

    The conformational flexibility of carbohydrates is challenging within the field of computational chemistry. This flexibility causes the electron density to change, which leads to fluctuating atomic multipole moments. Quantum Chemical Topology (QCT) allows for the partitioning of an “atom in a molecule,” thus localizing electron density to finite atomic domains, which permits the unambiguous evaluation of atomic multipole moments. By selecting an ensemble of physically realistic conformers of a chemical system, one evaluates the various multipole moments at defined points in configuration space. The subsequent implementation of the machine learning method kriging delivers the evaluation of an analytical function, which smoothly interpolates between these points. This allows for the prediction of atomic multipole moments at new points in conformational space, not trained for but within prediction range. In this work, we demonstrate that the carbohydrates erythrose and threose are amenable to the above methodology. We investigate how kriging models respond when the training ensemble incorporates multiple energy minima and their environment in conformational space. Additionally, we evaluate the gains in predictive capacity of our models as the size of the training ensemble increases. We believe this approach to be entirely novel within the field of carbohydrates. For a modest training set size of 600, more than 90% of the external test configurations have an error in the total (predicted) electrostatic energy (relative to ab initio) of maximum 1 kJ mol−1 for open chains and just over 90% an error of maximum 4 kJ mol−1 for rings. © 2015 Wiley Periodicals, Inc. PMID:26547500

  4. Prediction of conformationally dependent atomic multipole moments in carbohydrates.

    PubMed

    Cardamone, Salvatore; Popelier, Paul L A

    2015-12-15

    The conformational flexibility of carbohydrates is challenging within the field of computational chemistry. This flexibility causes the electron density to change, which leads to fluctuating atomic multipole moments. Quantum Chemical Topology (QCT) allows for the partitioning of an "atom in a molecule," thus localizing electron density to finite atomic domains, which permits the unambiguous evaluation of atomic multipole moments. By selecting an ensemble of physically realistic conformers of a chemical system, one evaluates the various multipole moments at defined points in configuration space. The subsequent implementation of the machine learning method kriging delivers the evaluation of an analytical function, which smoothly interpolates between these points. This allows for the prediction of atomic multipole moments at new points in conformational space, not trained for but within prediction range. In this work, we demonstrate that the carbohydrates erythrose and threose are amenable to the above methodology. We investigate how kriging models respond when the training ensemble incorporates multiple energy minima and their environment in conformational space. Additionally, we evaluate the gains in predictive capacity of our models as the size of the training ensemble increases. We believe this approach to be entirely novel within the field of carbohydrates. For a modest training set size of 600, more than 90% of the external test configurations have an error in the total (predicted) electrostatic energy (relative to ab initio) of maximum 1 kJ mol(-1) for open chains and just over 90% an error of maximum 4 kJ mol(-1) for rings. © 2015 Wiley Periodicals, Inc.

  5. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
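
    The self-consistent scheme with a multiplicative update factor is reminiscent of Wang-Landau sampling; a toy flat-histogram version on a discrete 1-D landscape is sketched below. The potential, flatness test, and stopping rule are illustrative assumptions, not the paper's algorithm.

    ```python
    # Toy Wang-Landau-style density-of-states estimate (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    n_states = 20
    E = np.array([(x - n_states // 2) ** 2 // 4 for x in range(n_states)])  # toy energies
    E_bins = np.unique(E)

    log_g = {e: 0.0 for e in E_bins}       # running estimate of ln g(E)
    hist = {e: 0 for e in E_bins}
    log_f = 1.0                            # multiplicative update factor, ln f

    x = 0
    while log_f > 1e-3:
        x_new = (x + rng.choice([-1, 1])) % n_states
        # accept with probability g(E_old)/g(E_new): biases sampling toward rare energies
        if np.log(rng.random()) < log_g[E[x]] - log_g[E[x_new]]:
            x = x_new
        log_g[E[x]] += log_f
        hist[E[x]] += 1
        if min(hist.values()) > 10 * len(E_bins):   # crude histogram-flatness check
            hist = {e: 0 for e in E_bins}
            log_f /= 2.0                            # refine the update factor
    print({int(e): round(v, 2) for e, v in log_g.items()})
    ```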

  6. Modeling a Wireless Network for International Space Station

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Yaprak, Ece; Lamouri, Saad

    2000-01-01

    This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computer and remote instruments with the high performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.

  7. SU-E-J-10: A Moving-Blocker-Based Strategy for Simultaneous Megavoltage and Kilovoltage Scatter Correction in Cone-Beam Computed Tomography Image Acquired During Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Lee, H; Wang, J

    2014-06-01

    Purpose: To evaluate a moving-blocker-based approach for estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to move back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades CBCT image quality by increasing CT number inaccuracy and decreasing image contrast, in addition to the shading artifacts caused by kV scatter. These artifacts were substantially reduced in the moving-blocker-corrected CBCT images of both the Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American Cancer Society (RSG-13-326-01-CCE).
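
    A 1-D toy version of the blocker idea: detector columns behind the lead strips receive (nearly) pure scatter, which can be interpolated across the open columns and subtracted. The strip geometry, fluence, and smooth scatter model below are assumptions for illustration only.

    ```python
    # 1-D toy of moving-blocker scatter estimation (geometry and signals assumed).
    import numpy as np

    n_det = 512
    u = np.arange(n_det)
    primary = np.exp(-((u - 256) / 120.0) ** 2)      # toy primary fluence
    scatter = 0.3 + 0.1 * np.sin(u / 80.0)           # slowly varying kV + MV scatter
    measured = primary + scatter

    strip = (u // 16) % 2 == 0                       # columns behind the lead strips
    measured[strip] = scatter[strip]                 # idealized: blocker stops all primary

    # interpolate the scatter sampled behind the strips into the open regions
    scatter_est = np.interp(u, u[strip], measured[strip])
    corrected = measured - scatter_est               # scatter-subtracted projection
    print(float(np.abs(corrected[~strip] - primary[~strip]).max()))  # small residual
    ```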

  8. Approximating the Generalized Voronoi Diagram of Closely Spaced Objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, John; Daniel, Eric; Pascucci, Valerio

    2015-06-22

    We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.

  9. Equivariant K3 invariants

    DOE PAGES

    Cheng, Miranda C. N.; Duncan, John F. R.; Harrison, Sarah M.; ...

    2017-01-01

    In this note, we describe a connection between the enumerative geometry of curves in K3 surfaces and the chiral ring of an auxiliary superconformal field theory. We consider the invariants calculated by Yau–Zaslow (capturing the Euler characters of the moduli spaces of D2-branes on curves of given genus), together with their refinements to carry additional quantum numbers by Katz–Klemm–Vafa (KKV), and Katz–Klemm–Pandharipande (KKP). We show that these invariants can be reproduced by studying the Ramond ground states of an auxiliary chiral superconformal field theory which has recently been observed to give rise to mock modular moonshine for a variety of sporadic simple groups that are subgroups of Conway’s group. We also study equivariant versions of these invariants. A K3 sigma model is specified by a choice of 4-plane in the K3 D-brane charge lattice. Symmetries of K3 sigma models are naturally identified with 4-plane preserving subgroups of the Conway group, according to the work of Gaberdiel–Hohenegger–Volpato, and one may consider corresponding equivariant refined K3 Gopakumar–Vafa invariants. The same symmetries naturally arise in the auxiliary CFT state space, affording a suggestive alternative view of the same computation. We comment on a lift of this story to the generating function of elliptic genera of symmetric products of K3 surfaces.

  10. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
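
    The state-space flavor of such a computation can be illustrated with a Lyapunov equation: for a linear system driven by white noise, the steady-state output variance follows algebraically, with no frequency-domain integration. The 2-state plant below is an assumed toy model, and this sketch uses plain output variance rather than the Sirlin, San Martin, and Lucke jitter weighting.

    ```python
    # State-space rms computation via a Lyapunov equation (toy plant assumed).
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # x' = A x + B w,  y = C x,  w ~ white noise with intensity W
    A = np.array([[0.0, 1.0],
                  [-4.0, -0.8]])          # lightly damped oscillator (assumed dynamics)
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    W = np.array([[0.05]])

    P = solve_continuous_lyapunov(A, -B @ W @ B.T)   # solves A P + P A^T = -B W B^T
    rms = float(np.sqrt(C @ P @ C.T))                # rms output, no numerical integration
    print(f"rms jitter ~ {rms:.4f}")
    ```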

  11. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
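
    For context, the quantity such path-based methods accelerate can be computed on a toy pedigree with the textbook kinship recursion (the inbreeding coefficient of a child is the kinship of its parents). This is the naive recursion, not the paper's compact path encoding; the pedigree below is made up.

    ```python
    # Kinship/inbreeding coefficients on a tiny hard-coded pedigree.
    from functools import lru_cache

    parents = {                  # child: (father, mother); founders map to None
        "A": None, "B": None, "C": None,
        "D": ("A", "B"), "E": ("A", "C"),
        "F": ("D", "E"),         # F's parents are half-siblings through A
    }

    def depth(x):
        p = parents[x]
        return 0 if p is None else 1 + max(depth(p[0]), depth(p[1]))

    @lru_cache(maxsize=None)
    def kinship(a, b):
        if a == b:
            p = parents[a]
            return 0.5 if p is None else 0.5 * (1.0 + kinship(*p))
        if depth(a) < depth(b):  # always recurse on the deeper individual,
            a, b = b, a          # which cannot be an ancestor of the other
        if parents[a] is None:
            return 0.0           # two distinct founders are unrelated
        f, m = parents[a]
        return 0.5 * (kinship(f, b) + kinship(m, b))

    print(kinship("D", "E"))         # 0.125: half-siblings
    print(kinship(*parents["F"]))    # inbreeding coefficient of F = 0.125
    ```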

  12. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
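
    A minimal numpy sketch of load-dependent Ritz vector generation: the first vector is the static response to the load pattern, and each subsequent vector is the static response to the inertia load of the previous one, orthogonalized in the mass inner product. The small random stiffness and mass matrices are stand-ins for a real structural model.

    ```python
    # Load-dependent Ritz vectors (toy SPD matrices; a real code would factorize K once).
    import numpy as np

    def load_dependent_ritz(K, M, f, n_vec):
        Kinv = np.linalg.inv(K)          # in practice: one factorization, many back-solves
        x = Kinv @ f
        x /= np.sqrt(x @ M @ x)          # M-normalize
        vecs = [x]
        for _ in range(n_vec - 1):
            x = Kinv @ (M @ vecs[-1])    # static response to the previous inertia load
            for v in vecs:               # Gram-Schmidt in the M inner product
                x -= (v @ M @ x) * v
            x /= np.sqrt(x @ M @ x)
            vecs.append(x)
        return np.column_stack(vecs)

    rng = np.random.default_rng(2)
    n = 50
    K = rng.normal(size=(n, n)); K = K @ K.T + n * np.eye(n)   # SPD "stiffness"
    M = np.diag(rng.uniform(1.0, 2.0, n))                      # lumped "mass"
    f = np.zeros(n); f[0] = 1.0                                # spatial load pattern
    V = load_dependent_ritz(K, M, f, 8)
    print(np.round(V.T @ M @ V, 6))      # ~ identity: vectors are M-orthonormal
    ```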

  13. Rapid and Robust Cross-Correlation-Based Seismic Phase Identification Using an Approximate Nearest Neighbor Method

    NASA Astrophysics Data System (ADS)

    Tibi, R.; Young, C. J.; Gonzales, A.; Ballard, S.; Encarnacao, A. V.

    2016-12-01

    The matched filtering technique involving the cross-correlation of a waveform of interest with archived signals from a template library has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive, and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this study, we introduce an Approximate Nearest Neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation without requiring a complex distributed computing system. Our method begins with a projection into a reduced dimensionality space based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors is accomplished by using randomized K-dimensional trees. We used the approach to search for matches to each of 2700 analyst-reviewed signal detections reported for May 2010 for the IMS station MKAR. The template library in this case consists of a dataset of more than 200,000 analyst-reviewed signal detections for the same station from 2002-2014 (excluding May 2010). Of these signal detections, 60% are teleseismic first P, and 15% regional phases (Pn, Pg, Sn, and Lg). The analyses, performed on a standard desktop computer, show that the proposed approach searches the large template libraries about 20 times faster than a standard full linear search, while achieving recall rates greater than 80%, with the recall rate increasing for higher correlation values. To decide whether to confirm a match, we use a hybrid method involving a cluster approach for queries with two or more matches, and the correlation score for single matches. Of the signal detections that passed our confirmation process, 52% were teleseismic first P, and 30% were regional phases.
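
    The reduced-dimension search can be sketched as follows: correlate every template with a small random subset of templates to get low-dimensional coordinates, then query a k-d tree in that space (scipy's cKDTree standing in for the randomized K-dimensional trees). Waveforms and sizes are synthetic assumptions.

    ```python
    # ANN template matching via random-correlation projection + k-d tree.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    n_templates, n_samp, n_proj = 5000, 200, 16

    lib = rng.normal(size=(n_templates, n_samp))
    lib /= np.linalg.norm(lib, axis=1, keepdims=True)   # unit norm -> dot = correlation

    basis = lib[rng.choice(n_templates, n_proj, replace=False)]  # random projection set
    tree = cKDTree(lib @ basis.T)                       # library in reduced space

    query = lib[42] + 0.1 * rng.normal(size=n_samp)     # noisy repeat of template 42
    query /= np.linalg.norm(query)
    _, idx = tree.query(query @ basis.T, k=5)           # approximate nearest neighbors
    best = idx[np.argmax(lib[idx] @ query)]             # confirm by full correlation
    print(best)                                         # typically recovers 42
    ```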

  14. Efficient proof of ownership for cloud storage systems

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems use deduplication to save disk space and bandwidth, but this technology has invited targeted security attacks: an attacker can deceive the server into granting ownership of a file merely by obtaining the hash value of the original file. To address these security problems, and the differing security requirements of files in a cloud storage system, an efficient and information-theoretically secure proof-of-ownership scheme supporting file rating is proposed. File rating is implemented with the K-means algorithm, and a random-seed technique combined with pre-computation yields a safe and efficient proof-of-ownership protocol. The resulting scheme is information-theoretically secure and achieves better performance in the most sensitive areas of client-side I/O and computation.

  15. The Structure of Glycine Dihydrate: Implications for the Crystallization of Glycine from Solution and Its Structure in Outer Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Wenqian; Zhu, Qiang; Hu, Chunhua Tony

    2017-01-18

    Glycine, the simplest amino acid, is also the most polymorphous. Herein, we report the structure determination of an unknown phase of glycine first reported by Pyne and Suryanarayanan in 2001. To date, the new phase has only been prepared at 208 K as nanocrystals within ice. Through computational crystal structure prediction and powder X-ray diffraction methods, we identified this elusive phase as glycine dihydrate (GDH), representing the first report of a hydrated glycine structure. The structure of GDH has important implications for the state of glycine in aqueous solution and the mechanisms of glycine crystallization. GDH may also be the form of glycine that comes to Earth from extraterrestrial sources.

  16. Computations in Plasma Physics.

    ERIC Educational Resources Information Center

    Cohen, Bruce I.; Killeen, John

    1983-01-01

    Discusses contributions of computers to research in magnetic and inertial-confinement fusion, charged-particle-beam propagation, and space sciences. Considers use in design/control of laboratory and spacecraft experiments and in data acquisition; and reviews major plasma computational methods and some of the important physics problems they…

  17. K-Nearest Neighbor Algorithm Optimization in Text Categorization

    NASA Astrophysics Data System (ADS)

    Chen, Shufeng

    2018-01-01

    K-Nearest Neighbor (KNN) classification is one of the simplest methods of data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings, such as a large amount of sample computation and strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (quick k-nearest neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity calculation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
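
    An illustrative two-stage version of this idea: shrink the library to cluster representatives (plain k-means standing in for CURE), then run KNN against the representatives only. Data, sizes, and the voting rule are assumptions.

    ```python
    # Representative-sample KNN: search 200 cluster reps instead of 10,000 samples.
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(4)
    X = rng.normal(size=(10000, 20))                    # full sample library
    y = (X[:, 0] > 0).astype(int)                       # toy class labels

    reps, label = kmeans2(X, 200, seed=4, minit="++")   # representative samples
    rep_class = np.array([np.bincount(y[label == c]).argmax()
                          if np.any(label == c) else 0  # guard empty clusters
                          for c in range(200)])

    def qknn_predict(q, k=5):
        d = np.linalg.norm(reps - q, axis=1)            # distances to reps only
        nearest = np.argsort(d)[:k]
        return np.bincount(rep_class[nearest]).argmax() # majority vote

    q = rng.normal(size=20)
    print(qknn_predict(q))
    ```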

  18. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  19. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
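
    A toy Davidson-type iteration in the nonorthonormal spirit: preconditioned residuals are appended to the basis without orthonormalization, so each step solves a small generalized eigenproblem (V^T A V) y = θ (V^T V) y. The test matrix, preconditioner, and numerical guards are illustrative choices, not the paper's implementation.

    ```python
    # Nonorthonormal-subspace eigenvalue sketch for the lowest eigenpair.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(5)
    n = 400
    A = np.diag(np.arange(1.0, n + 1)) + 1e-3 * rng.normal(size=(n, n))
    A = 0.5 * (A + A.T)                              # symmetric test matrix
    diag = np.diag(A)

    v = rng.normal(size=n); v /= np.linalg.norm(v)
    V = [v]
    for it in range(30):
        Vm = np.column_stack(V)
        S = Vm.T @ Vm + 1e-12 * np.eye(len(V))       # overlap: basis is NOT orthonormal
        H = Vm.T @ A @ Vm
        theta, y = eigh(H, S)                        # small generalized eigenproblem
        u = Vm @ y[:, 0]                             # Ritz vector for the lowest root
        r = A @ u - theta[0] * u
        if np.linalg.norm(r) < 1e-6:
            break
        denom = diag - theta[0]
        denom[np.abs(denom) < 1e-2] = 1e-2           # guard the diagonal preconditioner
        V.append(r / denom)                          # append WITHOUT orthonormalization
    print(it, float(theta[0]))                       # approaches the smallest eigenvalue (~1)
    ```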

  20. Using the area under the curve to reduce measurement error in predicting young adult blood pressure from childhood measures.

    PubMed

    Cook, Nancy R; Rosner, Bernard A; Chen, Wei; Srinivasan, Sathanur R; Berenson, Gerald S

    2004-11-30

    Tracking correlations of blood pressure, particularly childhood measures, may be attenuated by within-person variability. Combining multiple measurements can reduce this error substantially. The area under the curve (AUC) computed from longitudinal growth curve models can be used to improve the prediction of young adult blood pressure from childhood measures. Quadratic random-effects models over unequally spaced repeated measures were used to compute the area under the curve separately within the age periods 5-14 and 20-34 years in the Bogalusa Heart Study. This method adjusts for the uneven age distribution and captures the underlying or average blood pressure, leading to improved estimates of correlation and risk prediction. Tracking correlations were computed by race and gender, and were approximately 0.6 for systolic, 0.5-0.6 for K4 diastolic, and 0.4-0.6 for K5 diastolic blood pressure. The AUC can also be used to regress young adult blood pressure on childhood blood pressure and childhood and young adult body mass index (BMI). In these data, while childhood blood pressure and young adult BMI were generally directly predictive of young adult blood pressure, childhood BMI was negatively correlated with young adult blood pressure when childhood blood pressure was in the model. In addition, racial differences in young adult blood pressure were reduced, but not eliminated, after controlling for childhood blood pressure, childhood BMI, and young adult BMI, suggesting that other genetic or lifestyle factors contribute to this difference. 2004 John Wiley & Sons, Ltd.
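
    A single-child simplification of the AUC idea: fit a quadratic to unequally spaced measurements and integrate it analytically over the age window, giving an average level that damps within-person variability. The paper pools subjects with quadratic random-effects models; the visit data below are invented.

    ```python
    # Quadratic-curve AUC for one child's blood-pressure series (toy data).
    import numpy as np

    ages = np.array([5.2, 7.9, 9.4, 12.1, 13.8])        # unequally spaced visits (years)
    sbp = np.array([98.0, 103.0, 101.0, 110.0, 115.0])  # systolic BP at each visit

    c2, c1, c0 = np.polyfit(ages, sbp, deg=2)           # quadratic growth curve
    a, b = 5.0, 14.0                                    # age period of interest

    def antideriv(t):                                   # exact integral of the quadratic
        return c2 * t**3 / 3 + c1 * t**2 / 2 + c0 * t

    auc_mean = (antideriv(b) - antideriv(a)) / (b - a)  # area under curve / width
    print(f"average childhood SBP ~ {auc_mean:.1f} mmHg")
    ```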

  1. Automation in the Space Station module power management and distribution Breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Lollar, Louis F.

    1990-01-01

    The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.

  2. Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics

    DTIC Science & Technology

    2013-12-01

    We use the ST-VMS method in combination with the ST-SUPS method. The structural mechanics computations are based on the Kirchhoff–Love shell model. We use a sequential coupling technique, which is applicable to some classes of FSI problems.

  3. An Interdisciplinary Guided Inquiry on Estuarine Transport Using a Computer Model in High School Classrooms

    ERIC Educational Resources Information Center

    Chan, Kit Yu Karen; Yang, Sylvia; Maliska, Max E.; Grunbaum, Daniel

    2012-01-01

    The National Science Education Standards have highlighted the importance of active learning and reflection for contemporary scientific methods in K-12 classrooms, including the use of models. Computer modeling and visualization are tools that researchers employ in their scientific inquiry process, and often computer models are used in…

  4. Estimating Performance of Single Bus, Shared Memory Multiprocessors

    DTIC Science & Technology

    1987-05-01

    [Chandy78] K.M. Chandy, C.M. Sauer, "Approximate methods for analyzing queuing network models of computing systems," Computing Surveys, vol. 10, no. 3. [Denning78] P. Denning, J. Buzen, "The operational analysis of queueing network models," Computing Surveys, vol. 10, no. 3, September 1978, pp. 225-261.

  5. Nodal Statistics for the Van Vleck Polynomials

    NASA Astrophysics Data System (ADS)

    Bourget, Alain

    The Van Vleck polynomials naturally arise from the generalized Lamé equation as the polynomials of degree for which Eq. (1) has a polynomial solution of some degree k. In this paper, we compute the limiting distribution, as well as the limiting mean level spacings distribution of the zeros of any Van Vleck polynomial as N --> ∞.

  6. A fully parallel in time and space algorithm for simulating the electrical activity of a neural tissue.

    PubMed

    Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge

    2016-01-15

    The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly demanding in terms of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing electrical propagation in neuronal tissue, using the parareal algorithm coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the method to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a calculation time comparable to that of the presented GPU method. A gain of a factor 100 in computational time between the sequential results and those obtained using the GPU has been achieved in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, it is the first time such a method has been used, even in the field of neuroscience. Parallelization in time, coupled with GPU parallelization in space, drastically reduces computational time while retaining a fine resolution of the model describing the propagation of the electrical signal in a neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
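
    A compact parareal skeleton on the scalar decay equation du/dt = -u: a cheap coarse propagator G and a fine propagator F are combined through the standard correction U_{n+1} = G(U_n, new) + F(U_n, old) - G(U_n, old); the F evaluations within one iteration are the part that runs in parallel (on GPUs in the paper). The ODE and step counts are toy assumptions.

    ```python
    # Parareal skeleton for du/dt = lam * u (toy problem, serial stand-in).
    import numpy as np

    lam, T, N = -1.0, 2.0, 10                 # decay rate, horizon, coarse time slices
    dt = T / N

    def G(u, dt):                             # coarse propagator: one Euler step
        return u + dt * lam * u

    def F(u, dt, m=100):                      # fine propagator: m Euler substeps
        for _ in range(m):
            u = u + (dt / m) * lam * u
        return u

    U = np.zeros(N + 1); U[0] = 1.0
    for n in range(N):                        # initial coarse sweep
        U[n + 1] = G(U[n], dt)

    for k in range(5):                        # parareal iterations
        F_old = np.array([F(U[n], dt) for n in range(N)])  # embarrassingly parallel
        G_old = np.array([G(U[n], dt) for n in range(N)])
        for n in range(N):                    # cheap sequential correction sweep
            U[n + 1] = G(U[n], dt) + F_old[n] - G_old[n]

    print(U[-1], np.exp(lam * T))             # parareal result vs exact solution
    ```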

  7. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
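
    A rough sketch of the location-dependent compression described here: inverse-FFT along the fully sampled readout, take an SVD at each x position to get per-location compression matrices, and crudely align neighboring matrices (by sign only below; the actual method computes a full alignment rotation). All data and sizes are synthetic.

    ```python
    # Per-location SVD coil compression with a naive alignment step.
    import numpy as np

    rng = np.random.default_rng(6)
    nx, ny, nc, nv = 64, 48, 32, 6             # readout, phase, coils, virtual coils
    kspace = rng.normal(size=(nx, ny, nc)) + 1j * rng.normal(size=(nx, ny, nc))

    hybrid = np.fft.ifft(kspace, axis=0)       # x is fully sampled -> hybrid (x, ky, coil)
    compressed = np.empty((nx, ny, nv), complex)
    A_prev = None
    for x in range(nx):
        U, s, Vh = np.linalg.svd(hybrid[x], full_matrices=False)   # (ny, nc) slab
        A = Vh[:nv].conj().T                   # nc -> nv compression matrix at this x
        if A_prev is not None:                 # crude alignment with the previous location
            signs = np.sign(np.real(np.sum(A_prev.conj() * A, axis=0)))
            signs[signs == 0] = 1.0
            A = A * signs
        compressed[x] = hybrid[x] @ A
        A_prev = A
    print(compressed.shape)                    # (64, 48, 6) virtual-coil data
    ```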

  8. Parametric Net Influx Rate Images of 68Ga-DOTATOC and 68Ga-DOTATATE: Quantitative Accuracy and Improved Image Contrast.

    PubMed

    Ilan, Ezgi; Sandström, Mattias; Velikyan, Irina; Sundin, Anders; Eriksson, Barbro; Lubberink, Mark

    2017-05-01

    68Ga-DOTATOC and 68Ga-DOTATATE are radiolabeled somatostatin analogs used for the diagnosis of somatostatin receptor-expressing neuroendocrine tumors (NETs), and SUV measurements are suggested for treatment monitoring. However, changes in net influx rate (Ki) may better reflect treatment effects than those of the SUV, and accordingly there is a need to compute parametric images showing Ki at the voxel level. The aim of this study was to evaluate methods for computation of parametric Ki images by comparison to volume of interest (VOI)-based methods and to assess image contrast in terms of tumor-to-liver ratio. Methods: Ten patients with metastatic NETs underwent a 45-min dynamic PET examination followed by whole-body PET/CT at 1 h after injection of 68Ga-DOTATOC and 68Ga-DOTATATE on consecutive days. Parametric Ki images were computed using a basis function method (BFM) implementation of the 2-tissue-irreversible-compartment model and the Patlak method using a descending-aorta image-derived input function, and mean tumor Ki values were determined for 50% isocontour VOIs and compared with Ki values based on nonlinear regression (NLR) of the whole-VOI time-activity curve. A subsample of healthy liver was delineated in the whole-body and Ki images, and tumor-to-liver ratios were calculated to evaluate image contrast. Correlation (R2) and agreement between VOI-based and parametric Ki values were assessed using regression and Bland-Altman analysis. Results: The R2 between NLR-based and parametric image-based (BFM) tumor Ki values was 0.98 (slope, 0.81) and 0.97 (slope, 0.88) for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. For Patlak analysis, the R2 between NLR-based and parametric-based (Patlak) tumor Ki was 0.95 (slope, 0.71) and 0.92 (slope, 0.74) for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. There was no bias between NLR- and parametric-based Ki values. Tumor-to-liver contrast was 1.6 and 2.0 times higher in the parametric BFM Ki images and 2.3 and 3.0 times higher in the Patlak images than in the whole-body images for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. Conclusion: A high R2 and agreement between NLR- and parametric-based Ki values was found, showing that Ki images are quantitatively accurate. In addition, tumor-to-liver contrast was superior in the parametric Ki images compared with whole-body images for both 68Ga-DOTATOC and 68Ga-DOTATATE. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
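
    The Patlak ingredient is easy to sketch: after an equilibration time t*, the ratio Ct(t)/Cp(t) is linear in the "Patlak time" ∫Cp/Cp, and the slope of that line is Ki. The time-activity curves below are synthetic stand-ins for the image-derived aorta input and a tumor voxel.

    ```python
    # Patlak-plot estimate of the net influx rate Ki (synthetic TACs).
    import numpy as np

    t = np.linspace(0.25, 45, 90)                            # minutes (dynamic frames)
    Cp = 8.0 * np.exp(-0.25 * t) + 1.0 * np.exp(-0.01 * t)   # toy plasma input
    Ki_true, Vb = 0.05, 0.3
    integ = np.cumsum(Cp) * (t[1] - t[0])                    # running integral of Cp
    Ct = Ki_true * integ + Vb * Cp                           # irreversible-uptake behavior

    mask = t > 10.0                                          # frames after t* = 10 min
    x = integ[mask] / Cp[mask]                               # Patlak abscissa
    y = Ct[mask] / Cp[mask]                                  # Patlak ordinate
    Ki_est, intercept = np.polyfit(x, y, 1)                  # slope = net influx rate
    print(f"Ki = {Ki_est:.4f} /min (true {Ki_true})")
    ```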

  9. Bias-Free Chemically Diverse Test Sets from Machine Learning.

    PubMed

    Swann, Ellen T; Fernandez, Michael; Coote, Michelle L; Barnard, Amanda S

    2017-08-14

    Current benchmarking methods in quantum chemistry rely on databases that are built using a chemist's intuition. It is not fully understood how diverse or representative these databases truly are. Multivariate statistical techniques like archetypal analysis and K-means clustering have previously been used to summarize large sets of nanoparticles; however, molecules are more diverse and not as easily characterized by descriptors. In this work, we compare three sets of descriptors based on the one-, two-, and three-dimensional structure of a molecule. Using data from the NIST Computational Chemistry Comparison and Benchmark Database and machine learning techniques, we demonstrate the functional relationship between these structural descriptors and the electronic energy of molecules. Archetypes and prototypes found with topological or Coulomb matrix descriptors can be used to identify smaller, statistically significant test sets that better capture the diversity of chemical space. We apply this same method to find a diverse subset of organic molecules to demonstrate how the methods can easily be reapplied to individual research projects. Finally, we use our bias-free test sets to assess the performance of density functional theory and quantum Monte Carlo methods.
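
    One simple realization of this selection strategy: cluster molecules in descriptor space with K-means and keep the molecule nearest each centroid as a prototype. The random descriptors below are placeholders for Coulomb-matrix or topological features.

    ```python
    # Diverse test-set selection: prototypes nearest each K-means centroid.
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(7)
    descriptors = rng.normal(size=(2000, 30))          # one row per molecule (placeholder)

    k = 25                                             # target test-set size
    centroids, label = kmeans2(descriptors, k, seed=7, minit="++")

    prototypes = []
    for c in range(k):
        members = np.flatnonzero(label == c)
        if members.size == 0:                          # guard against empty clusters
            continue
        d = np.linalg.norm(descriptors[members] - centroids[c], axis=1)
        prototypes.append(int(members[np.argmin(d)]))
    print(sorted(prototypes))                          # indices of a diverse subset
    ```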

  10. Switching Reinforcement Learning for Continuous Action Space

    NASA Astrophysics Data System (ADS)

    Nagayoshi, Masato; Murao, Hajime; Tamaki, Hisashi

    Reinforcement Learning (RL) attracts much attention as a technique of realizing computational intelligence such as adaptive and autonomous decentralized systems. In general, however, it is not easy to put RL into practical use. This difficulty includes a problem of designing a suitable action space of an agent, i.e., satisfying two requirements in trade-off: (i) to keep the characteristics (or structure) of an original search space as much as possible in order to seek strategies that lie close to the optimal, and (ii) to reduce the search space as much as possible in order to expedite the learning process. In order to design a suitable action space adaptively, we propose switching RL model to mimic a process of an infant's motor development in which gross motor skills develop before fine motor skills. Then, a method for switching controllers is constructed by introducing and referring to the “entropy”. Further, through computational experiments by using robot navigation problems with one and two-dimensional continuous action space, the validity of the proposed method has been confirmed.

  11. Simultaneous macula detection and optic disc boundary segmentation in retinal fundus images

    NASA Astrophysics Data System (ADS)

    Girard, Fantin; Kavalec, Conrad; Grenier, Sébastien; Ben Tahar, Houssem; Cheriet, Farida

    2016-03-01

    The optic disc (OD) and the macula are important structures in automatic diagnosis of most retinal diseases inducing vision defects such as glaucoma, diabetic or hypertensive retinopathy and age-related macular degeneration. We propose a new method to detect simultaneously the macula and the OD boundary. First, the color fundus images are processed to compute several maps highlighting the different anatomical structures such as vessels, the macula and the OD. Then, macula candidates and OD candidates are found simultaneously and independently using seed detectors identified on the corresponding maps. After selecting a set of macula/OD pairs, the top candidates are sent to the OD segmentation method. The segmentation method is based on local K-means applied to color coordinates in polar space, followed by a polynomial-fitting regularization step. Pair scores are updated, resulting in the final best macula/OD pair. The method was evaluated on two public image databases: ONHSD and MESSIDOR. The results show an overlapping area of 0.84 on ONHSD and 0.90 on MESSIDOR, which is better than recent state-of-the-art methods. Our segmentation method is robust to contrast and illumination problems and outputs the exact boundary of the OD, not just a circular or elliptical model. The macula detection has an accuracy of 94%, which again outperforms other macula detection methods. This shows that combining the OD and macula detections improves the overall accuracy. The computation time for the whole process is 6.4 seconds, which is faster than other methods in the literature.

  12. Computational study of the thermochemistry of N₂O₅ and the kinetics of the reaction N₂O₅ + H₂O → 2 HNO₃.

    PubMed

    Alecu, I M; Marshall, Paul

    2014-12-04

    The multistructural method for torsional anharmonicity (MS-T) is employed to compute anharmonic conformationally averaged partition functions, which then serve as the basis for the calculation of thermochemical parameters for N2O5 over the temperature range 0-3000 K, and thermal rate constants for the hydrolysis reaction N2O5 + H2O → 2 HNO3 over the temperature range 180-1800 K. The M06-2X hybrid meta-GGA density functional paired with the MG3S basis set is used to compute the properties of all stationary points and the energies, gradients, and Hessians of nonstationary points along the reaction path, with further energy refinement at stationary points obtained via single-point CCSD(T)-F12a/cc-pVTZ-F12 calculations including corrections for core-valence and scalar relativistic effects. The internal rotations in dinitrogen pentoxide are found to generate three structures (conformations) whose contributions are included in the partition function via the MS-T formalism, leading to a computed value for S°(298.15)(N2O5) of 353.45 J mol(-1) K(-1). This new estimate for S°(298.15)(N2O5) is used to reanalyze the equilibrium constants for the reaction NO3 + NO2 = N2O5 measured by Osthoff et al. [Phys. Chem. Chem. Phys. 2007, 9, 5785-5793] to arrive at ΔfH°(298.15)(N2O5) = 14.31 ± 0.53 kJ mol(-1) via the third-law method, which compares well with our computed ab initio value of 13.53 ± 0.56 kJ mol(-1). Finally, multistructural canonical variational-transition-state theory with multidimensional tunneling (MS-CVT/MT) is used to study the kinetics of hydrolysis of N2O5 by a single water molecule, whose rate constant can be summarized by the Arrhenius expression 9.51 × 10(-17) (T/298 K)(3.354) e(-7900 K/T) cm3 molecule(-1) s(-1) over the temperature range 180-1800 K.
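
    The quoted fit can be evaluated directly; the sketch below simply codes the Arrhenius expression given in the abstract (units: cm3 molecule(-1) s(-1)).

    ```python
    # Modified Arrhenius fit for N2O5 + H2O -> 2 HNO3, as quoted in the abstract.
    import numpy as np

    def k_hydrolysis(T):
        """Rate constant in cm^3 molecule^-1 s^-1, valid for 180-1800 K."""
        return 9.51e-17 * (T / 298.0) ** 3.354 * np.exp(-7900.0 / T)

    for T in (298.0, 600.0, 1200.0):
        print(f"T = {T:6.1f} K   k = {k_hydrolysis(T):.3e}")
    ```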

  13. High-kVp Assisted Metal Artifact Reduction for X-ray Computed Tomography

    PubMed Central

    Xi, Yan; Jin, Yannan; De Man, Bruno; Wang, Ge

    2016-01-01

    In X-ray computed tomography (CT), the presence of metallic parts in patients causes serious artifacts and degrades image quality. Many algorithms were published for metal artifact reduction (MAR) over the past decades with various degrees of success but without a perfect solution. Some MAR algorithms are based on the assumption that metal artifacts are due only to strong beam hardening and may fail in the case of serious photon starvation. Iterative methods handle photon starvation by discarding or underweighting corrupted data, but the results are not always stable and they come with high computational cost. In this paper, we propose a high-kVp-assisted CT scan mode combining a standard CT scan with a few projection views at a high-kVp value to obtain critical projection information near the metal parts. This method only requires minor hardware modifications on a modern CT scanner. Two MAR algorithms are proposed: dual-energy normalized MAR (DNMAR) and high-energy embedded MAR (HEMAR), aiming at situations without and with photon starvation respectively. Simulation results obtained with the CT simulator CatSim demonstrate that the proposed DNMAR and HEMAR methods can eliminate metal artifacts effectively. PMID:27891293

  14. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    NASA Astrophysics Data System (ADS)

    Zhen, Z.; Jia, X.

    2014-12-01

    Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information of the nodes and the boundary of the study area are required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus, when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP) and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of the seismic wave in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. It is observed that our strategy can significantly reduce the computational time of K and M compared with the algorithm based on the CPU. The model tested is the Marmousi model: its length is 7425 m and its depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPU-GPP approach substantially improves the efficiency: the speedup ratio for computing K and M is 120, and the speedup ratio for the RTM is 11.5, while the accuracy of imaging is not harmed. Another advantage of the GPU-GPP method is its easy application in other numerical methods such as the FEM. Finally, in the GPU-GPP method, the arrays require quite limited memory storage, which makes the method promising for large-scale 3D problems.
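
    The storage/solver pattern mentioned here is easy to sketch, with scipy standing in for the CULA Sparse solver on the GPU: assemble a sparse SPD matrix in CSR format and solve K u = b with conjugate gradients. The 1-D Laplacian below is a placeholder for the element-free stiffness matrix.

    ```python
    # CSR assembly + conjugate-gradient solve (CPU stand-in for the GPU workflow).
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    n = 2000
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # toy SPD stiffness
    b = np.ones(n)

    u, info = cg(K, b)                       # info == 0 indicates convergence
    print(info, float(np.linalg.norm(K @ u - b)))
    ```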

  15. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  16. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Technical Reports Server (NTRS)

    Kallinderis, Y.

    1995-01-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  17. Accelerating Dynamic Magnetic Resonance Imaging (MRI) for Lung Tumor Tracking Based on Low-Rank Decomposition in the Spatial–Temporal Domain: A Feasibility Study Based on Simulation and Preliminary Prospective Undersampled MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarma, Manoj; Department of Radiation Oncology, University of California, Los Angeles, California; Hu, Peng

    Purpose: To evaluate a low-rank decomposition method to reconstruct down-sampled k-space data for the purpose of tumor tracking. Methods and Materials: Seven retrospective lung cancer patients were included in the simulation study. The fully sampled k-space data were first generated from existing 2-dimensional dynamic MR images and then down-sampled by 5× to 20× before reconstruction, using a Cartesian undersampling mask. Two methods, a low-rank decomposition method using combined dynamic MR images (k-t SLR, based on sparsity and low-rank penalties) and a total variation (TV) method using individual dynamic MR frames, were used to reconstruct images. The tumor trajectories were derived on the basis of autosegmentation of the resultant images. To further test its feasibility, k-t SLR was used to reconstruct prospective data of a healthy subject. An undersampled balanced steady-state free precession sequence with the same undersampling mask was used to acquire the imaging data. Results: In the simulation study, higher imaging fidelity and lower noise levels were achieved with k-t SLR compared with TV. At 10× undersampling, the k-t SLR method resulted in an average normalized mean square error <0.05, as opposed to 0.23 by using the TV reconstruction on individual frames. Less than 6% showed tracking errors >1 mm with 10× down-sampling using k-t SLR, as opposed to 17% using TV. In the prospective study, k-t SLR substantially reduced reconstruction artifacts and retained anatomic details. Conclusions: Magnetic resonance reconstruction using k-t SLR on highly undersampled dynamic MR imaging data results in high image quality useful for tumor tracking. The k-t SLR was superior to TV by better exploiting the intrinsic anatomic coherence of the same patient. The feasibility of k-t SLR was demonstrated by prospective imaging acquisition and reconstruction.
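
    The low-rank ingredient of k-t SLR can be illustrated in isolation: stack the dynamic frames as columns of a space-time (Casorati) matrix and soft-threshold its singular values, which suppresses incoherent artifacts while keeping the few coherent temporal modes. Real k-t SLR couples this with a sparsity penalty inside an iterative reconstruction; the sizes, rank, and noise level here are assumptions.

    ```python
    # Singular-value thresholding on a space-time matrix (toy dynamic series).
    import numpy as np

    rng = np.random.default_rng(8)
    npix, nframes, rank = 900, 40, 3
    U = rng.normal(size=(npix, rank))
    V = rng.normal(size=(nframes, rank))
    clean = U @ V.T                                     # rank-3 dynamic image series
    noisy = clean + 0.5 * rng.normal(size=clean.shape)  # stand-in for undersampling artifacts

    u, s, vh = np.linalg.svd(noisy, full_matrices=False)
    s_shrunk = np.maximum(s - 0.8 * s[rank], 0.0)       # soft-threshold the singular values
    denoised = (u * s_shrunk) @ vh

    err = lambda X: np.linalg.norm(X - clean) / np.linalg.norm(clean)
    print(f"relative error: noisy {err(noisy):.3f} -> denoised {err(denoised):.3f}")
    ```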

  18. Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging

    DTIC Science & Technology

    2016-12-15

    S. Klein, “HF Vector Sensor for Radio Astronomy: Ground Testing Results,” in AIAA SPACE 2016, ser. AIAA SPACE Forum, American Institute of ... astronomy,” in 2016 IEEE Aerospace Conference, Mar. 2016, pp. 1-17. doi: 10.1109/AERO.2016.7500688. [4] K.-C. Ho, K.-C. Tan, and B. T. G. Tan, “Estimation of ... “Statistical Imaging in Radio Astronomy via an Expectation-Maximization Algorithm for Structured Covariance Estimation,” in Statistical Methods in Imaging.

  19. Thermal conductivity of Rene 41 honeycomb panels. [space transportation vehicles

    NASA Technical Reports Server (NTRS)

    Deriugin, V.

    1980-01-01

    Effective thermal conductivities of Rene 41 panels suitable for advanced space transportation vehicle structures were determined analytically and experimentally for temperatures between 20.4 K (-423 °F) and 1186 K (1675 °F). The cryogenic data were obtained using a cryostat, whereas the high-temperature data were measured using a heat flow meter and a comparative thermal conductivity instrument, respectively. Comparisons were made between analysis and experimental data. Analytical methods appear to provide a reasonable definition of the honeycomb panel effective thermal conductivities.

  20. 3-DIMENSIONAL Optoelectronic

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, Ashok Venketaraman

    This thesis covers the design, analysis, optimization, and implementation of optoelectronic (N,M,F) networks. (N,M,F) networks are generic space-division networks that are well suited to implementation using optoelectronic integrated circuits and free-space optical interconnects. An (N,M,F) network consists of N input channels each having a fanout F_o, M output channels each having a fanin F_i, and log_K(N/F) stages of K x K switches. The functionality of the fanout, switching, and fanin stages depends on the specific application. Three applications of optoelectronic (N,M,F) networks are considered. The first is an optoelectronic (N,1,1) content-addressable memory system that achieves associative recall on two-dimensional images retrieved from a parallel-access optical memory. The design and simulation of the associative memory are discussed, and an experimental emulation of a prototype system using images from a parallel-readout optical disk is presented. The system design provides superior performance to existing electronic content-addressable memory chips in terms of capacity and search rate, and uses readily available optical disk and VLSI technologies. Next, a scalable optoelectronic (N,M,F) neural network that uses free-space holographic optical interconnects is presented. The neural architecture minimizes the number of optical transmitters needed, and provides accurate electronic fanin with low signal skew and dendritic-type fan-in processing capability in a compact layout. Optimal data-encoding methods and circuit techniques are discussed. The implementation of a prototype optoelectronic neural system and its application to a simple recognition task are demonstrated. Finally, the design, analysis, and optimization of an (N,N,F) self-routing, packet-switched multistage interconnection network is described. The network is suitable for parallel computing and broadband switching applications. The tradeoff between optical and electronic interconnects is examined quantitatively by varying the electronic switch size K. The performance of the (N,N,F) network versus the fanning parameter F is also analyzed. It is shown that optoelectronic (N,N,F) networks provide a range of performance-cost alternatives, and offer superior performance-per-cost relative to fully electronic switching networks and to previous network designs.
