Sample records for k-space computational method

  1. MR thermometry characterization of a hyperthermia ultrasound array designed using the k-space computational method

    PubMed Central

    Al-Bataineh, Osama M; Collins, Christopher M; Park, Eun-Joo; Lee, Hotaik; Smith, Nadine Barrie

    2006-01-01

    Background Ultrasound induced hyperthermia is a useful adjuvant to radiation therapy in the treatment of prostate cancer. A uniform thermal dose (43°C for 30 minutes) is required within the targeted cancerous volume for effective therapy. This requires a specific ultrasound phased array design and an appropriate thermometry method. Inhomogeneous, acoustical, three-dimensional (3D) prostate models and economical computational methods provide necessary tools to predict the appropriate shape of hyperthermia phased arrays for better focusing. This research utilizes the k-space computational method and a 3D human prostate model to design an intracavitary ultrasound probe for hyperthermia treatment of prostate cancer. Evaluation of the probe includes ex vivo and in vivo controlled hyperthermia experiments using noninvasive magnetic resonance imaging (MRI) thermometry. Methods A 3D acoustical prostate model was created using photographic data from the Visible Human Project®. The k-space computational method was used on this coarse-grid, inhomogeneous tissue model to simulate the steady state pressure wavefield of the designed phased array using the linear acoustic wave equation. To ensure the uniformity and spread of the pressure in the length of the array, and the focusing capability in the width of the array, the equally-sized elements of the 4 × 20 element phased array were 1 × 14 mm. A probe was constructed according to the design in simulation using lead zirconate titanate (PZT-8) ceramic and a Delrin® plastic housing. Noninvasive MRI thermometry and a switching feedback controller were used to accomplish ex vivo and in vivo hyperthermia evaluations of the probe. Results Both exposimetry and k-space simulation results demonstrated acceptable agreement within 9%. With a desired temperature plateau of 43.0°C, ex vivo and in vivo controlled hyperthermia experiments showed that the MRI temperature at the steady state was 42.9 ± 0.38°C and 43.1 ± 0.80

  2. A k-Space Method for Moderately Nonlinear Wave Propagation

    PubMed Central

    Jing, Yun; Wang, Tianren; Clement, Greg T.

    2013-01-01

    A k-space method for moderately nonlinear wave propagation in absorptive media is presented. The Westervelt equation is first transferred into k-space via Fourier transformation, and is solved by a modified wave-vector time-domain scheme. The present approach is not limited to forward propagation or the parabolic approximation. One- and two-dimensional problems are investigated to verify the method by comparing results to analytic solutions and to the finite-difference time-domain (FDTD) method. It is found that, to obtain accurate results in homogeneous media, the spatial sampling can be as coarse as two grid points per wavelength, and for a moderately nonlinear problem, the Courant–Friedrichs–Lewy number can be as large as 0.4. Through comparisons with the conventional FDTD method, the k-space method for nonlinear wave propagation is shown here to be computationally more efficient and accurate. The k-space method is then employed to study three-dimensional nonlinear wave propagation through the skull, which shows that relatively accurate focusing can be achieved in the brain at a high frequency by sending a low frequency from the transducer. Finally, an implementation of the k-space method on a single graphics processing unit required about one-seventh the computation time of a single-core CPU calculation. PMID:22899114
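
    As a minimal illustration of the wave-vector time-domain idea in this abstract, the sketch below advances the linear, lossless, homogeneous 1D wave equation with the exact k-space temporal propagator; the nonlinear and absorption terms of the Westervelt equation are omitted, and the grid, medium, and time-step values are made-up assumptions. Because the propagator is exact for a homogeneous medium, the time step can exceed the usual finite-difference CFL limit, which is the property the abstract exploits.

    ```python
    # Minimal 1-D sketch of a wave-vector time-domain (k-space) update for the
    # linear, lossless, homogeneous wave equation. It only illustrates the idea of
    # Fourier-domain propagation with an exact temporal propagator; the nonlinear
    # and absorbing terms of the Westervelt equation are omitted, and all values
    # below are illustrative.
    import numpy as np

    nx, dx = 1024, 1e-4              # grid points and spacing [m]
    c0, dt = 1500.0, 1e-7            # sound speed [m/s]; dt exceeds dx/c0 (the FDTD CFL limit)
    x = np.arange(nx) * dx
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)        # angular spatial frequencies

    p = np.exp(-((x - x.mean()) / (20 * dx)) ** 2)  # initial Gaussian pulse
    p_prev = p.copy()                               # zero initial velocity

    # exact homogeneous-medium propagator: P(t+dt) = 2*cos(c0*|k|*dt)*P(t) - P(t-dt),
    # written as the correction term -4*sin^2(c0*|k|*dt/2) applied in k-space
    kappa = -4.0 * np.sin(c0 * np.abs(k) * dt / 2.0) ** 2

    for _ in range(300):
        correction = np.fft.ifft(kappa * np.fft.fft(p)).real
        p, p_prev = 2 * p - p_prev + correction, p

    print("pulse peak now at x = %.4f m (started at x = %.4f m)" % (x[np.argmax(p)], x.mean()))
    ```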

  3. MR thermometry characterization of a hyperthermia ultrasound array designed using the k-space computational method.

    PubMed

    Al-Bataineh, Osama M; Collins, Christopher M; Park, Eun-Joo; Lee, Hotaik; Smith, Nadine Barrie

    2006-10-25

    Ultrasound induced hyperthermia is a useful adjuvant to radiation therapy in the treatment of prostate cancer. A uniform thermal dose (43 degrees C for 30 minutes) is required within the targeted cancerous volume for effective therapy. This requires a specific ultrasound phased array design and an appropriate thermometry method. Inhomogeneous, acoustical, three-dimensional (3D) prostate models and economical computational methods provide necessary tools to predict the appropriate shape of hyperthermia phased arrays for better focusing. This research utilizes the k-space computational method and a 3D human prostate model to design an intracavitary ultrasound probe for hyperthermia treatment of prostate cancer. Evaluation of the probe includes ex vivo and in vivo controlled hyperthermia experiments using noninvasive magnetic resonance imaging (MRI) thermometry. A 3D acoustical prostate model was created using photographic data from the Visible Human Project. The k-space computational method was used on this coarse-grid, inhomogeneous tissue model to simulate the steady state pressure wavefield of the designed phased array using the linear acoustic wave equation. To ensure the uniformity and spread of the pressure in the length of the array, and the focusing capability in the width of the array, the equally-sized elements of the 4 x 20 element phased array were 1 x 14 mm. A probe was constructed according to the design in simulation using lead zirconate titanate (PZT-8) ceramic and a Delrin plastic housing. Noninvasive MRI thermometry and a switching feedback controller were used to accomplish ex vivo and in vivo hyperthermia evaluations of the probe. Both exposimetry and k-space simulation results demonstrated acceptable agreement within 9%. With a desired temperature plateau of 43.0 degrees C, ex vivo and in vivo controlled hyperthermia experiments showed that the MRI temperature at the steady state was 42.9 +/- 0.38 degrees C and 43.1 +/- 0.80 degrees C

  4. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) < or = c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2

  5. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg. T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, by assuming quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it will be shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as only 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
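
    The time-reversal (phase-conjugation) principle the abstract builds on can be illustrated with a toy free-field example: a numerical point source at the target defines phases at the array elements, the phases are conjugated, and re-emission refocuses at the target. The skull propagation step is replaced here by simple geometric delays in a homogeneous medium; the element layout, frequency, and geometry are invented for the demonstration.

    ```python
    # Toy illustration of time-reversal focusing by phase conjugation in a
    # homogeneous medium: free-field geometric delays replace the skull
    # propagation step of the paper. Element layout, frequency and geometry
    # are invented for this demonstration.
    import numpy as np

    c0, f0 = 1500.0, 0.5e6                              # sound speed [m/s], frequency [Hz]
    elements = np.stack([np.linspace(-0.04, 0.04, 64), np.zeros(64)], axis=1)
    target = np.array([0.0, 0.06])                      # intended focal point [m]

    # forward step: a numerical point source at the target gives each element a
    # phase proportional to its distance from the target
    d_fwd = np.linalg.norm(elements - target, axis=1)
    received_phase = 2 * np.pi * f0 * d_fwd / c0

    # time reversal = phase conjugation: drive each element with the negated phase
    drive_phase = -received_phase

    # evaluate the re-emitted field amplitude along a line through the target
    ys = np.linspace(0.03, 0.09, 301)
    amps = []
    for y in ys:
        d = np.linalg.norm(elements - np.array([0.0, y]), axis=1)
        amps.append(abs(np.sum(np.exp(1j * (drive_phase + 2 * np.pi * f0 * d / c0)) / d)))

    print("peak refocusing at y = %.4f m (target at %.4f m)" % (ys[int(np.argmax(amps))], target[1]))
    ```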

  6. Utilization of the k-space Computational Method to Design an Intracavitary Transrectal Ultrasound Phased Array Applicator for Hyperthermia Treatment of Prostate Cancer

    NASA Astrophysics Data System (ADS)

    Al-Bataineh, Osama M.; Collins, Christopher M.; Sparrow, Victor W.; Keolian, Robert M.; Smith, Nadine Barrie

    2006-05-01

    This research utilizes the k-space computational method to design an intracavitary probe for hyperthermia treatment of prostate cancer. A three-dimensional (3D) photographical prostate model, utilizing imaging data from the Visible Human Project®, was the basis for inhomogeneous acoustical model development. The acoustical model accounted for sound speed, density, and absorption variations. The k-space computational method was used to simulate ultrasound wave propagation of the designed phased array through the acoustical model. To ensure the uniformity and spread of the pressure in the length of the array, and the steering and focusing capability in the width of the array, the equal-sized elements of the phased array were 1 × 14 mm. The anatomical measurements of the prostate were used to predict the final phased array specifications (4 × 20 planar array, 1.2 MHz, element size = 1 × 14 mm, array size = 56 × 20 mm). Good agreement between the exposimetry and the k-space results was achieved. As an example, the -3 dB distances of the focal volume differed by 9.1% in the propagation direction between the k-space prostate simulation and the exposimetry results. Temperature simulations indicated that the rectal wall temperature was elevated less than 2°C during hyperthermia treatment. The steering and focusing ability of the designed probe, in both azimuth and propagation directions, was found to span the entire prostate volume with minimal grating lobes (-10 dB reduction from the main lobe) and least heat damage to the rectal wall. Evaluations of the probe included ex vivo and in vivo controlled experiments to deliver the required thermal dose to the targeted tissue. With a desired temperature plateau of 43.0°C, the MRI temperature results at the steady state were 42.9 ± 0.38°C and 43.1 ± 0.80°C for ex vivo and in vivo experiments, respectively. Unlike conventional computational methods, the k-space method provides a powerful tool to predict pressure wavefield and

  7. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.

  8. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
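
    The Fourier-collocation gradient mentioned in this abstract amounts to multiplying by i*k in the spatial-frequency domain. A minimal 1D sketch on an assumed periodic grid is shown below; the k-space temporal correction, staggered grids, nonlinearity, and power law absorption of the full model are not reproduced.

    ```python
    # Minimal sketch of the Fourier-collocation gradient: the spatial derivative is
    # evaluated by multiplying the spectrum by i*k. Assumes a periodic 1-D grid;
    # the temporal k-space correction, staggered grids, nonlinearity, and power law
    # absorption of the full model are not shown.
    import numpy as np

    nx, dx = 256, 0.5e-3
    x = np.arange(nx) * dx
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)

    p = np.sin(2 * np.pi * 5 * x / (nx * dx))                      # smooth periodic test field
    dpdx_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(p)))   # spectral derivative

    dpdx_exact = (2 * np.pi * 5 / (nx * dx)) * np.cos(2 * np.pi * 5 * x / (nx * dx))
    print("max derivative error:", np.max(np.abs(dpdx_spectral - dpdx_exact)))
    ```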

  9. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
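
    Operationally, channel combination in k-space means convolving each coil's k-space data with a small per-coil kernel and summing the results into a single combined k-space, after which only one inverse FFT is needed. The sketch below shows that operation with placeholder random kernels; in the paper the kernels are calibrated from low-resolution multichannel data, which is not reproduced here.

    ```python
    # Sketch of channel combination performed directly in k-space: each coil's
    # k-space is convolved with a small per-coil kernel and the results are summed,
    # so only one inverse FFT is needed afterwards. The kernels here are random
    # placeholders standing in for calibrated channel-combination kernels.
    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)
    n_coils, ny, nx = 8, 64, 64
    coil_kspace = rng.standard_normal((n_coils, ny, nx)) + 1j * rng.standard_normal((n_coils, ny, nx))
    kernels = rng.standard_normal((n_coils, 5, 5)) + 1j * rng.standard_normal((n_coils, 5, 5))

    combined = np.zeros((ny, nx), dtype=complex)
    for c in range(n_coils):
        combined += convolve2d(coil_kspace[c], kernels[c], mode="same")

    combined_image = np.fft.ifft2(combined)   # a single inverse FFT instead of one per coil
    print(combined_image.shape)
    ```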

  10. Analytical Bistatic k Space Images Compared to Experimental Swept Frequency EAR Images

    NASA Technical Reports Server (NTRS)

    Shaeffer, John; Cooper, Brett; Hom, Kam

    2004-01-01

    A case study of flat plate scattering images obtained by the analytical bistatic k space and experimental swept frequency ISAR methods is presented. The key advantage of the bistatic k space image is that a single excitation is required, i.e., one frequency / one angle. This means that prediction approaches such as MOM only need to compute one solution at a single frequency. Bistatic image Fourier transform data are obtained by computing the scattered field at various bistatic positions about the body in k space. Experimental image Fourier transform data are obtained from the measured response to a bandwidth of frequencies over a target rotation range.

  11. Computational studies of Ras and PI3K

    NASA Technical Reports Server (NTRS)

    Ren, Lei; Cucinotta, Francis A.

    2004-01-01

    Until recently, experimental techniques in molecular cell biology have been the primary means to investigate biological risk from space radiation. However, computational modeling provides an alternative theoretical approach, which utilizes various computational tools to simulate proteins, nucleotides, and their interactions. In this study, we are focused on using molecular mechanics (MM) and molecular dynamics (MD) to study the mechanism of protein-protein binding and to estimate the binding free energy between proteins. Ras is a key element in a variety of cell processes, and its activation of phosphoinositide 3-kinase (PI3K) is important for survival of transformed cells. Different computational approaches for this particular study are presented to calculate the solvation energies and binding free energies of H-Ras and PI3K. The goal of this study is to establish computational methods to investigate the roles that different proteins play in the cellular responses to space radiation, including modification of protein function through gene mutation, and to support studies in molecular cell biology and theoretical kinetics models for our risk assessment project.

  12. A New Soft Computing Method for K-Harmonic Means Clustering.

    PubMed

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
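
    To make the KHM objective and update concrete, the sketch below implements the classic K-harmonic means center update (performance function exponent p = 3.5); the swarm-optimization layer (iSSO) and the variable neighborhood search of the paper are not reproduced, and the toy data set is an assumption.

    ```python
    # Plain K-harmonic means (KHM) iteration: centers are updated so as to reduce
    # the sum over points of the harmonic mean of distances to all centroids.
    # The iSSO and VNS layers of the paper are not reproduced; the exponent p and
    # the toy data are assumptions.
    import numpy as np

    def khm(X, k, p=3.5, iters=50, eps=1e-12, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
            m = d ** (-p - 2) / np.sum(d ** (-p - 2), axis=1, keepdims=True)    # memberships
            w = np.sum(d ** (-p - 2), axis=1) / np.sum(d ** (-p), axis=1) ** 2  # point weights
            mw = m * w[:, None]
            centers = (mw.T @ X) / np.sum(mw, axis=0)[:, None]
        return centers

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in [(0, 0), (3, 3), (0, 3)]])
    print(khm(X, k=3))
    ```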

  13. Advances in locally constrained k-space-based parallel MRI.

    PubMed

    Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S

    2006-02-01

    In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radial and spirals. As a result, the time requirements are drastically reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories. Copyright 2006 Wiley-Liss, Inc.

  14. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized by logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly or circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R (R-Project) code based on the proposed method to compute the reliability of linear and circular systems with a large number of components.
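
    The failure probability of a linear consecutive k-out-of-n:F system with independent components can be computed with a short dynamic program whose state is the length of the current run of failed components. The sketch below is that textbook recursion, not the paper's own method or its R code; the component failure probabilities in the example are arbitrary.

    ```python
    # Textbook dynamic program for the reliability of a *linear* consecutive
    # k-out-of-n:F system with independent components. State = length of the
    # current run of failed components; the system has failed once a run reaches k.
    def linear_consecutive_k_out_of_n_F(p_fail, k):
        """Reliability for a list of per-component failure probabilities p_fail."""
        state = [1.0] + [0.0] * (k - 1)   # state[j]: P(working, last j components failed)
        for q in p_fail:
            new = [0.0] * k
            new[0] = (1.0 - q) * sum(state)     # this component works: run resets
            for j in range(1, k):
                new[j] = q * state[j - 1]       # this component fails: run grows
            state = new                          # runs that reach k are dropped (system failure)
        return sum(state)

    # example: 10 i.i.d. components with failure probability 0.1, k = 2
    print(linear_consecutive_k_out_of_n_F([0.1] * 10, k=2))
    ```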

  15. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  16. Computational methods and software systems for dynamics and control of large space structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.

    1990-01-01

    This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structure. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.

  17. The construction of space-like surfaces with k1k2 - m(k1 + k2) = 1 in Minkowski three-space

    NASA Astrophysics Data System (ADS)

    Cao, Xi-Fang

    2002-07-01

    From solutions of the sinh-Laplace equation, we construct a family of space-like surfaces with k1k2 - m(k1 + k2) = 1 in Minkowski three-space, where k1 and k2 are principal curvatures and m is an arbitrary constant.

  18. Computational method for determining n and k for a thin film from the measured reflectance, transmittance, and film thickness.

    PubMed

    Bennett, J M; Booty, M J

    1966-01-01

    A computational method of determining n and k for an evaporated film from the measured reflectance, transmittance, and film thickness has been programmed for an IBM 7094 computer. The method consists of modifications to the NOTS multilayer film program. The basic program computes normal incidence reflectance, transmittance, phase change on reflection, and other parameters from the optical constants and thicknesses of all materials. In the modification, n and k for the film are varied in a prescribed manner, and the computer picks from among these values one n and one k which yield reflectance and transmittance values almost equalling the measured values. Results are given for films of silicon and aluminum.
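
    The search strategy described above can be sketched generically: compute R and T of a single absorbing film on a transparent substrate with the standard characteristic-matrix formulas at normal incidence, then scan a grid of (n, k) values and keep the pair whose computed R and T best match the measured ones. This is not the NOTS program; the film thickness, wavelength, substrate index, and grid ranges below are invented, and real data may admit more than one (n, k) solution.

    ```python
    # Generic grid search for (n, k) of a single absorbing film from R and T at
    # normal incidence, using the standard thin-film characteristic matrix.
    # All layer parameters below are invented for illustration.
    import numpy as np

    def film_RT(n, k, d, wavelength, n_sub=1.52, n_amb=1.0):
        N = n - 1j * k                              # film complex index
        delta = 2 * np.pi * N * d / wavelength      # phase thickness
        B = np.cos(delta) + (1j / N) * np.sin(delta) * n_sub
        C = 1j * N * np.sin(delta) + np.cos(delta) * n_sub
        R = abs((n_amb * B - C) / (n_amb * B + C)) ** 2
        T = 4 * n_amb * n_sub / abs(n_amb * B + C) ** 2
        return R, T

    d, lam = 50e-9, 550e-9
    R_meas, T_meas = film_RT(4.0, 0.05, d, lam)     # synthetic "measurement"

    ns = np.linspace(1.0, 6.0, 251)
    ks = np.linspace(0.0, 1.0, 201)
    best = min(((n, k) for n in ns for k in ks),
               key=lambda nk: (film_RT(*nk, d, lam)[0] - R_meas) ** 2
                            + (film_RT(*nk, d, lam)[1] - T_meas) ** 2)
    print("recovered n, k:", best)
    ```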

  19. Calculation reduction method for color digital holography and computer-generated hologram using color space conversion

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Nagahama, Yuki; Kakue, Takashi; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Ito, Tomoyoshi

    2014-02-01

    A calculation reduction method for color digital holography (DH) and computer-generated holograms (CGHs) using color space conversion is reported. Color DH and color CGHs are generally calculated in RGB space. We instead calculate color DH and CGHs in other color spaces, e.g. YCbCr color space, to accelerate the calculation. In YCbCr color space, an RGB image or RGB hologram is converted to the luminance component (Y), blue-difference chroma (Cb), and red-difference chroma (Cr) components. The human eye readily notices differences in the luminance component but is far less sensitive to differences in the chroma components. In this method, the luminance component is therefore sampled at full resolution and the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color DH and CGHs. We compute diffraction calculations from the components, and then convert the diffracted results from YCbCr color space back to RGB color space. The proposed method, which in theory can accelerate the calculations by up to a factor of 3, computes more than two times faster than the corresponding calculation in RGB color space.
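
    The color-space step can be made concrete with a short sketch: convert an RGB image to YCbCr with the common ITU-R BT.601 coefficients and down-sample the chroma channels by two in each direction, which is where the saving in diffraction calculations would come from. The hologram/diffraction computation itself is not shown, and the image size and down-sampling factor are illustrative assumptions.

    ```python
    # RGB-to-YCbCr conversion (ITU-R BT.601) with chroma down-sampling, sketching
    # the calculation-reduction step; the diffraction calculations are omitted.
    import numpy as np

    def rgb_to_ycbcr(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b
        cr =  0.5 * r - 0.418688 * g - 0.081312 * b
        return y, cb, cr

    rng = np.random.default_rng(0)
    rgb = rng.random((256, 256, 3))
    y, cb, cr = rgb_to_ycbcr(rgb)

    # chroma down-sampled by 2 in each direction: only a quarter of the diffraction
    # calculations are then needed for the Cb and Cr components
    cb_ds, cr_ds = cb[::2, ::2], cr[::2, ::2]
    print(y.shape, cb_ds.shape, cr_ds.shape)
    ```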

  20. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.

  1. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    PubMed

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2018-07-01

    Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is proposed in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and the classical zero-filling (ZF) methods regardless of the undersampling rates, the noise levels, and the image structures. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their sparsified counterparts produced by the singularizing operator.
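
    For reference, the zero-filling (ZF) baseline that the SKM method is compared against takes only a few lines: missing k-space samples are left at zero and the image is recovered with an inverse FFT. The sketch below uses a toy phantom and an assumed row-wise undersampling mask; the SKM model itself is not reproduced.

    ```python
    # Zero-filling (ZF) reconstruction baseline: leave missing k-space samples at
    # zero and apply an inverse FFT. Phantom and sampling mask are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    image = np.zeros((128, 128)); image[40:90, 30:100] = 1.0       # toy "phantom"
    kspace = np.fft.fftshift(np.fft.fft2(image))

    mask = rng.random(128) < 0.3          # keep ~30% of phase-encode lines...
    mask[60:68] = True                    # ...and always keep the central lines
    undersampled = kspace * mask[:, None]

    zf_recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    print("ZF relative error:", np.linalg.norm(zf_recon - image) / np.linalg.norm(image))
    ```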

  2. [Optimal scan parameters for a method of k-space trajectory (radial scan method) in evaluation of carotid plaque characteristics].

    PubMed

    Nakamura, Manami; Makabe, Takeshi; Tezuka, Hideomi; Miura, Takahiro; Umemura, Takuma; Sugimori, Hiroyuki; Sakata, Motomichi

    2013-04-01

    The purpose of this study was to optimize scan parameters for evaluation of carotid plaque characteristics by k-space trajectory (radial scan method), using a custom-made carotid plaque phantom. The phantom was composed of simulated sternocleidomastoid muscle and four types of carotid plaque. The effect of chemical shift artifact was compared using T1 weighted images (T1WI) of the phantom obtained with and without fat suppression, and using two types of k-space trajectory (the radial scan method and the Cartesian method). The ratio of signal intensity of simulated sternocleidomastoid muscle to the signal intensity of hematoma, blood (including heparin), lard, and mayonnaise was compared among various repetition times (TR) using T1WI and T2 weighted imaging (T2WI). In terms of chemical shift artifacts, image quality was improved using fat suppression for both the radial scan and Cartesian methods. In terms of signal ratio, the highest values were obtained for the radial scan method with TR of 500 ms for T1WI, and TR of 3000 ms for T2WI. For evaluation of carotid plaque characteristics using the radial scan method, chemical shift artifacts were reduced with fat suppression. Signal ratio was improved by optimizing the TR settings for T1WI and T2WI. These results suggest the potential for using magnetic resonance imaging for detailed evaluation of carotid plaque.

  3. Computer modeling of gastric parietal cell: significance of canalicular space, gland lumen, and variable canalicular [K+].

    PubMed

    Crothers, James M; Forte, John G; Machen, Terry E

    2016-05-01

    A computer model, constructed for evaluation of integrated functioning of cellular components involved in acid secretion by the gastric parietal cell, has provided new interpretations of older experimental evidence, showing the functional significance of a canalicular space separated from a mucosal bath by a gland lumen and also shedding light on basolateral Cl(-) transport. The model shows 1) changes in levels of parietal cell secretion (with stimulation or H-K-ATPase inhibitors) result mainly from changes in electrochemical driving forces for apical K(+) and Cl(-) efflux, as canalicular [K(+)] ([K(+)]can) increases or decreases with changes in apical H(+)/K(+) exchange rate; 2) H-K-ATPase inhibition in frog gastric mucosa would increase [K(+)]can similarly with low or high mucosal [K(+)], depolarizing apical membrane voltage similarly, so electrogenic H(+) pumping is not indicated by inhibition causing similar increase in transepithelial potential difference (Vt) with 4 and 80 mM mucosal K(+); 3) decreased H(+) secretion during strongly mucosal-positive voltage clamping is consistent with an electroneutral H-K-ATPase being inhibited by greatly decreased [K(+)]can (Michaelis-Menten mechanism); 4) slow initial change ("long time-constant transient") in current or Vt with clamping of Vt or current involves slow change in [K(+)]can; 5) the Na(+)-K(+)-2Cl(-) symporter (NKCC) is likely to have a significant role in Cl(-) influx, despite evidence that it is not necessary for acid secretion; and 6) relative contributions of Cl(-)/HCO3 (-) exchanger (AE2) and NKCC to Cl(-) influx would differ greatly between resting and stimulated states, possibly explaining reported differences in physiological characteristics of stimulated open-circuit Cl(-) secretion (≈H(+)) and resting short-circuit Cl(-) secretion (>H(+)). Copyright © 2016 the American Physiological Society.

  4. Nonlinear Aeroacoustics Computations by the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.

    2003-01-01

    The Space-Time Conservation Element and Solution Element Method, or CE/SE Method for short, is a recently developed numerical method for conservation laws. Despite its second order accuracy in space and time, it possesses low dispersion errors and low dissipation. The method is robust enough to cover a wide range of compressible flows: from weak linear acoustic waves to strong discontinuous waves (shocks). An outstanding feature of the CE/SE scheme is its truly multi-dimensional, simple but effective non-reflecting boundary condition (NRBC), which is particularly valuable for computational aeroacoustics (CAA). In nature, the method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its careful treatment of the surface fluxes and geometry, it is different from the existing schemes. Currently, the CE/SE scheme has been developed to a matured stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present review paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen and sketched in section 2. Then applications of the 2-D and 3-D CE/SE schemes to linear, and in particular, nonlinear aeroacoustics are depicted in sections 3, 4, and 5 to demonstrate its robustness and capability.

  5. The k-space origins of scattering in Bi2Sr2CaCu2O8+x

    NASA Astrophysics Data System (ADS)

    Alldredge, Jacob W.; Calleja, Eduardo M.; Dai, Jixia; Eisaki, H.; Uchida, S.; McElroy, Kyle

    2013-08-01

    We demonstrate a general, computer automated procedure that inverts the reciprocal space scattering data (q-space) that are measured by spectroscopic imaging scanning tunnelling microscopy (SI-STM) in order to determine the momentum space (k-space) scattering structure. This allows a detailed examination of the k-space origins of the quasiparticle interference (QPI) pattern in Bi2Sr2CaCu2O8+x within the theoretical constraints of the joint density of states (JDOS). Our new method allows measurement of the differences between the positive and negative energy dispersions, the gap structure and an energy dependent scattering length scale. Furthermore, it resolves the transition between the dispersive QPI and the checkerboard (q1* excitation). We have measured the k-space scattering structure over a wide range of doping (p ˜ 0.22-0.08), including regions where the octet model is not applicable. Our technique allows the complete mapping of the k-space scattering origins of the spatial excitations in Bi2Sr2CaCu2O8+x, which allows for better comparisons between SI-STM and other experimental probes of the band structure. By applying our new technique to such a heavily studied compound, we can validate our new general approach for determining the k-space scattering origins from SI-STM data.

  6. The k-space origins of scattering in Bi2Sr2CaCu2O8+x.

    PubMed

    Alldredge, Jacob W; Calleja, Eduardo M; Dai, Jixia; Eisaki, H; Uchida, S; McElroy, Kyle

    2013-08-21

    We demonstrate a general, computer automated procedure that inverts the reciprocal space scattering data (q-space) that are measured by spectroscopic imaging scanning tunnelling microscopy (SI-STM) in order to determine the momentum space (k-space) scattering structure. This allows a detailed examination of the k-space origins of the quasiparticle interference (QPI) pattern in Bi2Sr2CaCu2O8+x within the theoretical constraints of the joint density of states (JDOS). Our new method allows measurement of the differences between the positive and negative energy dispersions, the gap structure and an energy dependent scattering length scale. Furthermore, it resolves the transition between the dispersive QPI and the checkerboard ([Formula: see text] excitation). We have measured the k-space scattering structure over a wide range of doping (p ∼ 0.22-0.08), including regions where the octet model is not applicable. Our technique allows the complete mapping of the k-space scattering origins of the spatial excitations in Bi2Sr2CaCu2O8+x, which allows for better comparisons between SI-STM and other experimental probes of the band structure. By applying our new technique to such a heavily studied compound, we can validate our new general approach for determining the k-space scattering origins from SI-STM data.

  7. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
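
    The identity underlying this abstract, that multiplying an image by a spatial phase term is equivalent to convolving its k-space data with the Fourier transform of that term, can be checked numerically in a few lines. The sketch below does so with a toy image and an assumed linear phase map; field-map estimation, data segmentation, and kernel calibration are not modelled.

    ```python
    # Numerical check: image-space phase modulation equals circular convolution of
    # the k-space data with the FFT of the phase term (up to a 1/N factor).
    # Toy image and phase map are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    phase = np.exp(1j * 0.05 * np.add.outer(np.arange(64), np.arange(64)))  # toy phase map

    k1 = np.fft.fft2(img * phase)                 # route 1: modulate in image space, then FFT

    kernel = np.fft.fft2(phase) / phase.size      # route 2: convolve the k-space with the kernel
    k0 = np.fft.fft2(img)
    k2 = np.zeros_like(k0)
    for u in range(64):
        for v in range(64):
            k2 += kernel[u, v] * np.roll(np.roll(k0, u, axis=0), v, axis=1)

    print("max difference between the two routes:", np.max(np.abs(k1 - k2)))
    ```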

  8. Geometric shapes inversion method of space targets by ISAR image segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Chao-ying; Xing, Xiao-yu; Yin, Hong-cheng; Li, Chen-guang; Zeng, Xiang-yun; Xu, Gao-gui

    2017-11-01

    The geometric shape of a target is an effective characteristic for space target recognition. This paper proposes a method for shape inversion of space targets based on component segmentation of ISAR images. The Radon transform, Hough transform, K-means clustering, and triangulation are introduced into the ISAR image processing. First, the Radon transform and edge detection are used to extract the space target's main body spindle and solar panel spindle from the ISAR image. Then the target's main body, solar panel, and rectangular and circular antennas are segmented from the ISAR image based on image detection theory. Finally, the size of each structural component is computed. The effectiveness of this method is verified using simulation data for typical targets.

  9. Computational Aeroacoustics by the Space-time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.

    2001-01-01

    In recent years, a new numerical methodology for conservation laws, the Space-Time Conservation Element and Solution Element Method (CE/SE), was developed by Dr. Chang of NASA Glenn Research Center and collaborators. In nature, the new method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its rigorous treatment of the fluxes and geometry, it is different from the existing schemes. The CE/SE scheme features: (1) space and time treated on the same footing, the integral equations of conservation laws are solved for with second order accuracy, (2) high resolution, low dispersion and low dissipation, (3) a novel, truly multi-dimensional, simple but effective non-reflecting boundary condition, (4) effortless implementation of computation, no numerical fix or parameter choice is needed, and (5) robust enough to cover a wide spectrum of compressible flow: from weak linear acoustic waves to strong, discontinuous waves (shocks) appropriate for linear and nonlinear aeroacoustics. Currently, the CE/SE scheme has been developed to such a stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen as a prototype and is sketched in Section 2. Then applications of the CE/SE scheme to linear, nonlinear aeroacoustics and airframe noise are depicted in Sections 3, 4, and 5 respectively to demonstrate its robustness and capability.

  10. An empirical method for computing leeside centerline heating on the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Helms, V. T., III

    1981-01-01

    An empirical method is presented for computing top centerline heating on the Space Shuttle Orbiter at simulated reentry conditions. It is shown that the Shuttle's top centerline can be thought of as being under the influence of a swept cylinder flow field. The effective geometry of the flow field, as well as top centerline heating, are directly related to oil-flow patterns on the upper surface of the fuselage. An empirical turbulent swept cylinder heating method was developed based on these considerations. The method takes into account the effects of the vortex-dominated leeside flow field without actually having to compute the detailed properties of such a complex flow. The heating method closely predicts experimental heat-transfer values on the top centerline of a Shuttle model at Mach numbers of 6 and 10 over a wide range in Reynolds number and angle of attack.

  11. Computing Interactions Of Free-Space Radiation With Matter

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.

    1995-01-01

    High Charge and Energy Transport (HZETRN) computer program computationally efficient, user-friendly package of software addressing problem of transport of, and shielding against, radiation in free space. Designed as "black box" for design engineers not concerned with physics of underlying atomic and nuclear radiation processes in free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for design and construction of modules and devices for use in free space. Computational efficiency achieved by unique algorithm based on deterministic approach to solution of Boltzmann equation rather than computationally intensive statistical Monte Carlo method. Written in FORTRAN.

  12. Self-calibrated correlation imaging with k-space variant correlation functions.

    PubMed

    Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J

    2018-03-01

    Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimate of correlation functions. The presented work aims to demonstrate this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging that relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real-time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Higher-order accurate space-time schemes for computational astrophysics—Part I: finite volume methods

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.

    2017-12-01

    As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.

  14. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
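
    As a point of comparison for the least-squares CIELAB approach, the simplest of the compared techniques (the "Photoshop-style" matrix anaglyph, red from the left view, green and blue from the right) is sketched below. The full optimization method needs the display spectra and filter transmittances and is not shown; the random test images are placeholders.

    ```python
    # "Photoshop-style" matrix anaglyph: red channel from the left view, green and
    # blue channels from the right view. Shown only as the simple baseline the
    # paper compares against; test images are placeholders.
    import numpy as np

    def matrix_anaglyph(left_rgb, right_rgb):
        """left_rgb, right_rgb: float arrays in [0, 1] with shape (H, W, 3)."""
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]        # red channel from the left image
        out[..., 1:] = right_rgb[..., 1:]     # green and blue from the right image
        return out

    rng = np.random.default_rng(0)
    left, right = rng.random((4, 4, 3)), rng.random((4, 4, 3))
    print(matrix_anaglyph(left, right).shape)
    ```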

  15. TH-EF-BRA-06: A Novel Retrospective 3D K-Space Sorting 4D-MRI Technique Using a Radial K-Space Acquisition MRI Sequence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Subashi, E; Yin, F

    Purpose: Current retrospective 4D-MRI provides superior tumor-to-tissue contrast and accurate respiratory motion information for radiotherapy motion management. The developed 4D-MRI techniques based on 2D-MRI image sorting require a high frame-rate of the MR sequences. However, several MRI sequences provide excellent image quality but have low frame-rate. This study aims at developing a novel retrospective 3D k-space sorting 4D-MRI technique using radial k-space acquisition MRI sequences to improve 4D-MRI image quality and temporal-resolution for imaging irregular organ/tumor respiratory motion. Methods: The method is based on a RF-spoiled, steady-state, gradient-recalled sequence with minimal echo time. A 3D radial k-space data acquisition trajectory was used for sampling the datasets. Each radial spoke readout data line starts from the 3D center of Field-of-View. Respiratory signal can be extracted from the k-space center data point of each spoke. The spoke data was sorted based on its self-synchronized respiratory signal using phase sorting. Subsequently, 3D reconstruction was conducted to generate the time-resolved 4D-MRI images. As a feasibility study, this technique was implemented on a digital human phantom XCAT. The respiratory motion was controlled by an irregular motion profile. To validate using k-space center data as a respiratory surrogate, we compared it with the XCAT input controlling breathing profile. Tumor motion trajectories measured on reconstructed 4D-MRI were compared to the average input trajectory. The mean absolute amplitude difference (D) was calculated. Results: The signal extracted from k-space center data matches well with the input controlling respiratory profile of XCAT. The relative amplitude error was 8.6% and the relative phase error was 3.5%. XCAT 4D-MRI demonstrated a clear motion pattern with little serrated artifacts. D of tumor trajectories was 0.21mm, 0.23mm and 0.23mm in SI, AP and ML directions, respectively

  16. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using relative k-space distribution with low coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into wavelength domain. The calibration performance of the proposed method was demonstrated with two experimental conditions of four and eight characteristic spectral peaks. The proposed method elicited reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
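
    The zero-crossing step can be sketched directly: because the interferogram is sinusoidal in k, successive zero crossings are separated by a constant wavenumber interval, so their pixel positions sample the pixel-to-relative-k mapping of the spectrometer. The snippet below generates a synthetic interferogram with an assumed nonlinear pixel-to-k mapping, detects the crossings with sub-pixel interpolation, and interpolates a relative k value for every pixel; the absolute-wavenumber assignment from the calibration lamp is not reproduced.

    ```python
    # Zero-crossing extraction of a relative k-space distribution from a synthetic
    # low-coherence interferogram; the pixel-to-k mapping below is an assumption.
    import numpy as np

    pixels = np.arange(2048)
    k_true = 1.0 + 0.5 * (pixels / 2048) + 0.1 * (pixels / 2048) ** 2   # assumed nonlinear mapping
    s = np.cos(2 * np.pi * 40 * k_true)                                 # sinusoid in k, not in pixel

    # zero crossings (sign changes) located with linear sub-pixel interpolation
    idx = np.where(np.sign(s[:-1]) != np.sign(s[1:]))[0]
    crossings = idx + s[idx] / (s[idx] - s[idx + 1])

    # consecutive crossings are one constant Delta-k apart: give them uniformly
    # spaced relative wavenumbers and interpolate to every pixel
    relative_k = np.interp(pixels, crossings, np.arange(len(crossings)))
    print(len(crossings), "crossings; relative k at first pixels:", relative_k[:3])
    ```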

  17. Extracting Communities from Complex Networks by the k-Dense Method

    NASA Astrophysics Data System (ADS)

    Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro

    To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough for uncovering a detailed community structure and produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely blog trackbacks, word associations and Wikipedia references, and demonstrated that the k-dense method could extract communities almost as efficiently as the k-core method, while the qualities of the extracted communities are comparable to those obtained by the k-clique method.
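
    The k-dense method itself is not a standard library routine, but the two baselines it is compared against are easy to reproduce with networkx, as sketched below on a small example graph; the graph and the value of k are illustrative.

    ```python
    # The two baseline extraction methods discussed above (k-core decomposition and
    # k-clique / clique-percolation communities) on a toy graph, using networkx.
    import networkx as nx
    from networkx.algorithms.community import k_clique_communities

    G = nx.karate_club_graph()
    k = 3

    core = nx.k_core(G, k)                        # classical k-core decomposition
    cliques = list(k_clique_communities(G, k))    # k-clique (clique percolation) communities

    print("k-core nodes:", sorted(core.nodes()))
    print("k-clique communities:", [sorted(c) for c in cliques])
    ```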

  18. A novel allosteric mechanism in the cysteine peptidase cathepsin K discovered by computational methods

    NASA Astrophysics Data System (ADS)

    Novinec, Marko; Korenč, Matevž; Caflisch, Amedeo; Ranganathan, Rama; Lenarčič, Brigita; Baici, Antonio

    2014-02-01

    Allosteric modifiers have the potential to fine-tune enzyme activity. Therefore, targeting allosteric sites is gaining increasing recognition as a strategy in drug design. Here we report the use of computational methods for the discovery of the first small-molecule allosteric inhibitor of the collagenolytic cysteine peptidase cathepsin K, a major target for the treatment of osteoporosis. The molecule NSC13345 is identified by high-throughput docking of compound libraries to surface sites on the peptidase that are connected to the active site by an evolutionarily conserved network of residues (protein sector). The crystal structure of the complex shows that NSC13345 binds to a novel allosteric site on cathepsin K. The compound acts as a hyperbolic mixed modifier in the presence of a synthetic substrate, it completely inhibits collagen degradation and has good selectivity for cathepsin K over related enzymes. Altogether, these properties qualify our methodology and NSC13345 as promising candidates for allosteric drug design.

  19. Computational methods and software systems for dynamics and control of large space structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.

    1990-01-01

    Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.

  20. Space Station 20-kHz power management and distribution system

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.; Sundberg, Gale R.

    1986-01-01

    During the conceptual design phase a 20-kHz power distribution system was selected as the reference for the Space Station. The system is single-phase 400 VRMS, with a sinusoidal wave form. The initial user power level will be 75 kW with growth to 300 kW. The high-frequency system selection was based upon considerations of efficiency, weight, safety, ease of control, interface with computers, and ease of paralleling for growth. Each of these aspects will be discussed as well as the associated trade-offs involved. An advanced development program has been instituted to accelerate the maturation of the high-frequency system. Some technical aspects of the advanced development will be discussed.

  1. Space station 20-kHz power management and distribution system

    NASA Technical Reports Server (NTRS)

    Hansen, I. G.; Sundberg, G. R.

    1986-01-01

    During the conceptual design phase a 20-kHz power distribution system was selected as the reference for the space station. The system is single-phase 400 VRMS, with a sinusoidal wave form. The initial user power level will be 75 kW with growth to 300 kW. The high-frequency system selection was based upon considerations of efficiency, weight, safety, ease of control, interface with computers, and ease of paralleling for growth. Each of these aspects will be discussed as well as the associated trade-offs involved. An advanced development program has been instituted to accelerate the maturation of the high-frequency system. Some technical aspects of the advanced development will be discussed.

  2. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are the two main sources of degradation in radial image quality. For a given fixed number of k-space projections, the data distributions along the radial and angular directions will influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory was proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts the projections alternately along the k-space center, which separates the k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. These two trajectories were also compared at different sampling frequencies. An SP trajectory has the same aliasing character when half the sampling frequency (or half the data) is used for reconstruction. SNR comparisons with different white noise levels show that these two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a way toward undersampled reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.
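
    A short sketch of the idea, with the shift amount and sampling parameters chosen arbitrarily for illustration (the paper's actual shift scheme may differ): every other spoke of a radial trajectory is displaced along its own direction, so that samples from neighbouring projections no longer coincide radially.

      import numpy as np

      def radial_trajectory(n_spokes, n_samples, shake=0.0):
          """Return complex k-space sample positions for a radial acquisition.

          shake = 0 gives the conventional trajectory; a non-zero value shifts
          every other spoke along its own direction, separating samples as in
          the shaking-projection (SP) idea (shift value is an assumption)."""
          k = np.linspace(-0.5, 0.5, n_samples)        # normalized radial coordinate
          traj = np.zeros((n_spokes, n_samples), dtype=complex)
          for i in range(n_spokes):
              theta = np.pi * i / n_spokes             # uniform angular coverage
              shift = shake if i % 2 else -shake       # alternate the shift per spoke
              traj[i] = (k + shift) * np.exp(1j * theta)
          return traj

      conventional = radial_trajectory(64, 256)
      shaking = radial_trajectory(64, 256, shake=0.5 / 256)   # one sample spacing (assumed)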

  3. A computational method for detecting copy number variations using scale-space filtering

    PubMed Central

    2013-01-01

    Background As next-generation sequencing technology made rapid and cost-effective sequencing available, the importance of computational approaches in finding and analyzing copy number variations (CNVs) has been amplified. Furthermore, most genome projects need to accurately analyze sequences with fairly low-coverage read data. A method is urgently needed to detect the exact types and locations of CNVs from low-coverage read data. Results Here, we propose a new CNV detection method, CNV_SS, which uses scale-space filtering. The scale-space filtering is carried out by convolving the read coverage data with Gaussian kernels of various scales according to a given scaling parameter. Next, inflection points of the scale-space-filtered read coverage data are calculated per scale by differentiating twice and finding zero-crossing points. Then, the types and the exact locations of CNVs are obtained by analyzing the fingerprint map, the contours of zero-crossing points for various scales. Conclusions The performance of CNV_SS showed that FNR and FPR stay in the range of 1.27% to 2.43% and 1.14% to 2.44%, respectively, even at a relatively low coverage (0.5x ≤ C ≤ 2x). CNV_SS also gave much more effective results than the conventional methods in the evaluation of FNR, by at least 3.82% and at most 76.97%, even when the coverage level of the read data is low. CNV_SS source code is freely available from http://dblab.hallym.ac.kr/CNV SS/. PMID:23418726
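
    A minimal sketch of the scale-space step described above, using scipy's Gaussian filtering on a synthetic coverage profile; the segment lengths, coverage levels and scale list are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      # Toy read-coverage profile with one duplicated segment (coverage step up).
      coverage = np.r_[np.full(500, 30.0), np.full(200, 60.0), np.full(500, 30.0)]
      coverage += np.random.default_rng(0).normal(0, 3, coverage.size)

      # Scale-space filtering: Gaussian smoothing at a range of scales, then the
      # second derivative; zero crossings of the second derivative mark inflection
      # points, whose contours across scales form the fingerprint map.
      scales = [2, 4, 8, 16, 32, 64]
      fingerprint = {}
      for s in scales:
          second_deriv = gaussian_filter1d(coverage, sigma=s, order=2)
          zero_crossings = np.where(np.diff(np.sign(second_deriv)) != 0)[0]
          fingerprint[s] = zero_crossings

      # Candidate CNV breakpoints are zero crossings that persist over many scales.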

  4. K-space reconstruction with anisotropic kernel support (KARAOKE) for ultrafast partially parallel imaging

    PubMed Central

    Miao, Jun; Wong, Wilbur C. K.; Narayan, Sreenath; Wilson, David L.

    2011-01-01

    Purpose: Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (NC). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI faster than R = NC. Methods: K-space data demonstrate an anisotropic pattern that is correlated with the object itself and with the asymmetry of the coil sensitivity profile, which is caused by coil placement and B1 inhomogeneity. From spatial analysis theory, reconstruction of such a pattern is best achieved by a signal-dependent anisotropic shape kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. Results: An MR phantom experiment showed that k-space anisotropy increased as a function of magnetic field strength. The authors tested a K-spAce Reconstruction with AnisOtropic KErnel support (“KARAOKE”) algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at a high R near or exceeding NC. KARAOKE performed comparably to GRAPPA at low Rs. Conclusions: As a rule of thumb, KARAOKE reconstruction should always be used for higher quality k-space reconstruction, particularly when PPI data are acquired at high Rs and/or high field strength. PMID:22047378
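
    A rough sketch of the kernel-support idea: a bivariate Gaussian is matched to a patch of local k-space signal magnitude (here by simple moment matching rather than a nonlinear fit) and thresholded to define an anisotropic kernel support. The patch and threshold value are illustrative assumptions.

      import numpy as np

      def anisotropic_kernel_support(local_kspace_mag, threshold=0.5):
          """Derive an anisotropic kernel-support mask from local k-space magnitude:
          match a bivariate Gaussian to the magnitude via its second moments and
          keep the elements where the fitted Gaussian exceeds a threshold."""
          ny, nx = local_kspace_mag.shape
          y, x = np.mgrid[0:ny, 0:nx]
          w = local_kspace_mag / local_kspace_mag.sum()
          mx, my = (w * x).sum(), (w * y).sum()                 # weighted means
          cxx = (w * (x - mx) ** 2).sum()
          cyy = (w * (y - my) ** 2).sum()
          cxy = (w * (x - mx) * (y - my)).sum()
          cov = np.array([[cxx, cxy], [cxy, cyy]])              # weighted covariance
          inv = np.linalg.inv(cov)
          d = np.stack([x - mx, y - my], axis=-1)
          gauss = np.exp(-0.5 * np.einsum('...i,ij,...j', d, inv, d))
          return gauss >= threshold                             # boolean kernel mask

      # Example: an elongated (anisotropic) magnitude patch yields an elongated support.
      patch = np.outer(np.hanning(9), np.hanning(15))
      support = anisotropic_kernel_support(patch)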

  5. Image Reconstruction from Highly Undersampled (k, t)-Space Data with Joint Partial Separability and Sparsity Constraints

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Christodoulou, Anthony G.; Liang, Zhi-Pei

    2012-01-01

    Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. PMID:22695345

  6. The computational complexity of elliptic curve integer sub-decomposition (ISD) method

    NASA Astrophysics Data System (ADS)

    Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza

    2014-07-01

    The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n, using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of the values k1 and k2, which are not bounded by ±C√n, gives new integers k11, k12, k21 and k22 that are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases with the ISD method, which improves the computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, which is determined by computing the cost of the operations involved. These operations include elliptic curve operations and finite field operations.

  7. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    DTIC Science & Technology

    2015-09-13

    prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use...With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now...play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main

  8. Computational methods in the exploration of the classical and statistical mechanics of celestial scale strings: Rotating Space Elevators

    NASA Astrophysics Data System (ADS)

    Knudsen, Steven; Golubovic, Leonardo

    2015-04-01

    With the advent of ultra-strong materials, the Space Elevator has changed from science fiction to real science. We discuss computational and theoretical methods we developed to explore the classical and statistical mechanics of Rotating Space Elevators (RSE). An RSE is a loopy string reaching deep into outer space. The floppy RSE loop executes a motion which is nearly a superposition of two rotations: a geosynchronous rotation around the Earth, and a faster rotation of the string about a line perpendicular to the Earth's surface at the equator. Strikingly, objects sliding along the RSE loop spontaneously oscillate between two turning points, one of which is close to the Earth (the starting point) whereas the other is deep in outer space. The RSE concept thus solves a major problem in space elevator science, which is how to supply energy to the climbers moving along space elevator strings. The exploration of the dynamics of a floppy string interacting with objects sliding along it has required the development of novel finite element algorithms described in this presentation. We thank Prof. Duncan Lorimer of WVU for kindly providing us access to his computational facility.

  9. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems, producing better results faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  10. K-space reconstruction with anisotropic kernel support (KARAOKE) for ultrafast partially parallel imaging.

    PubMed

    Miao, Jun; Wong, Wilbur C K; Narayan, Sreenath; Wilson, David L

    2011-11-01

    Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (N(C)). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI faster than R = N(C). K-space data demonstrate an anisotropic pattern that is correlated with the object itself and with the asymmetry of the coil sensitivity profile, which is caused by coil placement and B(1) inhomogeneity. From spatial analysis theory, reconstruction of such a pattern is best achieved by a signal-dependent anisotropic shape kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. An MR phantom experiment showed that k-space anisotropy increased as a function of magnetic field strength. The authors tested a K-spAce Reconstruction with AnisOtropic KErnel support ("KARAOKE") algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at a high R near or exceeding N(C). KARAOKE performed comparably to GRAPPA at low Rs. As a rule of thumb, KARAOKE reconstruction should always be used for higher quality k-space reconstruction, particularly when PPI data are acquired at high Rs and/or high field strength.

  11. 3-D Electromagnetic field analysis of wireless power transfer system using K computer

    NASA Astrophysics Data System (ADS)

    Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi

    2018-05-01

    We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. It is shown that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.

  12. Computation Methods,

    DTIC Science & Technology

    1980-03-01

    [Garbled program listing and keystroke excerpt.] The report covers computation methods implemented on the TI-59 Programmable Calculator, including a gamma-function routine (the TI-59 method, ML-09); a worked keystroke example evaluates Γ(6.3) = 201.8132752.

  13. Space Station UCS antenna pattern computation and measurement. [UHF Communication Subsystem

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Lu, Ba P.; Johnson, Larry A.; Fournet, Jon S.; Panneton, Robert J.; Ngo, John D.; Eggers, Donald S.; Arndt, G. D.

    1993-01-01

    The purpose of this paper is to analyze the interference to the Space Station Ultrahigh Frequency (UHF) Communication Subsystem (UCS) antenna radiation pattern due to its environment, the Space Station. A hybrid Computational Electromagnetics (CEM) technique was applied in this study. The antenna was modeled using the Method of Moments (MOM) and the radiation patterns were computed using the Uniform Geometrical Theory of Diffraction (GTD), in which the effects of the reflected and diffracted fields from surfaces, edges, and vertices of the Space Station structures were included. In order to validate the CEM techniques, and to provide confidence in the computer-generated results, a comparison with experimental measurements was made for a 1/15 scale Space Station mockup. Good agreement between experimental and computed results was obtained. The computed results using the CEM techniques for the Space Station UCS antenna pattern predictions have thus been validated.

  14. En Route Spacing System and Method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations via establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  15. En route spacing system and method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations via establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  16. Space Mission : Y3K

    NASA Astrophysics Data System (ADS)

    2001-01-01

    ESA and the APME are hosting a contest for 10 - 15 year olds in nine European countries (Austria, Belgium, France, Germany, Italy, the Netherlands, Spain, Sweden and the United Kingdom). The contest is based on an interactive CD ROM, called Space Mission: Y3K, which explores space technology and shows some concrete uses of that technology in enhancing the quality of life on Earth. The CD ROM invites kids to join animated character Space Ranger Pete on an action-packed, colourful journey through space. Space Ranger Pete begins on Earth: the user navigates around a 'locker room' to learn about synthetic materials used in rocket boosters, heat shields, space suits and helmets, and how these materials have now become indispensable to everyday life. From Earth he flies into space and the user follows him from the control room in the spacecraft to a planet, satellites and finally to the International Space Station. Along the way, the user jots down clues that he or she discovers in this exploration, designing an imaginary space community and putting together a submission for the contest. The lucky winners will spend a weekend training as "junior astronauts" at the European Space Centre in Belgium (20-22 April 2001). They will be put through their astronaut paces, learning the art of space walking, running their own space mission, piloting a space capsule and re-entering the Earth's atmosphere. The competition features in various youth media channels across Europe. In the UK, popular BBC Saturday morning TV show, Live & Kicking, will be launching the competition and will invite viewers to submit their space community designs to win a weekend at ESC. In Germany, high circulation children's magazine Geolino will feature the competition in the January issue and on their internet site. And youth magazine ZoZitDat will feature the competition in the Netherlands throughout February. Space Mission: Y3K is part of an on-going partnership between the ESA's Technology Transfer

  17. Computational Physics for Space Flight Applications

    NASA Technical Reports Server (NTRS)

    Reed, Robert A.

    2004-01-01

    This paper presents viewgraphs on computational physics for space flight applications. The topics include: 1) Introduction to space radiation effects in microelectronics; 2) Using applied physics to help NASA meet mission objectives; 3) Example of applied computational physics; and 4) Future directions in applied computational physics.

  18. The use of computer models to predict temperature and smoke movement in high bay spaces

    NASA Technical Reports Server (NTRS)

    Notarianni, Kathy A.; Davis, William D.

    1993-01-01

    The Building and Fire Research Laboratory (BFRL) was given the opportunity to make measurements during fire calibration tests of the heat detection system in an aircraft hangar with a nominal 30.4 m (100 ft) ceiling height near Dallas, TX. Fire gas temperatures resulting from an approximately 8250 kW isopropyl alcohol pool fire were measured above the fire and along the ceiling. The results of the experiments were then compared to predictions from the computer fire models DETACT-QS, FPETOOL, and LAVENT. In section A of the analysis conducted, DETACT-QS and FPETOOL significantly underpredicted the gas temperature. LAVENT, at the position below the ceiling corresponding to maximum temperature and velocity, provided better agreement with the data. For large spaces, hot gas transport time and an improved fire plume dynamics model should be incorporated into the computer fire model activation routines. A computational fluid dynamics (CFD) model, HARWELL FLOW3D, was then used to model the hot gas movement in the space. Reasonable agreement was found between the temperatures predicted from the CFD calculations and the temperatures measured in the aircraft hangar. In section B, an existing NASA high bay space was modeled using the CFD model. The NASA space was a clean room, 27.4 m (90 ft) high with forced horizontal laminar flow. The purpose of this analysis was to determine how the existing fire detection devices would respond to various size fires in the space. The analysis was conducted for 32 MW, 400 kW, and 40 kW fires.

  19. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention, although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical recognition algorithms performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.

  20. The effects of navigator distortion and noise level on interleaved EPI DWI reconstruction: a comparison between image- and k-space-based method.

    PubMed

    Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua

    2018-03-23

    To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using the image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm, under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels than the original IRIS algorithm. SPIRiT may be hindered by low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion or noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.

  1. Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation

    NASA Technical Reports Server (NTRS)

    Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred

    2008-01-01

    Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.

  2. A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2018-04-01

    We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N², with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N² scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.

  3. Bennett's acceptance ratio and histogram analysis methods enhanced by umbrella sampling along a reaction coordinate in configurational space.

    PubMed

    Kim, Ilsoo; Allen, Toby W

    2012-04-28

    Free energy perturbation, a method for computing the free energy difference between two states, is often combined with non-Boltzmann biased sampling techniques in order to accelerate the convergence of free energy calculations. Here we present a new extension of the Bennett acceptance ratio (BAR) method by combining it with umbrella sampling (US) along a reaction coordinate in configurational space. In this approach, which we call Bennett acceptance ratio with umbrella sampling (BAR-US), the conditional histogram of energy difference (a mapping of the 3N-dimensional configurational space via a reaction coordinate onto 1D energy difference space) is weighted for marginalization with the associated population density along a reaction coordinate computed by US. This procedure produces marginal histograms of energy difference, from forward and backward simulations, with higher overlap in energy difference space, rendering free energy difference estimations using BAR statistically more reliable. In addition to BAR-US, two histogram analysis methods, termed Bennett overlapping histograms with US (BOH-US) and Bennett-Hummer (linear) least square with US (BHLS-US), are employed as consistency and convergence checks for free energy difference estimation by BAR-US. The proposed methods (BAR-US, BOH-US, and BHLS-US) are applied to a 1-dimensional asymmetric model potential, as has been used previously to test free energy calculations from non-equilibrium processes. We then consider the more stringent test of a 1-dimensional strongly (but linearly) shifted harmonic oscillator, which exhibits no overlap between two states when sampled using unbiased Brownian dynamics. We find that the efficiency of the proposed methods is enhanced over the original Bennett's methods (BAR, BOH, and BHLS) through fast uniform sampling of energy difference space via US in configurational space. We apply the proposed methods to the calculation of the electrostatic contribution to the absolute
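
    For reference, a minimal sketch of the standard Bennett acceptance ratio self-consistency condition (reduced units, equal numbers of forward and reverse samples); the umbrella-sampling reweighting that defines BAR-US is omitted, and the Gaussian work distributions below are a synthetic test case, not data from the paper.

      import numpy as np
      from scipy.optimize import brentq

      def bar_free_energy(w_forward, w_reverse):
          """Standard BAR estimate in reduced units for equal sample sizes:
          solve sum_F f(w_F - dF) = sum_R f(w_R + dF) with the Fermi function f."""
          fermi = lambda x: 1.0 / (1.0 + np.exp(x))
          g = lambda dF: fermi(w_forward - dF).sum() - fermi(w_reverse + dF).sum()
          lo = min(w_forward.min(), -w_reverse.max()) - 50.0
          hi = max(w_forward.max(), -w_reverse.min()) + 50.0
          return brentq(g, lo, hi)

      # Gaussian work distributions consistent with a true free energy difference of 2.0
      # (dissipation sigma^2/2 = 0.5 added to both directions, per the Crooks relation).
      rng = np.random.default_rng(1)
      w_f = rng.normal(2.0 + 0.5, 1.0, 5000)     # forward works
      w_r = rng.normal(-2.0 + 0.5, 1.0, 5000)    # reverse works
      print(bar_free_energy(w_f, w_r))           # ~2.0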

  4. k-Space Image Correlation Spectroscopy: A Method for Accurate Transport Measurements Independent of Fluorophore Photophysics

    PubMed Central

    Kolin, David L.; Ronis, David; Wiseman, Paul W.

    2006-01-01

    We present the theory and application of reciprocal space image correlation spectroscopy (kICS). This technique measures the number density, diffusion coefficient, and velocity of fluorescently labeled macromolecules in a cell membrane imaged on a confocal, two-photon, or total internal reflection fluorescence microscope. In contrast to r-space correlation techniques, we show kICS can recover accurate dynamics even in the presence of complex fluorophore photobleaching and/or “blinking”. Furthermore, these quantities can be calculated without nonlinear curve fitting, or any knowledge of the beam radius of the exciting laser. The number densities calculated by kICS are less sensitive to spatial inhomogeneity of the fluorophore distribution than densities measured using image correlation spectroscopy. We use simulations as a proof-of-principle to show that number densities and transport coefficients can be extracted using this technique. We present calibration measurements with fluorescent microspheres imaged on a confocal microscope, which recover Stokes-Einstein diffusion coefficients, and flow velocities that agree with single particle tracking measurements. We also show the application of kICS to measurements of the transport dynamics of α5-integrin/enhanced green fluorescent protein constructs in a transfected CHO cell imaged on a total internal reflection fluorescence microscope using charge-coupled device area detection. PMID:16861272
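
    A compact sketch of the kICS computation on an image series; normalization and fitting details of the actual method are omitted, and the array shapes and the random test stack are assumptions.

      import numpy as np

      def kics_correlation(stack):
          """k-space image correlation: Fourier transform each frame and build the
          time-correlation function of the k-space amplitudes.
          `stack` is a (T, Ny, Nx) image series."""
          T = stack.shape[0]
          ik = np.fft.fft2(stack, axes=(-2, -1))          # I(k, t) for every frame
          corr = np.zeros_like(ik, dtype=float)
          for tau in range(T):
              prod = ik[: T - tau] * np.conj(ik[tau:])    # I(k,t) * conj(I(k,t+tau))
              corr[tau] = prod.mean(axis=0).real          # time average at lag tau
          return corr                                     # r(k, tau)

      # For pure diffusion, r(k, tau) decays roughly as exp(-|k|^2 D tau), so fitting
      # ln r against |k|^2 at each lag yields D; flow adds a phase/oscillation term.
      stack = np.random.default_rng(0).random((32, 64, 64))
      r = kics_correlation(stack)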

  5. Computing Normal Shock-Isotropic Turbulence Interaction With Tetrahedral Meshes and the Space-Time CESE Method

    NASA Astrophysics Data System (ADS)

    Venkatachari, Balaji Shankar; Chang, Chau-Lyan

    2016-11-01

    The focus of this study is scale-resolving simulations of the canonical normal shock-isotropic turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and their potential benefits of ease of mesh generation around complex geometries and mesh adaptation, direct numerical or large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken shock regimes) will be investigated along with a study on how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by the Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).

  6. FSH: fast spaced seed hashing exploiting adjacent hashes.

    PubMed

    Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia

    2018-01-01

    Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be rapidly computed by exploiting the large overlap between consecutive k-mers, spaced-seed hashing is usually computed from scratch for each position in the input sequence, thus resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomics reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced-seed hashes. In the experiments, our algorithm computes the hash values of spaced seeds with a speedup, with respect to the traditional approach, of between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced-seed hashing is a routine task in several bioinformatics applications. FSH performs this task efficiently and raises the question of whether other hash computations can be exploited to further improve the speedup. This has the potential for major impact in the field, making spaced-seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
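
    A naive reference implementation of spaced-seed hashing for comparison (the 2-bit encoding and the example seed are illustrative; FSH's actual speedup comes from reusing the hash computed at the adjacent position, which this sketch deliberately does not do).

      ENCODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

      def spaced_seed_hash(read, seed):
          """Naive spaced-seed hashing: for every position of `read`, pack the bases
          sitting under the '1' positions of `seed` into a 2-bit-per-base integer."""
          span = len(seed)
          care = [i for i, c in enumerate(seed) if c == '1']
          hashes = []
          for pos in range(len(read) - span + 1):
              h = 0
              for i in care:
                  h = (h << 2) | ENCODE[read[pos + i]]
              hashes.append(h)
          return hashes

      # Example: the spaced seed '1101011' indexes only the '1' positions.
      print(spaced_seed_hash("ACGTACGTAC", "1101011"))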

  7. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.

  8. Multiple summing operators on C(K) spaces

    NASA Astrophysics Data System (ADS)

    Pérez-García, David; Villanueva, Ignacio

    2004-04-01

    In this paper, we characterize, for 1 ≤ p < ∞, the multiple (p, 1)-summing multilinear operators on the product of C(K) spaces in terms of their representing polymeasures. As consequences, we obtain a new characterization of (p, 1)-summing linear operators on C(K) in terms of their representing measures and a new multilinear characterization of L∞ spaces. We also solve a problem stated by M. S. Ramanujan and E. Schock, improve a result of H. P. Rosenthal and S. J. Szarek, and give new results about polymeasures.

  9. A first-order k-space model for elastic wave propagation in heterogeneous media.

    PubMed

    Firouzi, K; Cox, B T; Treeby, B E; Saffari, N

    2012-09-01

    A pseudospectral model of linear elastic wave propagation is described based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure the solution is exact for homogeneous wave propagation for timesteps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k-space, and the larger time steps made possible by the k-space adjustments.
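
    As a one-dimensional scalar illustration of the k-space adjustment mentioned above (the paper treats the full 3-D elastic stress-velocity system), the spectral gradient can be multiplied by a sinc correction so that time stepping is exact for a homogeneous medium with reference sound speed c_ref; grid spacing, time step and reference speed below are assumptions.

      import numpy as np

      def kspace_gradient(f, dx, dt, c_ref):
          """Spectral gradient with the k-space correction sinc(c_ref*k*dt/2),
          which removes the time-stepping error of the pseudospectral scheme in a
          homogeneous medium (1-D scalar sketch)."""
          n = f.size
          k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
          # np.sinc(x) = sin(pi*x)/(pi*x), so pass c_ref*k*dt/(2*pi) to get sin(c k dt/2)/(c k dt/2).
          kappa = np.sinc(c_ref * k * dt / (2 * np.pi))
          return np.real(np.fft.ifft(1j * k * kappa * np.fft.fft(f)))

      # Example usage on a smooth test field.
      x = np.linspace(0, 1, 256, endpoint=False)
      df = kspace_gradient(np.sin(2 * np.pi * x), dx=x[1] - x[0], dt=1e-4, c_ref=1500.0)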

  10. Computer Analysis of Electromagnetic Field Exposure Hazard for Space Station Astronauts during Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Kelley, James S.; Panneton, Robert B.; Arndt, G. Dickey

    1995-01-01

    In order to estimate the RF radiation hazards to astronauts and electronics equipment due to various Space Station transmitters, the electric fields around the various Space Station antennas were computed using rigorous Computational Electromagnetics (CEM) techniques. The Method of Moments (MoM) was applied to the UHF and S-band low gain antennas. The Aperture Integration (AI) method and the Geometrical Theory of Diffraction (GTD) method were used to compute the electric field intensities for the S- and Ku-band high gain antennas. As a result of this study, the regions in which the electric fields exceed the specified exposure levels for the Extravehicular Mobility Unit (EMU) electronics equipment and the Extravehicular Activity (EVA) astronaut are identified for the various Space Station transmitters.

  11. SuperComputers for Space Applications

    DTIC Science & Technology

    2005-07-13

    Also ADM001791, Potentially Disruptive Technologies and Their Impact in Space Programs, held in Marseille, France on 4-6 July 2005. The original... "Performance Embedded Computing will allow Ambitious Space Science Investigation", Proc. First Symp. on Potentially Disruptive Technologies and Their Impact in Space Programs, 2005.

  12. Computational Modeling of Space Physiology

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP has developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the models.

  13. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
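
    The abstract does not give the closed-form expressions, but the flavour of a state-space computation can be illustrated as follows: for a linear model driven by white noise, a steady-state covariance follows from a Lyapunov equation rather than a frequency-domain integral. The two-state model, noise intensity and the use of the total (long-exposure) variance are illustrative assumptions; the paper's jitter metric is the windowed (Sirlin-type) definition, which differs from this quantity.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      # Illustrative 2-state pointing-error model (not the paper's model):
      # x = [angle, rate], a damped oscillator driven by white torque noise.
      wn, zeta, q = 2.0 * np.pi * 0.5, 0.02, 1e-6
      A = np.array([[0.0, 1.0],
                    [-wn**2, -2.0 * zeta * wn]])
      B = np.array([[0.0], [1.0]])
      Q = q * B @ B.T                                  # process-noise intensity

      # Steady-state covariance P solves A P + P A^T + Q = 0 (no numerical integration).
      P = solve_continuous_lyapunov(A, -Q)
      rms_pointing = np.sqrt(P[0, 0])                  # rms angle, long-exposure limit
      print(rms_pointing)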

  14. Aviation and Space Curriculum Guide K-3

    DOT National Transportation Integrated Search

    1992-01-01

    The Alabama Aerospace Curriculum Guide is designed for teachers of grades K-3 who have little or no experience in the area of aviation or space. The purpose of this guide is to provide an array of aviation and space activities which may be used by te...

  15. SU-D-18C-01: A Novel 4D-MRI Technology Based On K-Space Retrospective Sorting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Yin, F; Cai, J

    2014-06-01

    Purpose: Current 4D-MRI techniques lack sufficient temporal/spatial resolution and consistent tumor contrast. To overcome these limitations, this study presents the development and initial evaluation of an entirely new framework of 4D-MRI based on k-space retrospective sorting. Methods: An important challenge of the proposed technique is to determine the number of repeated scans (NR) required to obtain sufficient k-space data for 4D-MRI. To do that, simulations using 29 cancer patients' respiratory profiles were performed to derive the relationship between data acquisition completeness (Cp) and NR, as well as the relationship between NR (Cp = 95%) and the following factors: total slices (NS), respiratory phase bin length (Lb), frame rate (fr), resolution (R) and image acquisition starting phase (P0). To evaluate our technique, a computer simulation study on a 4D digital human phantom (XCAT) was conducted with regular breathing (fr = 0.5 Hz; R = 256×256). A 2D echo planar imaging (EPI) MRI sequence was assumed to acquire the raw k-space data, with the respiratory signal and acquisition time for each k-space data line recorded simultaneously. K-space data were re-sorted based on respiratory phases. To evaluate 4D-MRI image quality, tumor trajectories were measured and compared with the input signal. The mean relative amplitude difference (D) and cross-correlation coefficient (CC) were calculated. Finally, a phase-sharing sliding window technique was applied to investigate the feasibility of generating ultra-fast 4D-MRI. Results: Cp increased with NR (Cp = 100*[1-exp(-0.19*NR)]) when NS = 30 and Lb = 100%/6. NR (Cp = 95%) was inversely proportional to Lb (r = 0.97), but independent of the other factors. 4D-MRI on XCAT demonstrated highly accurate motion information (D = 0.67%, CC = 0.996) with much fewer artifacts than image-based sorting 4D-MRI. Ultra-fast 4D-MRI with an apparent temporal resolution of 10 frames/second was reconstructed using the phase-sharing sliding window technique. Conclusions: A novel 4D

  16. Advanced nodal neutron diffusion method with space-dependent cross sections: ILLICO-VX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajic, H.L.; Ougouag, A.M.

    1987-01-01

    Advanced transverse integrated nodal methods for neutron diffusion developed since the 1970s require that node- or assembly-homogenized cross sections be known. The underlying structural heterogeneity can be accurately accounted for in homogenization procedures by the use of heterogeneity or discontinuity factors. Other (milder) types of heterogeneity, burnup-induced or due to thermal-hydraulic feedback, can be resolved by explicitly accounting for the spatial variations of material properties. This can be done during the nodal computations via nonlinear iterations. The new method has been implemented in the code ILLICO-VX (ILLICO variable cross-section method). Numerous numerical tests were performed. As expected, the convergence rate of ILLICO-VX is lower than that of ILLICO, requiring approx. 30% more outer iterations per k_eff computation. The methodology has also been implemented as the NOMAD-VX option of the NOMAD, multicycle, multigroup, two- and three-dimensional nodal diffusion depletion code. The burnup-induced heterogeneities (space dependence of cross sections) are calculated during the burnup steps.

  17. A 100-kWt NaK-Cooled Space Reactor Concept for an Early-Flight Mission

    NASA Astrophysics Data System (ADS)

    Poston, David I.

    2003-01-01

    A stainless-steel (SS) sodium-potassium (NaK) cooled reactor could potentially be the first step in utilizing fission technology in space. The sum of all system-level experience for liquid-metal-cooled space reactors has been with NaK, including the SNAP-10a, the only reactor ever launched by the US. This paper describes a 100-kWt NaK reactor, the NaK-100, which is designed to be developed with minimal technical risk. In addition to NaK technology heritage, the NaK-100 uses a proven fuel form (SS/UO2) and is designed for simplified system integration and testing. The pins are placed within a solid SS prism, and the NaK flows in an annulus between the pins and the prism. The nuclear and thermal-hydraulic performance of the NaK-100 is presented, as well as the major differences between the NaK-100 and SNAP-10a.

  18. Simulations of iron K pre-edge X-ray absorption spectra using the restricted active space method.

    PubMed

    Guo, Meiyuan; Sørensen, Lasse Kragh; Delcey, Mickaël G; Pinjari, Rahul V; Lundberg, Marcus

    2016-01-28

    The intensities and relative energies of metal K pre-edge features are sensitive to both geometric and electronic structures. With the possibility to collect high-resolution spectral data it is important to find theoretical methods that include all important spectral effects: ligand-field splitting, multiplet structures, 3d-4p orbital hybridization, and charge-transfer excitations. Here the restricted active space (RAS) method is used for the first time to calculate metal K pre-edge spectra of open-shell systems, and its performance is tested on six iron complexes: [FeCl6](n-), [FeCl4](n-), and [Fe(CN)6](n-) in ferrous and ferric oxidation states. The method gives good descriptions of the spectral shapes for all six systems. The mean absolute deviation for the relative energies of different peaks is only 0.1 eV. For the two systems that lack centrosymmetry, [FeCl4](2-/1-), the ratios between dipole and quadrupole intensity contributions are reproduced with an error of 10%, which leads to good descriptions of the integrated pre-edge intensities. To gain further chemical insight, the origins of the pre-edge features have been analyzed with a chemically intuitive molecular orbital picture that serves as a bridge between the spectra and the electronic structures. The pre-edges contain information about both ligand-field strengths and orbital covalencies, which can be understood by analyzing the RAS wavefunction. The RAS method can thus be used to predict and rationalize the effects of changes in both the oxidation state and ligand environment in a number of hard X-ray studies of small and medium-sized molecular systems.

  19. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the well-known global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in large rivers worldwide. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and therefore needs huge computing resources that may not be steadily available to researchers, or only at high expense; this seriously restricts research and application. To solve this problem, current parallel methods mostly parallelize the computation in the space and time dimensions: based on a distributed hydrological model, they process the natural features (units or sub-basins) grid by grid, in order from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, meaning it can make full use of the available computing and storage resources when resources are limited, and its computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.

  20. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware that is required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it doesn't closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any gray-scale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue color component makes its segmentation difficult. For example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the gray-scale K-means algorithm to segment 24-bit color images. Additionally, this paper will show the importance of the hue
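
    A brief sketch of HSI-based K-means segmentation. The RGB-to-HSI conversion follows the common textbook formulas; embedding hue as (cos h, sin h) is one simple way to respect its circular nature and is not necessarily the paper's own handling of the modulo-2π problem; the random image and cluster count are placeholders.

      import numpy as np
      from sklearn.cluster import KMeans

      def rgb_to_hsi(img):
          """Convert an RGB image with values in [0, 1] to hue, saturation, intensity."""
          r, g, b = img[..., 0], img[..., 1], img[..., 2]
          i = (r + g + b) / 3.0
          s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-9)
          num = 0.5 * ((r - g) + (r - b))
          den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
          theta = np.arccos(np.clip(num / den, -1.0, 1.0))
          h = np.where(b <= g, theta, 2 * np.pi - theta)
          return h, s, i

      # Cluster on (cos h, sin h, intensity), with hue weighted by saturation so that
      # nearly achromatic pixels do not contribute an arbitrary hue.
      rng = np.random.default_rng(0)
      img = rng.random((64, 64, 3))
      h, s, i = rgb_to_hsi(img)
      features = np.stack([np.cos(h) * s, np.sin(h) * s, i], axis=-1).reshape(-1, 3)
      labels = KMeans(n_clusters=4, n_init=10).fit_predict(features).reshape(64, 64)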

  1. A sub-space greedy search method for efficient Bayesian Network inference.

    PubMed

    Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing

    2011-09-01

    Bayesian network (BN) has been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is the computational cost, because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian network inference. In particular, this method limits the greedy search space by only selecting gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieved results comparable to the standard greedy search method while saving ∼50% of the computational time. We believe that the sub-space search method can be widely used for efficient BN inference in systems biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
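
    A small sketch of the search-space restriction step: partial correlation coefficients are obtained from the precision (inverse covariance) matrix and only the most strongly related pairs are kept as candidate edges for the greedy search. The thresholding rule, function name and data shapes are illustrative assumptions.

      import numpy as np

      def candidate_pairs(data, top_fraction=0.1):
          """Restrict the greedy BN search space to gene pairs with high partial
          correlation; `data` has shape (samples, genes)."""
          prec = np.linalg.pinv(np.cov(data, rowvar=False))
          d = np.sqrt(np.diag(prec))
          pcorr = -prec / np.outer(d, d)               # partial correlation matrix
          np.fill_diagonal(pcorr, 0.0)
          iu = np.triu_indices(pcorr.shape[0], k=1)
          strengths = np.abs(pcorr[iu])
          keep = strengths >= np.quantile(strengths, 1.0 - top_fraction)
          return list(zip(iu[0][keep], iu[1][keep]))   # pairs allowed as candidate edges

      pairs = candidate_pairs(np.random.default_rng(0).normal(size=(200, 50)))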

  2. GPU computing of compressible flow problems by a meshless method with space-filling curves

    NASA Astrophysics Data System (ADS)

    Ma, Z. H.; Wang, H.; Pu, S. H.

    2014-04-01

    A graphic processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to efficiently and flexibly port the meshless solver from CPU to GPU. Considering the data locality of randomly distributed points, space-filling curves are adopted to re-number the points in order to improve the memory performance. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. Then the GPU-accelerated flow solver is used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with experimental, finite-volume or other available reference solutions. Performance analysis reveals that the running time cost of simulations is significantly reduced while impressive (more than an order of magnitude) speedups are achieved.
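
    An illustrative sketch of point re-numbering along a space-filling curve; a Morton/Z-order curve is used here as an assumption, since the paper does not state which curve was adopted.

      import numpy as np

      def morton_key(ix, iy, bits=16):
          """Interleave the bits of integer cell coordinates (ix, iy) to obtain a
          Z-order (Morton) key; sorting by this key groups spatially adjacent points."""
          key = 0
          for b in range(bits):
              key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
          return key

      # Re-number randomly distributed points along the curve so that neighbouring
      # points end up close in memory, improving cache/memory coalescing on the GPU.
      rng = np.random.default_rng(0)
      pts = rng.random((10000, 2))
      cells = (pts * (2**16 - 1)).astype(np.int64)
      keys = np.array([morton_key(x, y) for x, y in cells])
      order = np.argsort(keys)
      pts_reordered = pts[order]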

  3. Numerical computation of space shuttle orbiter flow field

    NASA Technical Reports Server (NTRS)

    Tannehill, John C.

    1988-01-01

    A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.

  4. Needle position estimation from sub-sampled k-space data for MRI-guided interventions

    NASA Astrophysics Data System (ADS)

    Schmitt, Sebastian; Choli, Morwan; Overhoff, Heinrich M.

    2015-03-01

    MRI-guided interventions have gained much interest. They profit from intervention-synchronous data acquisition and image visualization. Due to long data acquisition durations, ergonomic limitations may occur. For a trueFISP MRI data acquisition sequence, a time-saving sub-sampling strategy has been developed that is adapted to amagnetic needle detection. A symmetrical and contrast-rich susceptibility needle artifact, i.e. an approximately rectangular gray-scale profile, is assumed. The 1-D Fourier transform of a rectangular function is a sinc function; its periodicity is exploited by sampling only along a few orthogonal trajectories in k-space. Because a needle moves during the intervention, its tip region resembles a rectangle in a time-difference image that is reconstructed from such sub-sampled k-spaces acquired at different time stamps. In phantom experiments, a needle was pushed forward along a reference trajectory determined from the needle holder's geometric parameters. In addition, the trajectory of the needle tip was estimated by the method described above. Only about 4 to 5% of the entire k-space data was used for needle tip estimation. The misalignment of needle orientation and needle tip position, i.e. the difference between reference and estimated values, is small, less than 2 mm even in the worst case. The results show that the method is applicable under nearly real conditions. Next steps address the validation of the method on clinical data.
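
    The toy example below illustrates the general idea under simplifying assumptions of our own (image size, number of sampled lines, and artifact size are all invented, and the sampling fraction is larger than the 4-5% reported): the needle artifact is modelled as a small rectangle, only a few orthogonal k-space lines are "acquired" at two time stamps, and the moving-needle region is located as the magnitude peak of the reconstructed difference image.

```python
# Hedged toy sketch of difference-image needle localization from a few orthogonal k-space lines.
import numpy as np

def frame(tip_row, n=128):
    img = np.zeros((n, n))
    img[tip_row - 2:tip_row + 2, 60:66] = 1.0        # rectangular needle artifact (assumed shape)
    return img

def sample_lines(img, rows, cols):
    k = np.fft.fft2(img)
    mask = np.zeros_like(k, dtype=bool)
    mask[rows, :] = True                             # a few horizontal k-space trajectories
    mask[:, cols] = True                             # a few vertical k-space trajectories
    return k * mask

rows = cols = np.arange(0, 128, 16)                  # illustrative sub-sampling pattern
k_t0 = sample_lines(frame(40), rows, cols)
k_t1 = sample_lines(frame(48), rows, cols)           # needle pushed forward between time stamps
diff = np.abs(np.fft.ifft2(k_t1 - k_t0))             # time-difference image from sub-sampled data
peak = np.unravel_index(np.argmax(diff), diff.shape)
print("estimated needle-artifact region (row, col):", peak)
```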

  5. Aviation & Space Curriculum Guide K-3. Revised.

    ERIC Educational Resources Information Center

    Alabama State Dept. of Education, Montgomery.

    This guide is designed for teachers of grades K-3 who have little or no experience in the area of aviation or space. The purpose of this guide is to provide an array of aviation and space activities which may be used by teachers to enrich locally-designed programs. Units in this book include: (1) History of Aerospace; (2) Kinds and Uses of…

  6. Joint 6D k-q Space Compressed Sensing for Accelerated High Angular Resolution Diffusion MRI.

    PubMed

    Cheng, Jian; Shen, Dinggang; Basser, Peter J; Yap, Pew-Thian

    2015-01-01

    High Angular Resolution Diffusion Imaging (HARDI) avoids the Gaussian diffusion assumption that is inherent in Diffusion Tensor Imaging (DTI), and is capable of characterizing complex white matter micro-structure with greater precision. However, HARDI methods such as Diffusion Spectrum Imaging (DSI) typically require significantly more signal measurements than DTI, resulting in prohibitively long scanning times. One of the goals in HARDI research is therefore to improve estimation of quantities such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF) with a limited number of diffusion-weighted measurements. A popular approach to this problem, Compressed Sensing (CS), affords highly accurate signal reconstruction using significantly fewer (sub-Nyquist) data points than required traditionally. Existing approaches to CS diffusion MRI (CS-dMRI) mainly focus on applying CS in the q-space of diffusion signal measurements and fail to take into consideration information redundancy in the k-space. In this paper, we propose a framework, called 6-Dimensional Compressed Sensing diffusion MRI (6D-CS-dMRI), for reconstruction of the diffusion signal and the EAP from data sub-sampled in both 3D k-space and 3D q-space. To our knowledge, 6D-CS-dMRI is the first work that applies compressed sensing in the full 6D k-q space and reconstructs the diffusion signal in the full continuous q-space and the EAP in continuous displacement space. Experimental results on synthetic and real data demonstrate that, compared with full DSI sampling in k-q space, 6D-CS-dMRI yields excellent diffusion signal and EAP reconstruction with low root-mean-square error (RMSE) using 11 times fewer samples (3-fold reduction in k-space and 3.7-fold reduction in q-space).

  7. An accurate method for computer-generating tungsten anode x-ray spectra from 30 to 140 kV.

    PubMed

    Boone, J M; Seibert, J A

    1997-11-01

    A tungsten anode spectral model using interpolating polynomials (TASMIP) was used to compute x-ray spectra at 1 keV intervals over the range from 30 kV to 140 kV. The TASMIP is not semi-empirical and uses no physical assumptions regarding x-ray production, but rather interpolates measured constant potential x-ray spectra published by Fewell et al. [Handbook of Computed Tomography X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1981)]. X-ray output measurements (mR/mAs measured at 1 m) were made on a calibrated constant potential generator in our laboratory from 50 kV to 124 kV, and with 0-5 mm added aluminum filtration. The Fewell spectra were slightly modified (numerically hardened) and normalized based on the attenuation and output characteristics of a constant potential generator and metal-insert x-ray tube in our laboratory. Then, using the modified Fewell spectra of different kVs, the photon fluence Φ at each 1 keV energy bin (E) over energies from 10 keV to 140 keV was characterized using polynomial functions of the form Φ(E) = a0[E] + a1[E]·kV + a2[E]·kV² + ... + an[E]·kVⁿ. A total of 131 polynomial functions were used to calculate accurate x-ray spectra, each function requiring between two and four terms. The resulting TASMIP algorithm produced x-ray spectra that match both the quality and quantity characteristics of the x-ray system in our laboratory. For photon fluences above 10% of the peak fluence in the spectrum, the average percent difference (and standard deviation) between the modified Fewell spectra and the TASMIP photon fluence was -1.43% (3.8%) for the 50 kV spectrum, -0.89% (1.37%) for the 70 kV spectrum, and for the 80, 90, 100, 110, 120, 130 and 140 kV spectra, the mean differences between spectra were all less than 0.20% and the standard deviations were less than approximately 1.1%. The model was also extended to include the effects of generator-induced kV ripple. Finally, the x-ray photon fluence in the units of
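
    A minimal sketch of evaluating such a per-bin polynomial model is given below; the coefficient array is purely hypothetical, since the published TASMIP coefficients are not reproduced here.

```python
# Hedged sketch: evaluate a TASMIP-style spectrum where each 1 keV energy bin has
# its own low-order polynomial in the tube potential kV (coefficients hypothetical).
import numpy as np

def tasmip_spectrum(kv, coeffs):
    """coeffs: (n_bins, n_terms) array of per-bin polynomial coefficients; returns fluence per bin."""
    powers = kv ** np.arange(coeffs.shape[1])        # [1, kV, kV^2, ...]
    phi = coeffs @ powers                             # polynomial evaluated for every energy bin
    return np.clip(phi, 0.0, None)                    # photon fluence cannot be negative

# energies = np.arange(10, 141)           # 1 keV bins from 10 to 140 keV
# coeffs = np.zeros((energies.size, 4))   # placeholder coefficients, up to four terms per bin
# spectrum_80kv = tasmip_spectrum(80.0, coeffs)
```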

  8. Computational methodologies for real-space structural refinement of large macromolecular complexes

    PubMed Central

    Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus

    2017-01-01

    The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875

  9. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The atmospheric radiation field has seen the development of more accurate and faster methods for taking absorption in participating media into account. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a sufficiently accurate method for computing cooling rates. Thanks to high performance computing, a multi-spectral approach to solving the Radiative Transfer Equation is most often used. Nevertheless, coupling three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) on Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit relation between emissivity functions and the linear absorption coefficient. In the cooling-to-space approximation, this analytical expression gives very accurate results compared to the correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm: one-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method, and the three-dimensional RTE under the grey-medium assumption is then solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.

  10. Respiratory motion resolved, self-gated 4D-MRI using Rotating Cartesian K-space (ROCK)

    PubMed Central

    Han, Fei; Zhou, Ziwu; Cao, Minsong; Yang, Yingli; Sheng, Ke; Hu, Peng

    2017-01-01

    Purpose To propose and validate a respiratory motion resolved, self-gated (SG) 4D-MRI technique to assess patient-specific breathing motion of abdominal organs for radiation treatment planning. Methods The proposed 4D-MRI technique was based on the balanced steady-state free-precession (bSSFP) technique and 3D k-space encoding. A novel ROtating Cartesian K-space (ROCK) reordering method was designed that incorporates repeatedly sampled k-space centerline as the SG motion surrogate and allows for retrospective k-space data binning into different respiratory positions based on the amplitude of the surrogate. The multiple respiratory-resolved 3D k-space data were subsequently reconstructed using a joint parallel imaging and compressed sensing method with spatial and temporal regularization. The proposed 4D-MRI technique was validated using a custom-made dynamic motion phantom and was tested in 6 healthy volunteers, in whom quantitative diaphragm and kidney motion measurements based on 4D-MRI images were compared with those based on 2D-CINE images. Results The 5-minute 4D-MRI scan offers high-quality volumetric images in 1.2×1.2×1.6mm3 and 8 respiratory positions, with good soft-tissue contrast. In phantom experiments with triangular motion waveform, the motion amplitude measurements based on 4D-MRI were 11.89% smaller than the ground truth, whereas a −12.5% difference was expected due to data binning effects. In healthy volunteers, the difference between the measurements based on 4D-MRI and the ones based on 2D-CINE were 6.2±4.5% for the diaphragm, 8.2±4.9% and 8.9±5.1% for the right and left kidney. Conclusion The proposed 4D-MRI technique could provide high resolution, high quality, respiratory motion resolved 4D images with good soft-tissue contrast and are free of the “stitching” artifacts usually seen on 4D-CT and 4D-MRI based on resorting 2D-CINE. It could be used to visualize and quantify abdominal organ motion for MRI-based radiation treatment
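
    The sketch below illustrates only the retrospective amplitude-binning step described above; the percentile-based bin edges and the function name are assumptions, not necessarily the exact rule used in the ROCK reordering.

```python
# Hedged sketch: assign each acquired k-space segment to one of n_bins respiratory
# positions based on the amplitude of its self-gating (SG) surrogate sample.
import numpy as np

def bin_by_amplitude(sg_amplitude, n_bins=8):
    """sg_amplitude: (n_segments,) surrogate values -> respiratory bin index per segment."""
    edges = np.percentile(sg_amplitude, np.linspace(0, 100, n_bins + 1))
    idx = np.searchsorted(edges[1:-1], sg_amplitude, side="right")
    return idx        # indices 0 .. n_bins-1, roughly equal segment counts per bin

# Segments sharing an index are pooled into one respiratory-resolved 3D k-space
# before the joint parallel-imaging / compressed-sensing reconstruction.
```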

  11. Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in (k, q)-Space.

    PubMed

    Sun, Jiaqi; Sakhaee, Elham; Entezari, Alireza; Vemuri, Baba C

    2015-01-01

    Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed-ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal S(q) in the q-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint (k, q)-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization. In this paper, we present a novel approach that uses partial Fourier sensing in the 6D space of (k, q) for the reconstruction of P(x, r). The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of P(x, r). Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial (k, q)-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectral imaging (DSI), which is commonly used as the benchmark for comparisons in the reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.

  12. Grassmann phase space methods for fermions. I. Mode theory

    NASA Astrophysics Data System (ADS)

    Dalton, B. J.; Jeffers, J.; Barnett, S. M.

    2016-07-01

    quantities. Averages of products of Grassmann stochastic variables at the initial time are also involved, but these are determined from the initial conditions for the quantum state. The detailed approach to the numerics is outlined, showing that (apart from standard issues in such numerics) numerical calculations for Grassmann phase space theories of fermion systems could be carried out without needing to represent Grassmann phase space variables on the computer, and only involving processes using c-numbers. We compare our approach to that of Plimak, Collett and Olsen and show that the two approaches differ. As a simple test case we apply the B distribution theory and solve the Ito stochastic equations to demonstrate coupling between degenerate Cooper pairs in a four-mode fermionic system involving spin-conserving interactions between the spin-1/2 fermions, where the modes involved have momenta -k and +k, each associated with spin-up and spin-down states.

  13. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  14. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames" and represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
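
    The following sketch illustrates the blending idea in the simplest possible terms, not the NASA implementation: the linear velocity is interpolated between the incoming and outgoing straight-line segments at a via frame using a cycloidal blend function (one of the options named above); the function names are assumptions.

```python
# Hedged sketch: cycloidal blending of linear velocity between two task-space segments.
import numpy as np

def cycloidal(tau):
    """Smooth 0 -> 1 blend with zero slope at both ends of the blend interval."""
    return tau - np.sin(2.0 * np.pi * tau) / (2.0 * np.pi)

def blended_velocity(v_in, v_out, tau, blend=cycloidal):
    """v_in, v_out: velocity vectors on the incoming/outgoing segments; tau in [0, 1]."""
    s = blend(np.clip(tau, 0.0, 1.0))
    return (1.0 - s) * np.asarray(v_in) + s * np.asarray(v_out)

# Halfway through the blend window the two segment velocities are weighted equally:
# blended_velocity([0.2, 0.0, 0.0], [0.0, 0.2, 0.0], 0.5)  ->  [0.1, 0.1, 0.0]
```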

  15. Does Prop-2-ynylideneamine, HC≡CCH=NH, Exist in Space? A Theoretical and Computational Investigation

    PubMed Central

    Osman, Osman I.; Elroby, Shaaban A.; Aziz, Saadullah G.; Hilal, Rifaat H.

    2014-01-01

    MP2, DFT and CCSD methods with 6-311++G** and aug-cc-pvdz basis sets have been used to probe the structural changes and relative energies of E-prop-2-ynylideneamine (I), Z-prop-2-ynylideneamine (II), prop-1,2-diene-1-imine (III) and vinyl cyanide (IV). The energy near-equivalence and provenance of preference of isomers and tautomers were investigated by NBO calculations using HF and B3LYP methods with 6-311++G** and aug-cc-pvdz basis sets. All substrates have Cs symmetry. The optimized geometries were found to be mainly theoretical method dependent. All selected levels of theory have computed an I/II total energy of isomerization (ΔE) of 1.707 to 3.707 kJ/mol in favour of II at 298.15 K. MP2 and CCSD methods have indicated clearly the preference of II over III, while the B3LYP functional predicted nearly similar total energies. All tested levels of theory yielded a global II/IV tautomerization total energy (ΔE) of 137.3–148.4 kJ/mol in support of IV at 298.15 K. The negative values of ΔS indicated that IV is favoured at low temperature. At high temperature, a reverse tautomerization becomes spontaneous and II is preferred. The existence of II in space was debated through the interpretation and analysis of the thermodynamic and kinetic studies of this tautomerization reaction and the presence of similar compounds in the Interstellar Medium (ISM). PMID:24950178

  16. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying measurement uncertainty of measurement results obtained by the k0-based neutron activation analysis ( k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates uncertainty of the final result—mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by having possibility of its incorporation into other applications (e.g., DLL and WWW server). Theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
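
    As a generic illustration of the kind of calculation such a tool automates, the sketch below forms a combined uncertainty from individual contributions and their propagation (sensitivity) factors in quadrature; the actual ERON formulae for the k0-NAA parameters are more detailed, and the values shown are purely illustrative.

```python
# Hedged sketch: GUM-style quadrature combination of uncertainty contributions,
# each weighted by its propagation (sensitivity) factor.
import math

def combined_uncertainty(contributions):
    """contributions: iterable of (propagation_factor, relative_uncertainty) pairs."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

# e.g. flux parameters, k0, Q0, efficiencies ... (numbers below are illustrative only)
# u_c = combined_uncertainty([(1.0, 0.012), (0.8, 0.020), (1.2, 0.005)])
```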

  17. Evaluation of partial k-space strategies to speed up time-domain EPR imaging.

    PubMed

    Subramanian, Sankaran; Chandramouli, Gadisetti V R; McMillan, Alan; Gullapalli, Rao P; Devasahayam, Nallathamby; Mitchell, James B; Matsumoto, Shingo; Krishna, Murali C

    2013-09-01

    Narrow-line spin probes derived from the trityl radical have led to the development of fast in vivo time-domain EPR imaging. Pure phase-encoding imaging modalities based on the single-point imaging scheme have demonstrated the feasibility of three-dimensional oximetric images with functional information in minutes. In this article, we explore techniques to improve the temporal resolution and circumvent the relatively short biological half-lives of trityl probes using partial k-space strategies. There are two main approaches: one involves the use of the Hermitian character of the k-space by which only part of the k-space is measured and the unmeasured part is generated using the Hermitian symmetry. This approach is limited in success by the accuracy of the numerical estimate of the phase roll in the k-space that corrupts the Hermiticity. The other approach is to measure only a judiciously chosen reduced region of k-space (a centrosymmetric ellipsoid region) that more or less accounts for >70% of the k-space energy. Both of these aspects were explored in Fourier transform-EPR imaging with a doubling of scan speed demonstrated by considering ellipsoid geometry of the k-space. Partial k-space strategies help improve the temporal resolution in studying fast dynamics of functional aspects in vivo with infused spin probes. Copyright © 2012 Wiley Periodicals, Inc.
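
    A minimal sketch of the first (Hermitian-symmetry) approach is given below: for a real-valued object, S(-k) = conj(S(k)), so an unmeasured half of k-space can be synthesized from the measured half. Phase errors, which the abstract notes corrupt Hermiticity in practice, are deliberately ignored in this sketch, and the function name is an assumption.

```python
# Hedged sketch: fill unmeasured k-space samples from their conjugate-symmetric partners.
import numpy as np

def fill_by_hermitian_symmetry(kspace, measured_mask):
    """kspace: complex 3D array (zeros where unmeasured); measured_mask: matching bool array."""
    filled = kspace.copy()
    # The conjugate-symmetric partner of index i along an axis of length N is (N - i) mod N.
    mirrored = np.conj(kspace[::-1, ::-1, ::-1])
    mirrored = np.roll(mirrored, shift=1, axis=(0, 1, 2))            # align k = 0 with k = 0
    mirror_mask = np.roll(measured_mask[::-1, ::-1, ::-1], shift=1, axis=(0, 1, 2))
    use = (~measured_mask) & mirror_mask                              # unmeasured, partner measured
    filled[use] = mirrored[use]
    return filled
```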

  18. Evaluation of Partial k-space strategies to speed up Time-domain EPR Imaging

    PubMed Central

    Subramanian, Sankaran; Chandramouli, Gadisetti VR; McMillan, Alan; Gullapalli, Rao P; Devasahayam, Nallathamby; Mitchell, James B.; Matsumoto, Shingo; Krishna, Murali C

    2012-01-01

    Narrow-line spin probes derived from the trityl radical have led to the development of fast in vivo time-domain EPR imaging. Pure phase-encoding imaging modalities based on the Single Point Imaging scheme (SPI) have demonstrated the feasibility of 3D oximetric images with functional information in minutes. In this paper, we explore techniques to improve the temporal resolution and circumvent the relatively short biological half-lives of trityl probes using partial k-space strategies. There are two main approaches: one involves the use of the Hermitian character of the k-space by which only part of the k-space is measured and the unmeasured part is generated using the Hermitian symmetry. This approach is limited in success by the accuracy of the numerical estimate of the phase roll in the k-space that corrupts the Hermiticity. The other approach is to measure only a judiciously chosen reduced region of k-space (a centrosymmetric ellipsoid region) that more or less accounts for >70% of the k-space energy. Both of these aspects were explored in FT-EPR imaging with a doubling of scan speed demonstrated by considering ellipsoid geometry of the k-space. Partial k-space strategies help improve the temporal resolution in studying fast dynamics of functional aspects in vivo with infused spin probes. PMID:23045171

  19. A space radiation transport method development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2004-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method optimized design. Published by Elsevier Ltd on behalf of COSPAR.

  20. A 1050 K Stirling space engine design

    NASA Technical Reports Server (NTRS)

    Penswick, L. Barry

    1988-01-01

    As part of the NASA CSTI High Capacity Power Program on Conversion Systems for Nuclear Applications, Sunpower, Inc. completed for NASA Lewis a reference design of a single-cylinder free-piston Stirling engine that is optimized for the lifetimes and temperatures appropriate for space applications. The NASA effort is part of the overall SP-100 program which is a combined DOD/DOE/NASA project to develop nuclear power for space. Stirling engines have been identified as a growth option for SP-100 offering increased power output and lower system mass and radiator area. Superalloy materials are used in the 1050 K hot end of the engine; the engine temperature ratio is 2.0. The engine design features simplified heat exchangers with heat input by sodium heat pipes, hydrodynamic gas bearings, a permanent magnet linear alternator, and a dynamic balance system. The design shows an efficiency (including the alternator) of 29 percent and a specific mass of 5.7 kg/kW. This design also represents a significant step toward the 1300 K refractory Stirling engine which is another growth option of SP-100.

  1. Computer aided system engineering for space construction

    NASA Technical Reports Server (NTRS)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  2. CSP: A Multifaceted Hybrid Architecture for Space Computing

    NASA Technical Reports Server (NTRS)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  3. Experimental Studies of NaK in a Simulated Space Environment

    NASA Technical Reports Server (NTRS)

    Gibons, Marc; Sanzi, James; Ljubanovic, Damir

    2011-01-01

    Space fission power systems are being developed at the National Aeronautics and Space Administration (NASA) and Department of Energy (DOE) with a short term goal of building a full scale, non-nuclear, Technology Demonstration Unit (TDU) test at NASA's Glenn Research Center. Due to the geometric constraints, mass restrictions, and fairly high temperatures associated with space reactors, liquid metals are typically used as the primary coolant. A eutectic mixture of sodium (22 percent) and potassium (78 percent), or NaK, has been chosen as the coolant for the TDU with a total system capacity of approximately 55 L. NaK, like all alkali metals, is very reactive, and warrants certain safety considerations. To adequately examine the risk associated with the personnel, facility, and test hardware during a potential NaK leak in the large scale TDU test, a small scale experiment was performed in which NaK was released in a thermal vacuum chamber under controlled conditions. The study focused on detecting NaK leaks in the vacuum environment as well as the molecular flow of the NaK vapor. This paper reflects the work completed during the NaK experiment and provides results and discussion relative to the findings.

  4. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
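
    The toy solver below illustrates only the core idea of using unorthonormalized residuals as the growing subspace basis, applied to a plain linear system; the published nKs algorithms for eigenvalue, Sylvester, and response problems are considerably more elaborate, and the function name is an assumption.

```python
# Hedged sketch: residual vectors are appended to the subspace basis without
# orthonormalization, and the projected least-squares problem is solved each step.
import numpy as np

def nks_linear_solve(A, b, tol=1e-10, max_iter=50):
    x = np.zeros_like(b, dtype=float)
    basis = []                                   # residuals, deliberately not orthonormalized
    r = b - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        basis.append(r.copy())
        V = np.column_stack(basis)
        # Projected problem: minimize || b - A V y ||_2 over the current subspace.
        y, *_ = np.linalg.lstsq(A @ V, b, rcond=None)
        x = V @ y
        r = b - A @ x
    return x

# A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
# np.allclose(A @ nks_linear_solve(A, b), b)     # -> True
```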

  5. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  6. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  7. Virtual k-Space Modulation Optical Microscopy

    NASA Astrophysics Data System (ADS)

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Zheng, Guoan; Fang, Yue; Xu, Yingke; Liu, Xu; So, Peter T. C.

    2016-07-01

    We report a novel superresolution microscopy approach for imaging fluorescence samples. The reported approach, termed virtual k-space modulation optical microscopy (VIKMOM), is able to improve the lateral resolution by a factor of 2, reduce the background level, improve the optical sectioning effect and correct for unknown optical aberrations. In the acquisition process of VIKMOM, we used a scanning confocal microscope setup with a 2D detector array to capture sample information at each scanned x-y position. In the recovery process of VIKMOM, we first modulated the captured data by virtual k-space coding and then employed a ptychography-inspired procedure to recover the sample information and correct for unknown optical aberrations. We demonstrated the performance of the reported approach by imaging fluorescent beads, fixed bovine pulmonary artery endothelial (BPAE) cells, and living human astrocytes (HA). As the VIKMOM approach is fully compatible with conventional confocal microscope setups, it may provide a turn-key solution for imaging biological samples with ~100 nm lateral resolution, in two or three dimensions, with improved optical sectioning capabilities and aberration correcting.

  8. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
      3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
      3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
      3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
      3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
      4.1. The case of a non-degenerate minimum point ([137], I)
      4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
      5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
      5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
      5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
      5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
      6.1. General situations
      6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
      6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
      7.1. Homogeneous fields and fields with constant dispersion
      7.2. Finitely many maximum points of dispersion
      7.3. Manifold of maximum points of dispersion
      7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian

  9. A Method for Measuring Collection Expansion Rates and Shelf Space Capacities.

    ERIC Educational Resources Information Center

    Sapp, Gregg; Suttle, George

    1994-01-01

    Describes an effort to quantify annual collection expansion and shelf space capacities with a computer spreadsheet program. Methods used to quantify the space taken at the beginning of the project; to estimate annual rate of collection growth; and to plot stack space and usage, volume equivalents and usage, and growth capacity are covered.…

  10. Why advanced computing? The key to space-based operations

    NASA Astrophysics Data System (ADS)

    Phister, Paul W., Jr.; Plonisch, Igor; Mineo, Jack

    2000-11-01

    The 'what is the requirement?' aspect of advanced computing and how it relates to and supports Air Force space-based operations is a key issue. In support of the Air Force Space Command's five major mission areas (space control, force enhancement, force applications, space support and mission support), two-fifths of the requirements have associated stringent computing/size implications. The Air Force Research Laboratory's 'migration to space' concept will eventually shift Science and Technology (S&T) dollars from predominantly airborne systems to airborne-and-space related S&T areas. One challenging 'space' area is in the development of sophisticated on-board computing processes for the next generation smaller, cheaper satellite systems. These new space systems (called microsats or nanosats) could be as small as a softball, yet perform functions that are currently being done by large, vulnerable ground-based assets. The Joint Battlespace Infosphere (JBI) concept will be used to manage the overall process of space applications coupled with advancements in computing. The JBI can be defined as a globally interoperable information 'space' which aggregates, integrates, fuses, and intelligently disseminates all relevant battlespace knowledge to support effective decision-making at all echelons of a Joint Task Force (JTF). This paper explores a single theme -- on-board processing is the best avenue to take advantage of advancements in high-performance computing, high-density memories, communications, and re-programmable architecture technologies. The goal is to break away from 'no changes after launch' design to a more flexible design environment that can take advantage of changing space requirements and needs while the space vehicle is 'on orbit.'

  11. Rapid high performance liquid chromatography method development with high prediction accuracy, using 5 cm long narrow-bore columns packed with sub-2 μm particles and Design Space computer modeling.

    PubMed

    Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin

    2009-11-06

    Many different strategies of reversed phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm × 2.1 mm columns packed with sub-2 μm particles and computer simulation (the DryLab® package). Data for the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported. An acceptable accuracy for these predictions of the computer models is presented. This work illustrates a method development strategy focusing on a time reduction of up to a factor of 3-5 compared to conventional HPLC method development, and exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8R1. Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.

  12. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

    NASA Astrophysics Data System (ADS)

    Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

    2018-04-01

    We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time harmonic wave electric field. This code addresses the need for a configuration space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.

  13. Respiratory motion-resolved, self-gated 4D-MRI using rotating cartesian k-space (ROCK).

    PubMed

    Han, Fei; Zhou, Ziwu; Cao, Minsong; Yang, Yingli; Sheng, Ke; Hu, Peng

    2017-04-01

    To propose and validate a respiratory motion resolved, self-gated (SG) 4D-MRI technique to assess patient-specific breathing motion of abdominal organs for radiation treatment planning. The proposed 4D-MRI technique was based on the balanced steady-state free-precession (bSSFP) technique and 3D k-space encoding. A novel rotating cartesian k-space (ROCK) reordering method was designed which incorporates repeatedly sampled k-space centerline as the SG motion surrogate and allows for retrospective k-space data binning into different respiratory positions based on the amplitude of the surrogate. The multiple respiratory-resolved 3D k-space data were subsequently reconstructed using a joint parallel imaging and compressed sensing method with spatial and temporal regularization. The proposed 4D-MRI technique was validated using a custom-made dynamic motion phantom and was tested in six healthy volunteers, in whom quantitative diaphragm and kidney motion measurements based on 4D-MRI images were compared with those based on 2D-CINE images. The 5-minute 4D-MRI scan offers high-quality volumetric images in 1.2 × 1.2 × 1.6 mm 3 and eight respiratory positions, with good soft-tissue contrast. In phantom experiments with triangular motion waveform, the motion amplitude measurements based on 4D-MRI were 11.89% smaller than the ground truth, whereas a -12.5% difference was expected due to data binning effects. In healthy volunteers, the difference between the measurements based on 4D-MRI and the ones based on 2D-CINE were 6.2 ± 4.5% for the diaphragm, 8.2 ± 4.9% and 8.9 ± 5.1% for the right and left kidney. The proposed 4D-MRI technique could provide high-resolution, high-quality, respiratory motion-resolved 4D images with good soft-tissue contrast and are free of the "stitching" artifacts usually seen on 4D-CT and 4D-MRI based on resorting 2D-CINE. It could be used to visualize and quantify abdominal organ motion for MRI-based radiation treatment planning. © 2017 American

  14. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  15. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.

    2011-07-01

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software

  16. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Moeckel, Marek; Wiedemann, Carsten; Flegel, Sven Kevin; Gelhaus, Johannes; Klinkrad, Heiner; Krag, Holger; Voersmann, Peter

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software

  17. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This document contains presentations given at Workshop on Computational Turbulence Modeling held 15-16 Sep. 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.

  18. Time domain simulation of harmonic ultrasound images and beam patterns in 3D using the k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Tumen, Mustafa; Cox, B T

    2011-01-01

    A k-space pseudospectral model is developed for the fast full-wave simulation of nonlinear ultrasound propagation through heterogeneous media. The model uses a novel equation of state to account for nonlinearity in addition to power law absorption. The spectral calculation of the spatial gradients enables a significant reduction in the number of required grid nodes compared to finite difference methods. The model is parallelized using a graphical processing unit (GPU) which allows the simulation of individual ultrasound scan lines using a 256 x 256 x 128 voxel grid in less than five minutes. Several numerical examples are given, including the simulation of harmonic ultrasound images and beam patterns using a linear phased array transducer.
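
    The sketch below shows the spectral gradient evaluation that underlies k-space and pseudospectral schemes in general: spatial derivatives are obtained by multiplying by ik in Fourier space, which is what permits grids as coarse as about two points per wavelength. It is a generic illustration, not code from the model described above.

```python
# Hedged sketch: FFT-based spectral derivative of a periodic field along the last axis.
import numpy as np

def spectral_gradient_x(f, dx):
    """d/dx of a periodic field sampled on a uniform grid (last axis is x)."""
    n = f.shape[-1]
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers
    return np.real(np.fft.ifft(1j * kx * np.fft.fft(f, axis=-1), axis=-1))

# x = np.linspace(0, 2*np.pi, 64, endpoint=False)
# np.allclose(spectral_gradient_x(np.sin(x), x[1] - x[0]), np.cos(x))   # -> True
```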

  19. Extending the Computer Revolution into Space

    NASA Technical Reports Server (NTRS)

    Deutsch, Leslie J.

    1999-01-01

    The computer revolution is far from over on Earth. It is just beginning in space. We can look forward to an era of enhanced scientific exploration of the solar system and even other star systems. We can look forward to the benefits of this space revolution for commercial uses on and around Earth.

  20. Evolution of a standard microprocessor-based space computer

    NASA Technical Reports Server (NTRS)

    Fernandez, M.

    1980-01-01

    An existing, in-inventory computer hardware/software package (B-1 RFS/ECM) was repackaged and applied to multiple missile/space programs. Concurrent with the application efforts, low-risk modifications were made to the computer from program to program to take advantage of newer, advanced technology and to meet increasingly demanding requirements (computational and memory capabilities, longer life, and fault-tolerant autonomy). It is concluded that microprocessors hold promise in a number of critical areas for future space computer applications. However, the benefits of the DoD VHSIC Program are required and the old proliferation problem must be revisited.

  1. White blood cell segmentation by color-space-based k-means clustering.

    PubMed

    Zhang, Congcong; Xiao, Xiaoyan; Li, Xiaomei; Chen, Ying-Jie; Zhen, Wu; Chang, Jun; Zheng, Chengyun; Liu, Zhi

    2014-09-01

    White blood cell (WBC) segmentation, which is important for cytometry, is a challenging issue because of the morphological diversity of WBCs and the complex and uncertain background of blood smear images. This paper proposes a novel method for the nucleus and cytoplasm segmentation of WBCs for cytometry. A color adjustment step was also introduced before segmentation. Color space decomposition and k-means clustering were combined for segmentation. A database including 300 microscopic blood smear images was used to evaluate the performance of our method. The proposed segmentation method achieves 95.7% and 91.3% overall accuracy for nucleus segmentation and cytoplasm segmentation, respectively. Experimental results demonstrate that the proposed method can segment WBCs effectively with high accuracy.

  2. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach, based on a fusion procedure which aims at combining several segmentation maps associated to simpler partition models in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same and simple (K-means based) clustering technique on an input image expressed in different color spaces. Our fusion strategy aims at combining these segmentation maps with a final clustering procedure using as input features, the local histogram of the class labels, previously estimated and associated to each site and for all these initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments herein reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.

  3. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming and, therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
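
    As a hedged illustration of the basic structure behind such designs, the following sketch generates a plain random Latin hypercube; the maximum projection designs reviewed in the article add an optimization criterion on top of this stratification, which is not reproduced here:

        import numpy as np

        def latin_hypercube(n, d, seed=0):
            """Random Latin hypercube: each of the d factors is stratified into n equal bins."""
            rng = np.random.default_rng(seed)
            u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # one point per bin
            for j in range(d):
                u[:, j] = u[rng.permutation(n), j]                 # decouple the columns
            return u

        X = latin_hypercube(20, 3)   # a 20-run design for a hypothetical 3-factor simulator
        print(X.min(axis=0), X.max(axis=0))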

  4. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming and, therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  5. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    NASA Astrophysics Data System (ADS)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. In addition, we generalize the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.

  6. NASA's 3D Flight Computer for Space Applications

    NASA Technical Reports Server (NTRS)

    Alkalai, Leon

    2000-01-01

    The New Millennium Program (NMP) Integrated Product Development Team (IPDT) for Microelectronics Systems was planning to validate a newly developed 3D Flight Computer system on its first deep-space flight, DS1, launched in October 1998. This computer, developed in the 1995-97 time frame, contains many new computer technologies never previously used in deep-space systems. They include: advanced 3D packaging architecture for future low-mass and low-volume avionics systems; high-density 3D packaged chip-stacks for both volatile and non-volatile mass memory: 400 Mbytes of local DRAM memory, and 128 Mbytes of Flash memory; a high-bandwidth Peripheral Component Interconnect (PCI) local bus with a bridge to VME; a high-bandwidth (20 Mbps) fiber-optic serial bus; and other attributes, such as standard support for Design for Testability (DFT). Even though this computer system was not completed in time for delivery to the DS1 project, it was an important development along a technology roadmap towards highly integrated and highly miniaturized avionics systems for deep-space applications. This continued technology development is now being performed by NASA's Deep Space System Development Program (also known as X2000) and within JPL's Center for Integrated Space Microsystems (CISM).

  7. Explorations in Space and Time: Computer-Generated Astronomy Films

    ERIC Educational Resources Information Center

    Meeks, M. L.

    1973-01-01

    Discusses the use of the computer animation technique to travel through space and time and watch models of astronomical systems in motion. Included is a list of eight computer-generated demonstration films entitled "Explorations in Space and Time."

  8. High-kVp Assisted Metal Artifact Reduction for X-ray Computed Tomography

    PubMed Central

    Xi, Yan; Jin, Yannan; De Man, Bruno; Wang, Ge

    2016-01-01

    In X-ray computed tomography (CT), the presence of metallic parts in patients causes serious artifacts and degrades image quality. Many algorithms were published for metal artifact reduction (MAR) over the past decades with various degrees of success but without a perfect solution. Some MAR algorithms are based on the assumption that metal artifacts are due only to strong beam hardening and may fail in the case of serious photon starvation. Iterative methods handle photon starvation by discarding or underweighting corrupted data, but the results are not always stable and they come with high computational cost. In this paper, we propose a high-kVp-assisted CT scan mode combining a standard CT scan with a few projection views at a high-kVp value to obtain critical projection information near the metal parts. This method only requires minor hardware modifications on a modern CT scanner. Two MAR algorithms are proposed: dual-energy normalized MAR (DNMAR) and high-energy embedded MAR (HEMAR), aiming at situations without and with photon starvation respectively. Simulation results obtained with the CT simulator CatSim demonstrate that the proposed DNMAR and HEMAR methods can eliminate metal artifacts effectively. PMID:27891293

  9. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  10. Consequences of using nonlinear particle trajectories to compute spatial diffusion coefficients. [for charged particles in interplanetary space

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.

    1976-01-01

    The propagation of charged particles through interstellar and interplanetary space has often been described as a random process in which the particles are scattered by ambient electromagnetic turbulence. In general, this changes both the magnitude and direction of the particles' momentum. Some situations for which scattering in direction (pitch angle) is of primary interest were studied. A perturbed orbit, resonant scattering theory for pitch-angle diffusion in magnetostatic turbulence was slightly generalized and then utilized to compute the diffusion coefficient for spatial propagation parallel to the mean magnetic field, Kappa. All divergences inherent in the quasilinear formalism when the power spectrum of the fluctuation field falls off as K to the minus Q power (Q less than 2) were removed. Various methods of computing Kappa were compared and limits on the validity of the theory discussed. For Q less than 1 or 2, the various methods give roughly comparable values of Kappa, but use of perturbed orbits systematically results in a somewhat smaller Kappa than can be obtained from quasilinear theory.

  11. Exhaustive search system and method using space-filling curves

    DOEpatents

    Spires, Shannon V.

    2003-10-21

    A search system and method for one agent or for multiple agents using a space-filling curve provides a way to control one or more agents to cover an area of any space of any dimensionality using an exhaustive search pattern. An example of the space-filling curve is a Hilbert curve. The search area can be a physical geography, a cyberspace search area, or an area searchable by computing resources. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace.
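
    A small sketch of the Hilbert-curve mapping that such an exhaustive search can follow (this is the standard index-to-coordinate conversion; the grid order and search area below are arbitrary examples, not taken from the patent):

        def d2xy(order, d):
            """Map index d along a Hilbert curve to (x, y) on a 2**order x 2**order grid."""
            x = y = 0
            t = d
            s = 1
            while s < 2 ** order:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                      # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        # An agent stepping through every cell of a 16 x 16 search area exactly once.
        path = [d2xy(4, d) for d in range(16 * 16)]
        assert len(set(path)) == 256             # exhaustive coverage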

  12. Replication of Space-Shuttle Computers in FPGAs and ASICs

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.

    2008-01-01

    A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.

  13. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
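
    A toy, single-coil, one-dimensional illustration of the data consistency error idea (real GRAPPA reconstructions use multi-coil data and 2-D kernels; the signal, kernel support, and sampling pattern below are assumptions made only to show how DCE is evaluated):

        import numpy as np

        # Toy 1-D "k-space" of a smooth object.
        n = 64
        obj = np.exp(-0.5 * ((np.arange(n) - n / 2) / 6.0) ** 2)
        ks = np.fft.fft(obj)

        acs = np.arange(24, 40)                # fully sampled calibration lines
        support = np.array([-1, 1])            # kernel support: the two adjacent lines

        # Calibrate an interpolation kernel on the ACS region (least squares).
        A = np.array([ks[t + support] for t in acs])
        b = ks[acs]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)

        # Synthesize the "missing" odd lines from the acquired even neighbours.
        est = ks.copy()
        for m in range(1, n - 1, 2):
            est[m] = ks[m + support] @ w

        # Data consistency error: re-estimate the *acquired* lines from the synthesized
        # ones and compare with what was actually measured (shift-invariance check).
        dce = sum(abs(ks[a] - est[a + support] @ w) ** 2 for a in range(2, n - 1, 2))
        print("DCE:", dce)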

  14. Empowering K-12 Students with Disabilities to Learn Computational Thinking and Computer Programming

    ERIC Educational Resources Information Center

    Israel, Maya; Wherfel, Quentin M.; Pearson, Jamie; Shehab, Saadeddine; Tapia, Tanya

    2015-01-01

    This article's focus is on including computing and computational thinking in K-12 instruction within science, technology, engineering, and mathematics (STEM) education, and to provide that instruction in ways that promote access for students traditionally underrepresented in the STEM fields, such as students with disabilities. Providing computing…

  15. A new computational method for reacting hypersonic flows

    NASA Astrophysics Data System (ADS)

    Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.

    2017-07-01

    Hypersonic gas dynamics computations are challenging due to the difficulty of obtaining reliable and robust chemistry models, which are usually added to the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations and the chemistry model equations together because these partial differential equations have different characteristic time scales. For these reasons, almost all known finite volume methods quickly fail to solve this second-order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is very closely linked to endothermic chemical reactions. A better prediction of wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate predictions of hypersonic heating in order to support Earth reentry capsule design.

  16. SU-F-J-158: Respiratory Motion Resolved, Self-Gated 4D-MRI Using Rotating Cartesian K-Space Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, F; Zhou, Z; Yang, Y

    Purpose: Dynamic MRI has been used to quantify respiratory motion of abdominal organs in radiation treatment planning. Many existing 4D-MRI methods based on 2D acquisitions suffer from limited slice resolution and additional stitching artifacts when evaluated in 3D [1]. To address these issues, we developed a 4D-MRI (3D dynamic) technique with true 3D k-space encoding and respiratory motion self-gating. Methods: The 3D k-space was acquired using a Rotating Cartesian K-space (ROCK) pattern, where the Cartesian grid was reordered in a quasi-spiral fashion with each spiral arm rotated by the golden angle [2]. Each quasi-spiral arm started with the k-space center-line, which was used as the self-gating signal [3] for respiratory motion estimation. The acquired k-space data were then binned into 8 respiratory phases, and the golden angle ensures a near-uniform k-space sampling in each phase. Finally, dynamic 3D images were reconstructed using the ESPIRiT technique [4]. 4D-MRI was performed on 6 healthy volunteers, using the following parameters (bSSFP, Fat-Sat, TE/TR = 2 ms/4 ms, matrix size = 500×350×120, resolution = 1×1×1.2 mm, TA = 5 min, 8 respiratory phases). Supplemental 2D real-time images were acquired in 9 different planes. Dynamic locations of the diaphragm dome and left kidney were measured from both 4D and 2D images. The same protocol was also performed on an MRI-compatible motion phantom where the motion was programmed with different amplitudes (10–30 mm) and frequencies (3–10/min). Results: High-resolution 4D-MRI was obtained successfully in 5 minutes. Quantitative motion measurements from 4D-MRI agree with the ones from 2D CINE (<5% error). The 4D images are free of the stitching artifacts and their near-isotropic resolution facilitates 3D visualization and segmentation of abdominal organs such as the liver, kidney and pancreas. Conclusion: Our preliminary studies demonstrated a novel ROCK 4D-MRI technique with true 3D k-space encoding and
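
    A schematic of the golden-angle arm ordering and respiratory-phase binning described in this record (the arm count, timing, and surrogate breathing signal are invented; real self-gating would derive the signal from the acquired k-space centre lines):

        import numpy as np

        GOLDEN_DEG = 137.508                    # golden-angle increment (assumed here)
        n_arms, n_phases = 400, 8

        # Each quasi-spiral arm of the Cartesian ky-kz grid is rotated by the golden angle.
        angles = np.deg2rad((np.arange(n_arms) * GOLDEN_DEG) % 360.0)

        # Stand-in self-gating signal (one sample per arm, ~0.25 Hz "breathing").
        t = np.arange(n_arms) * 0.15            # assumed ~0.15 s per arm
        resp = np.sin(2 * np.pi * 0.25 * t)

        # Amplitude binning of the arms into 8 respiratory phases.
        edges = np.quantile(resp, np.linspace(0, 1, n_phases + 1))
        phase = np.clip(np.digitize(resp, edges[1:-1]), 0, n_phases - 1)

        for p in range(n_phases):
            idx = np.where(phase == p)[0]
            # The golden-angle ordering is what keeps the angular coverage of the arms
            # in each phase close to uniform.
            print(f"phase {p}: {idx.size:3d} arms, angle spread "
                  f"{np.rad2deg(angles[idx]).std():5.1f} deg")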

  17. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  18. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  19. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    NASA Astrophysics Data System (ADS)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]

  20. Wave computation on the Poincaré dodecahedral space

    NASA Astrophysics Data System (ADS)

    Bachelot-Motet, Agnès

    2013-12-01

    We compute the waves propagating on a compact 3-manifold of constant positive curvature with a non-trivial topology: the Poincaré dodecahedral space, which is a plausible model of a multi-connected universe. We transform the Cauchy problem to a mixed problem posed on a fundamental domain determined by the quaternionic calculus. We adopt a variational approach using a space of finite elements that is invariant under the action of the binary icosahedral group. The computation of the transient waves is validated with their spectral analysis by computing a large number of eigenvalues of the Laplace-Beltrami operator.

  1. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, B.A.

    1999-07-27

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity. 12 figs.
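
    A simplified sketch of the eigenvector-based placement this patent describes (the similarity matrix is a made-up toy, and the patent's exact matrix construction and coordinate scaling may differ):

        import numpy as np

        # Toy symmetric similarity matrix for 5 items (e.g. shared-citation counts).
        S = np.array([[0, 3, 1, 0, 0],
                      [3, 0, 2, 0, 1],
                      [1, 2, 0, 1, 0],
                      [0, 0, 1, 0, 4],
                      [0, 1, 0, 4, 0]], dtype=float)

        # Eigen-decomposition; use the two leading eigenvectors (scaled by their
        # eigenvalues) as 2-D coordinates, so strongly related items land near each other.
        vals, vecs = np.linalg.eigh(S)
        order = np.argsort(vals)[::-1][:2]
        coords = vecs[:, order] * np.sqrt(np.abs(vals[order]))

        for i, (x, y) in enumerate(coords):
            print(f"item {i}: ({x:+.2f}, {y:+.2f})")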

  2. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, Bruce A.

    1999-01-01

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity.

  3. Computational Methods for Frictional Contact With Applications to the Space Shuttle Orbiter Nose-Gear Tire

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  4. G3X-K theory: A composite theoretical method for thermochemical kinetics

    NASA Astrophysics Data System (ADS)

    da Silva, Gabriel

    2013-02-01

    A composite theoretical method for accurate thermochemical kinetics, G3X-K, is described. This method is accurate to around 0.5 kcal mol-1 for barrier heights and 0.8 kcal mol-1 for enthalpies of formation. G3X-K is a modification of G3SX theory using the M06-2X density functional for structures and zero-point energies and parameterized for a test set of 223 heats of formation and 23 barrier heights. A reduced perturbation-order variant, G3X(MP3)-K, is also developed, providing around 0.7 kcal mol-1 accuracy for barrier heights and 0.9 kcal mol-1 accuracy for enthalpies, at reduced computational cost. Some opportunities to further improve Gn composite methods are identified and briefly discussed.

  5. Space methods in oceanology

    NASA Technical Reports Server (NTRS)

    Bolshakov, A. A.

    1985-01-01

    The study of Earth from space, with specialized satellites and from manned orbiting stations, has become an important part of the space programs. Among the broad complex of methods used for probing Earth from space are different methods for the study of ocean dynamics. The different methods of ocean observation are described.

  6. Atmospheric effect in three-space scenario for the Stokes-Helmert method of geoid determination

    NASA Astrophysics Data System (ADS)

    Yang, H.; Tenzer, R.; Vanicek, P.; Santos, M.

    2004-05-01

    According to the Stokes-Helmert method for the geoid determination by Vanicek and Martinec (1994) and Vanicek et al. (1999), the Helmert gravity anomalies are computed at the earth surface. To formulate the fundamental formula of physical geodesy, Helmert's gravity anomalies are then downward continued from the earth surface onto the geoid. This procedure, i.e., the inverse Dirichlet's boundary value problem, is realized by solving the Poisson integral equation. The above mentioned "classical" approach can be modified so that the inverse Dirichlet's boundary value problem is solved in the No Topography (NT) space (Vanicek et al., 2004) instead of in the Helmert (H) space. This technique has been introduced by Vanicek et al. (2003) and was used by Tenzer and Vanicek (2003) for the determination of the geoid in the region of the Canadian Rocky Mountains. According to this new approach, the gravity anomalies referred to the earth surface are first transformed into the NT-space. This transformation is realized by subtracting the gravitational attraction of topographical and atmospheric masses from the gravity anomalies at the earth surface. Since the NT-anomalies are harmonic above the geoid, the Dirichlet boundary value problem is solved in the NT-space instead of the Helmert space according to the standard formulation. After being obtained on the geoid, the NT-anomalies are transformed into the H-space to minimize the indirect effect on the geoidal heights. This step, i.e., transformation from NT-space to H-space is realized by adding the gravitational attraction of condensed topographical and condensed atmospheric masses to the NT-anomalies at the geoid. The effects of atmosphere in the standard Stokes-Helmert method was intensively investigated by Sjöberg (1998 and 1999), and Novák (2000). In this presentation, the effect of the atmosphere in the three-space scenario for the Stokes-Helmert method is discussed and the numerical results over Canada are shown.

  7. Biotelemetry and computer analysis of sleep processes on earth and in space.

    NASA Technical Reports Server (NTRS)

    Adey, W. R.

    1972-01-01

    Developments in biomedical engineering now permit study of states of sleep, wakefulness, and focused attention in man exposed to rigorous environments, including aerospace flight. These new sensing devices, data acquisition systems, and computational methods have also been extensively applied to clinical problems of disordered sleep. A 'library' of EEG data has been prepared for sleep in normal man, and characterized for its group features by computational analysis. Sleep in an astronaut in space flight has been examined for the first and second 'nights' of space flight. Normal 90-min cycles were detected during the second night. Sleep patterns in quadriplegic patients deprived of all sensory inputs below the neck have indicated major deviations.

  8. An efficient method for the computation of Legendre moments.

    PubMed

    Yap, Pew-Thian; Paramesran, Raveendran

    2005-12-01

    Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the images reconstructed from these moments have lower error than those of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time taken is much longer than with the proposed method.
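
    A short NumPy sketch of the exact-integration idea (integrating each Legendre polynomial analytically over the pixel intervals rather than sampling it at pixel centres; the mapping of the image onto [-1, 1] and the normalization follow the usual convention and are assumptions here, not the paper's exact formulation):

        import numpy as np
        from numpy.polynomial import legendre as leg

        def exact_legendre_moment(img, p, q):
            """Order-(p, q) Legendre moment with the polynomials integrated exactly
            over each pixel's interval on [-1, 1]."""
            rows, cols = img.shape
            x_edges = np.linspace(-1.0, 1.0, cols + 1)
            y_edges = np.linspace(-1.0, 1.0, rows + 1)
            # Antiderivatives of P_p and P_q, evaluated at the pixel edges.
            Ix = np.diff(leg.Legendre.basis(p).integ()(x_edges))
            Iy = np.diff(leg.Legendre.basis(q).integ()(y_edges))
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            return norm * (Iy @ img @ Ix)

        img = np.random.default_rng(0).random((32, 32))   # stand-in image
        print(exact_legendre_moment(img, 2, 3))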

  9. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.

  10. Space Transportation and the Computer Industry: Learning from the Past

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.; Rasky, D.

    2002-01-01

    Since the space shuttle began flying in 1981, NASA has made a number of attempts to advance the state of the art in space transportation. In spite of billions of dollars invested, and several concerted attempts, no replacement for the shuttle is expected before 2010. Furthermore, the cost of access to space has dropped very slowly over the last two decades. On the other hand, the same two decades have seen dramatic progress in the computer industry. Computational speeds have increased by about a factor of 1000, and available memory, disk space, and network bandwidth have seen similar increases. At the same time, the cost of computing has dropped by about a factor of 10000. Is the space transportation problem simply harder? Or is there something to be learned from the computer industry? In looking for the answers, this paper reviews the early history of NASA's experience with supercomputers and NASA's visionary course change in supercomputer procurement strategy.

  11. Crossing symmetry in alpha space

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; van Rees, Balt C.

    2017-11-01

    We initiate the study of the conformal bootstrap using Sturm-Liouville theory, specializing to four-point functions in one-dimensional CFTs. We do so by decomposing conformal correlators using a basis of eigenfunctions of the Casimir which are labeled by a complex number α. This leads to a systematic method for computing conformal block decompositions. Analyzing bootstrap equations in alpha space turns crossing symmetry into an eigenvalue problem for an integral operator K. The operator K is closely related to the Wilson transform, and some of its eigenfunctions can be found in closed form.

  12. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three
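
    A minimal sketch of using LOOCV k-NN error as the evaluation end-point for comparing normalizations (the data, the dye-bias model, and the crude per-array median-centring "normalization" below are all stand-ins, not the methods evaluated in the paper):

        import numpy as np
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        def loocv_knn_error(X, y, k=5):
            """Leave-one-out k-NN classification error used as the evaluation end-point."""
            acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y,
                                  cv=LeaveOneOut()).mean()
            return 1.0 - acc

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 200))                     # 40 arrays x 200 genes (toy data)
        y = np.repeat([0, 1], 20)                          # two biological classes
        # Artificial per-array, intensity-dependent dye bias, unrelated to class membership.
        bias = 0.8 * (np.arange(40) % 2)[:, None] * np.linspace(0, 1, 200)
        X_biased = X + bias

        # A crude stand-in normalization: per-array median centring.
        X_centred = X_biased - np.median(X_biased, axis=1, keepdims=True)
        print("biased  :", loocv_knn_error(X_biased, y))
        print("centred :", loocv_knn_error(X_centred, y))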

  13. Evaluation of normalization methods for cDNA microarray data by k-NN classification.

    PubMed

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-07-26

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double

  14. Lessons learned in creating spacecraft computer systems: Implications for using Ada (R) for the space station

    NASA Technical Reports Server (NTRS)

    Tomayko, James E.

    1986-01-01

    Twenty-five years of spacecraft onboard computer development have resulted in a better understanding of the requirements for effective, efficient, and fault-tolerant flight computer systems. Lessons from eight flight programs (Gemini, Apollo, Skylab, Shuttle, Mariner, Voyager, and Galileo) and three research programs (digital fly-by-wire, STAR, and the Unified Data System) are useful in projecting the computer hardware configuration of the Space Station and the ways in which the Ada programming language will enhance the development of the necessary software. The evolution of hardware technology, fault protection methods, and software architectures used in space flight is reviewed in order to provide insight into the pending development of such items for the Space Station.

  15. Space-time VMS computation of wind-turbine rotor and tower aerodynamics

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; McIntyre, Spenser; Kostov, Nikolay; Kolesar, Ryan; Habluetzel, Casey

    2014-01-01

    We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We

  16. Space-Time VMS Computation of Wind-Turbine Rotor and Tower Aerodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Spenser W.

    This thesis is on the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We

  17. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. As sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, a faster and more accurate computational method for shape optimization is always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate fast shape changes in the design stage of a consumer product or machinery, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of the words volumetric and pixel) is essentially a volume element defined by the uniform lattice, and does not require separate connectivity information as a mesh element does. In the presented method, geometry is represented with voxels that can easily adapt to shape changes, and it is therefore more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures with simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is a key component of an accurate analysis. A sensitivity study showed that it required at least 6 elements per acoustic wavelength, and a complexity study showed a minimal reduction in computational time.
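
    A bare-bones sketch of the wave superposition idea itself (free-space monopole fields summed at field points); the source positions, strengths, and frequency are invented for illustration, whereas in the thesis the strengths would be solved from the volume velocity of the voxelized surface:

        import numpy as np

        c, f = 343.0, 1000.0                 # speed of sound, source frequency (assumed)
        k = 2 * np.pi * f / c

        # Hypothetical equivalent (monopole) sources placed inside the radiating body.
        src = np.array([[0.00, 0.00, 0.0],
                        [0.02, 0.00, 0.0],
                        [0.00, 0.02, 0.0]])
        q = np.array([1.0, 0.5, 0.8])        # assumed source strengths

        def pressure(points):
            """Superposed free-space monopole fields at the given field points."""
            r = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
            return (q[None, :] * np.exp(1j * k * r) / (4 * np.pi * r)).sum(axis=1)

        field = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        print(np.abs(pressure(field)))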

  18. Space Ultrareliable Modular Computer (SUMC) instruction simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1972-01-01

    The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.

  19. Multidimensional Space-Time Methodology for Development of Planetary and Space Sciences, S-T Data Management and S-T Computational Tomography

    NASA Astrophysics Data System (ADS)

    Andonov, Zdravko

    Complex Time and Quantum Wave Cosmology Paradigm for Decision of the Main Problem of Contemporary Physics. 3. R&D of Einstein-Minkowski Geodesics' Paradigm in the 4D-Space-Time Continuum to 6D-6nD Space-Time Continuum Paradigms and 6D S-T Equations. 4. R&D of Erwin Schrödinger's 4D S-T Universe Evolutional Equation; its David Bohm 4D generalization for anisotropic media and an innovative 6D version, for instantaneous quantum measurement, the Bohm-Schrödinger 6D S-T Universe Evolutional Equation. 5. R&D of brand new 6D Planning of S-T Experiments, brand new 6D Space Techniques and Space Technology Generalizations, especially for 6D RS VHRS Research, Monitoring and 6D Computational Tomography. 6. R&D of "6D Euler-Poisson Equations" and "6D Kolmogorov Turbulence Theory" for GeoDynamics and for Space Dynamics as evolution of Gauss-Riemann Paradigms. 7. R&D of N. Boneff NASA RD for Asteroid "Eros" & Space Science Laws Evolution. 8. R&D of H. Poincaré Paradigm for Nature and Cosmos as 6D Group of Transferences. 9. R&D of K. Popoff N-Body General Problem & General Thermodynamic S-T Theory as Einstein-Prigogine-Landau Paradigms Development. 10. R&D of the 1st GUT since 1958 by N. S. Kalitzin (Kalitzin N. S., 1958: Über eine einheitliche Feldtheorie. ZAHeidelberg-ARI, WZHUmnR-B., 7 (2), 207-215) and "Multitemporal Theory of Relativity", with special applications to Photon Rockets and all Space-Time R&D. GENERAL CONCLUSION: Multidimensional Space-Time Methodology is an advance in space research, corresponding to the IAF-IAA-COSPAR Innovative Strategy and R&D Programs -UNEP, UNDP, GEOSS, GMES, Etc.

  20. Universal computer test stand (recommended computer test requirements). [for space shuttle computer evaluation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Techniques are considered which would be used to characterize aerospace computers with the space shuttle application as end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system, required as a laboratory tool to test aerospace computers, is defined. Hardware and software baselines and additions necessary to interface the UTE to aerospace computers for test purposes are outlined.

  1. Space Access for Small Satellites on the K-1

    NASA Astrophysics Data System (ADS)

    Faktor, L.

    The lack of affordable access to space remains a major obstacle to realizing the increasing potential of small satellite systems. On a per kilogram basis, small launch vehicles are simply too expensive for the budgets of many small satellite programs. Opportunities for rideshare with larger payloads on larger launch vehicles are still rare, given the complications associated with coordinating delivery schedules and deployment orbits. Existing contractual mechanisms are also often inadequate to facilitate the launch of multiple payload customers on the same flight. Kistler Aerospace Corporation is committed to lowering the price and enhancing the availability of space access for small satellite programs through the fully-reusable K-1 launch vehicle. Kistler has been working with a number of entities, including Astrium Ltd., AeroAstro, and NASA, to develop innovative approaches to small satellite missions. The K-1 has been selected by NASA as a Flight Demonstration Vehicle for the Space Launch Initiative. NASA has purchased the flight results from the first four K-1 launches on the performance of 13 advanced launch vehicle technologies embedded in the K-1 vehicle. On K-1 flights #2-#4, opportunities exist for small satellites to rideshare to low-earth orbit for a low launch price. Kistler's flight demonstration contract with NASA also includes options to fly Add-on Technology Experiment flights. Opportunities exist for rideshare payloads on these flights as well. Both commercial and government customers may take advantage of the rideshare pricing. Kistler is investigating the feasibility of flying dedicated, multiple small payload missions. Such a mission would launch multiple small payloads from a single customer or small payloads from different customers. The orbit would be selected to be compatible with the requirements of as many small payload customers as possible, and make use of reusable hardware, standard interfaces (such as the existing MPAS) and verification plans

  2. Computed Tomographic Evaluation of K3 Rotary and Stainless Steel K File Instrumentation in Primary Teeth

    PubMed Central

    Kavitha, Swaminathan; Thomas, Eapen; Anadhan, Vasanthakumari; Vijayakumar, Rajendran

    2016-01-01

    Introduction The intention of root canal preparation is to reduce infected content and create a root canal shape allowing for a well-condensed root filling. Therefore, it is not necessary to remove excessive dentine for successful root canal preparation, and care must be taken not to over-instrument, as perforations can occur in the thin dentinal walls of primary molars. Aim This study was done to evaluate the preparation time, the risk of lateral perforation, and the dentine removal of stainless steel K file and K3 rotary instrumentation in primary teeth. Materials and Methods Seventy-five primary molars were selected and divided into three groups. Using spiral computed tomography, the teeth were scanned before instrumentation. Teeth were prepared using a stainless steel K file for the manual technique. All the canals were prepared up to file size 35. With K3 rotary files (.02 taper), instrumentation was done up to a size 35 file. With K3 rotary files (.04 taper), instrumentation was done up to a size 25 file, and the instrumentation time was recorded simultaneously. The instrumented teeth were once again scanned and the images were compared with the images of the uninstrumented canals. Statistical Analysis Data were statistically analysed using Kruskal-Wallis One-way ANOVA, Mann-Whitney U-Test and Pearson's Chi-square Test. Results K3 rotary files (.02 taper) removed significantly less dentine and required less instrumentation time than the stainless steel K file. Conclusion K3 files (.02 taper) generated less dentine removal than the stainless steel K file and K3 files (.04 taper). K3 rotary files (.02 taper) were more effective for root canal instrumentation in primary teeth. PMID:26894166

  3. Computer simulation of space station computer steered high gain antenna

    NASA Technical Reports Server (NTRS)

    Beach, S. W.

    1973-01-01

    The mathematical modeling and programming of a complete simulation program for a space station computer-steered high gain antenna are described. The program provides for reading input data cards, numerically integrating up to 50 first order differential equations, and monitoring up to 48 variables on printed output and on plots. The program system consists of a high gain antenna, an antenna gimbal control system, an on board computer, and the environment in which all are to operate.

  4. Rad-hard computer elements for space applications

    NASA Technical Reports Server (NTRS)

    Krishnan, G. S.; Longerot, Carl D.; Treece, R. Keith

    1993-01-01

    Space Hardened CMOS computer elements emulating a commercial microcontroller and microprocessor family have been designed, fabricated, qualified, and delivered for a variety of space programs including NASA's multiple launch International Solar-Terrestrial Physics (ISTP) program, Mars Observer, and government and commercial communication satellites. Design techniques and radiation performance of the 1.25 micron feature size products are described.

  5. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  6. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  7. Initial Test Results from a 6 K-10 K Turbo-Brayton Cryocooler for Space Applications

    NASA Astrophysics Data System (ADS)

    Swift, W. L.; Zagarola, M. V.; Breedlove, J. J.; McCormick, J. A.; Sixsmith, H.

    2004-06-01

    In March 2002, a single-stage turbo-Brayton cryocooler was installed on the Hubble Space Telescope (HST) to re-establish cooling to the detectors in the Near Infrared Camera and Multi-Object Spectrograph (NICMOS). The system has maintained the detectors at their operating temperature near 77 K since that time. Future NASA space missions require comparable low-vibration cooling for periods of five to ten years in the 6 K-10 K temperature range. Creare is extending the NICMOS cryocooler technology to meet these lower temperatures. The primary activities address the need for smaller turbomachines. Two helium compressors for a 6 K turbo-Brayton cycle have been developed and tested in a cryogenic test facility. They have met performance goals at design speeds of about 9,500 rev/s. A miniature, dual-temperature high specific speed turboalternator has been installed in this test facility and has been used to obtain extended operational life data during low temperature cryogenic tests. A smaller, low specific speed turboalternator using advanced gas bearings is under development to replace the original dual-temperature design. This machine should provide improvements in the thermodynamic performance of the cycle. This paper presents life test results for the low temperature system and discusses the development of the smaller turboalternator.

  8. High performance flight computer developed for deep space applications

    NASA Technical Reports Server (NTRS)

    Bunker, Robert L.

    1993-01-01

    The development of an advanced space flight computer for real-time embedded deep space applications which embodies the lessons learned on Galileo and modern computer technology is described. The requirements are listed, and the design implementation that meets those requirements is described. The development of SPACE-16 (Spaceborne Advanced Computing Engine), where 16 designates the databus width, was initiated to support the MM2 (Mariner Mark II) project. The computer is based on a radiation-hardened emulation of a modern 32-bit microprocessor and its family of support devices, including a high performance floating point accelerator. Additional custom devices, including a coprocessor to improve input/output capabilities, a memory interface chip, and a support chip that manages all fault-tolerant features, are described. Detailed supporting analyses and rationale that justify specific design and architectural decisions are provided. The six chip types were designed and fabricated. Testing and evaluation of a brassboard was initiated.

  9. Rank-k modification methods for recursive least squares problems

    NASA Astrophysics Data System (ADS)

    Olszanskyj, Serge; Lebak, James; Bojanczyk, Adam

    1994-09-01

    In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems.
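
    A minimal numerical sketch of the rank-k updating idea (not the LINPACK, CSNE, or Gram-Schmidt variants analyzed in the paper): when k new rows arrive, the triangular factor of the augmented matrix can be recomputed from the small stacked matrix [R; U] rather than refactoring all rows. The matrix sizes and the normal-equations solve are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 8, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Factor the original problem once.
R = np.linalg.qr(A, mode="r")                 # upper-triangular factor of A

# Rank-k update: k new observation rows arrive.
U = rng.standard_normal((k, n))
bu = rng.standard_normal(k)

# The updated triangular factor is the R of the small stacked matrix [R; U],
# which costs O((n + k) n^2) instead of refactoring all m + k rows.
R_new = np.linalg.qr(np.vstack([R, U]), mode="r")

# Solve the updated least-squares problem via the normal equations
# R_new^T R_new x = A_new^T b_new (CSNE would add a correction step).
A_new = np.vstack([A, U])
b_new = np.concatenate([b, bu])
x = np.linalg.solve(R_new.T @ R_new, A_new.T @ b_new)

x_ref, *_ = np.linalg.lstsq(A_new, b_new, rcond=None)
print("max abs difference vs. full refactorization:", np.abs(x - x_ref).max())
```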

  10. Efficient computation of k-Nearest Neighbour Graphs for large high-dimensional data sets on GPU clusters.

    PubMed

    Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M

    2013-01-01

    This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large, high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring hitherto impossible k-NNG generation for a dataset of twenty million images with 15,000 dimensions into the realm of practical possibility.
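
    A CPU-side NumPy sketch of the brute-force exact k-NNG construction that such GPU implementations tile and parallelize; the multi-node and multi-GPU decomposition from the paper is not reproduced here, and the data are random placeholders.

```python
import numpy as np

def knn_graph(X, k):
    """Brute-force exact k-nearest-neighbour graph (CPU sketch).

    Computes the full pairwise squared-Euclidean distance matrix in one shot,
    which is what a GPU implementation would tile across devices."""
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # ||xi - xj||^2
    np.fill_diagonal(d2, np.inf)                        # exclude self-edges
    # argpartition gives the k smallest per row without a full sort.
    idx = np.argpartition(d2, k, axis=1)[:, :k]
    rows = np.arange(X.shape[0])[:, None]
    order = np.argsort(d2[rows, idx], axis=1)           # sort the k neighbours
    return idx[rows, order]

X = np.random.default_rng(1).standard_normal((2000, 64))
G = knn_graph(X, k=10)
print(G.shape)   # (2000, 10) neighbour indices per point
```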

  11. The expanded role of computers in Space Station Freedom real-time operations

    NASA Technical Reports Server (NTRS)

    Crawford, R. Paul; Cannon, Kathleen V.

    1990-01-01

    The challenges that NASA and its international partners face in their real-time operation of the Space Station Freedom necessitate an increased role on the part of computers. In building the operational concepts concerning the role of the computer, the Space Station program is using lessons learned experience from past programs, knowledge of the needs of future space programs, and technical advances in the computer industry. The computer is expected to contribute most significantly in real-time operations by forming a versatile operating architecture, a responsive operations tool set, and an environment that promotes effective and efficient utilization of Space Station Freedom resources.

  12. Exploring Space on the Computer

    NASA Technical Reports Server (NTRS)

    Bozym, Patrick

    2004-01-01

    For the past year Dennis Stocker has been in the process of developing pencil-and-paper games which are fun, challenging, and educational for middle school and high school students. The latest version of these pencil-and-paper games is Spaceship Commander. The objective of the game is to earn points by plotting the flight path of a spaceship so astronauts can perform microgravity experiments and make short-range measurements of other planets. During my ten weeks here at the GRC my goal is to create a computer-based version of Spaceship Commander. During the development of this game the primary focus has been on making it as educational and fun for the student as possible. The main educational objective of this game is to give students an understanding of forces and motion, including gravity. This is done by incorporating Newton's laws into the game. For example, a spacecraft in the video game experiences a gravitational force applied to it by planets. The software I am using to create this game is a freeware application called Game Maker. Game Maker allows novice computer programmers like me to create arcade-style games using a visual drag-and-drop interface. By using functions provided by Game Maker and a few I have written myself, I have been able to create a few simple computer games. Currently the computer game allows the student to navigate a spaceship around planets and asteroids by using the arrow keys on the numeric keypad. Each time an arrow key is pressed by the student, the corresponding acceleration of the spaceship is seen on the screen. Points are earned by navigating the spaceship close enough to planets to gather scientific data. However, the game encourages the student to plan his or her course carefully, because if the student gets too close to a planet they may not be able to escape the planet's gravity and will crash into the planet. The next step in the game development is to include a launch sequence which allows the student to launch from

  13. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to compressible Navier-Stokes equations is studied. In contrast to Euler equations solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigations and validations. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  14. A new smooth-k space filter approach to calculate halo abundances

    NASA Astrophysics Data System (ADS)

    Leo, Matteo; Baugh, Carlton M.; Li, Baojiu; Pascoli, Silvia

    2018-04-01

    We propose a new filter, a smooth-k space filter, to use in the Press-Schechter approach to model the dark matter halo mass function which overcomes shortcomings of other filters. We test this against the mass function measured in N-body simulations. We find that the commonly used sharp-k filter fails to reproduce the behaviour of the halo mass function at low masses measured from simulations of models with a sharp truncation in the linear power spectrum. We show that the predictions with our new filter agree with the simulation results over a wider range of halo masses for both damped and undamped power spectra than is the case with the sharp-k and real-space top-hat filters.
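
    A small sketch of where such a filter enters a Press-Schechter-type calculation: the mass variance σ²(R) is an integral of the linear power spectrum weighted by the squared window function. The smooth-k functional form used below, W(kR) = 1/(1 + (kR)^β), the value of β, and the toy damped power spectrum are assumptions for illustration, not the calibrated filter or cosmology from the paper.

```python
import numpy as np

def sigma2(R, k, Pk, filt):
    """Mass variance sigma^2(R) = (1 / 2 pi^2) * integral of k^2 P(k) W^2(kR) dk."""
    W = filt(k * R)
    f = k * k * Pk * W * W
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)) / (2.0 * np.pi ** 2)

# Filter choices used in Press-Schechter-type calculations.
top_hat  = lambda x: 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3   # real-space top hat
sharp_k  = lambda x: (x <= 1.0).astype(float)                     # sharp-k cutoff
smooth_k = lambda x, beta=4.0: 1.0 / (1.0 + x ** beta)            # smooth-k (assumed form)

# Toy damped power spectrum (placeholder, not a fitted cosmology).
k = np.logspace(-3, 2, 4000)
Pk = k / (1.0 + (k / 5.0) ** 4) ** 2

for name, f in [("top-hat", top_hat), ("sharp-k", sharp_k), ("smooth-k", smooth_k)]:
    print(f"sigma^2(R=1) with {name:9s} filter: {sigma2(1.0, k, Pk, f):.4f}")
```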

  15. Reachability in K3,3-Free Graphs and K5-Free Graphs Is in Unambiguous Log-Space

    NASA Astrophysics Data System (ADS)

    Thierauf, Thomas; Wagner, Fabian

    We show that the reachability problem for directed graphs that are either K3,3-free or K5-free is in unambiguous log-space, UL ∩ coUL. This significantly extends the result of Bourke, Tewari, and Vinodchandran that the reachability problem for directed planar graphs is in UL ∩ coUL.

  16. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrites the multi-group k-eigenvalue problem as a nonlinear system of equations and solves the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilizes Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
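
    For context, the baseline that all of these schemes accelerate is the classic, unaccelerated power iteration for the k-eigenvalue problem M φ = (1/k) F φ. The sketch below uses small dense matrices as stand-ins for the transport (loss) and fission operators; it is not the JFNK/NKA or HOLO formulation itself, and the operators are synthetic.

```python
import numpy as np

def power_iteration_k(M, F, tol=1e-10, max_it=500):
    """Unaccelerated power iteration for M phi = (1/k) F phi.

    Each outer iteration applies M^{-1}, the analogue of one transport sweep;
    JFNK, NKA, and HOLO methods all aim to cut the number of such sweeps."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for it in range(max_it):
        src = F @ phi / k
        phi_new = np.linalg.solve(M, src)            # "transport sweep" stand-in
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
        if abs(k_new - k) < tol:
            return k_new, phi_new / np.linalg.norm(phi_new), it + 1
        k, phi = k_new, phi_new / np.linalg.norm(phi_new)
    return k, phi, max_it

# Tiny synthetic loss (M) and fission (F) operators, not a real transport model.
rng = np.random.default_rng(2)
A = rng.random((20, 20))
M = np.diag(A.sum(axis=1) + 1.0) - 0.5 * A           # diagonally dominant "loss"
F = 0.3 * rng.random((20, 20))
k_eff, phi, iters = power_iteration_k(M, F)
print(f"k_eff = {k_eff:.6f} after {iters} iterations")
```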

  17. Interactive visualization of Earth and Space Science computations

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise

    1994-01-01

    Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.

  18. Outreach to Space Scientists Interested in K-12 Education

    NASA Technical Reports Server (NTRS)

    Morrow, Cherilynn A

    1998-01-01

    This is the final report for work on outreach to space scientists interested in K-12 education. It outlines what was accomplished during the two years of support and the one-year no-cost extension (October 1995-September 1998).

  19. Emulation: A fast stochastic Bayesian method to eliminate model space

    NASA Astrophysics Data System (ADS)

    Roberts, Alan; Hobbs, Richard; Goldstein, Michael

    2010-05-01

    Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem, traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, using both these kinds of schemes has proved prohibitively expensive, both in computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much
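
    A minimal sketch of the emulation-and-screening workflow described here, using a one-dimensional model space, a polynomial surrogate in place of a Gaussian-process emulator, and a conventional 3-sigma implausibility cutoff; the stand-in simulator, observed value, and variances are invented for illustration.

```python
import numpy as np

# Stand-in "expensive" forward simulator (1-D model space for illustration).
def simulator(theta):
    return np.sin(3.0 * theta) + 0.5 * theta

# A few training runs of the full simulator.
theta_train = np.linspace(0.0, 2.0, 8)
y_train = simulator(theta_train)

# Cheap emulator: low-order polynomial fit (a Gaussian process would also
# supply a pointwise code uncertainty; here it is estimated from residuals).
coeffs = np.polyfit(theta_train, y_train, deg=4)
emulate = lambda t: np.polyval(coeffs, t)
code_var = np.var(y_train - emulate(theta_train))

# Screen a dense model grid with an implausibility measure
# I(theta) = |z_obs - emulator(theta)| / sqrt(obs variance + code variance).
z_obs, obs_var = 1.2, 0.05 ** 2
theta_grid = np.linspace(0.0, 2.0, 2001)
implausibility = np.abs(z_obs - emulate(theta_grid)) / np.sqrt(obs_var + code_var)
kept = theta_grid[implausibility < 3.0]          # conventional 3-sigma cutoff
print(f"{100 * kept.size / theta_grid.size:.1f}% of model space retained")
```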

  20. Instructional computing in space physics moves ahead

    NASA Astrophysics Data System (ADS)

    Russell, C. T.; Omidi, N.

    As the number of spacecraft stationed in the Earth's magnetosphere grows exponentially and society becomes more technologically sophisticated and dependent on these space-based resources, both the importance of space physics and the need to train people in this field will increase. Space physics is a very difficult subject for students to master. Both mechanical and electromagnetic forces are important. The treatment of problems can be very mathematical, and the scale sizes of phenomena are usually such that laboratory studies become impossible, and experimentation, when possible at all, must be carried out in deep space. Fortunately, computers have evolved to the point that they are able to greatly facilitate instruction in space physics.

  1. Exact Dispersion Study of an Asymmetric Thin Planar Slab Dielectric Waveguide without Computing d²β/dk² Numerically

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Sanjeev Kumar; Palodiya, Vikram

    2017-08-01

    Waveguide dispersion can be tailored, but not the material dispersion. Hence, the total dispersion can be shifted to any desired band by adjusting the waveguide dispersion. Waveguide dispersion is proportional to d²β/dk² and normally needs to be computed numerically. In this paper, we derive an analytical expression for d²β/dk² in terms of β, which itself must be computed accurately by numerical techniques, to roughly 10⁻⁵; this constraint sometimes generates error in the calculation of waveguide dispersion. To formulate the problem we use the graphical method. Our study reveals that we can compute the waveguide dispersion accurately enough for various modes by knowing β only.
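
    The numerical difficulty alluded to above can be seen directly: a central second difference of β(k) amplifies any error in β by 1/h², so a propagation constant known only to about 10⁻⁵ yields a noticeably noisy d²β/dk². The β(k) curve below is a smooth placeholder, not the slab-waveguide eigenvalue solution from the paper.

```python
import numpy as np

# Sample propagation-constant curve beta(k); a smooth placeholder only.
k = np.linspace(5.0, 15.0, 2001)          # free-space wavenumber grid
beta = 1.45 * k - 0.002 * k ** 2 + 0.1 * np.log(k)

# Central second difference, the usual numerical route to waveguide dispersion.
h = k[1] - k[0]
d2beta = (beta[2:] - 2.0 * beta[1:-1] + beta[:-2]) / h ** 2

# Tiny noise in beta (a root located only to ~1e-5) is amplified by 1/h^2,
# which is the numerical difficulty an analytic expression avoids.
beta_noisy = beta + 1e-5 * np.random.default_rng(0).standard_normal(beta.size)
d2_noisy = (beta_noisy[2:] - 2.0 * beta_noisy[1:-1] + beta_noisy[:-2]) / h ** 2
print("max error from 1e-5 noise in beta:", np.abs(d2_noisy - d2beta).max())
```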

  2. Calorimetric thermal-vacuum performance characterization of the BAe 80 K space cryocooler

    NASA Technical Reports Server (NTRS)

    Kotsubo, V. Y.; Johnson, D. L.; Ross, R. G., Jr.

    1992-01-01

    A comprehensive characterization program is underway at JPL to generate test data on long-life, miniature Stirling-cycle cryocoolers for space application. The key focus of this paper is on the thermal performance of the British Aerospace (BAe) 80 K split-Stirling-cycle cryocooler as measured in a unique calorimetric thermal-vacuum test chamber that accurately simulates the heat-transfer interfaces of space. Two separate cooling fluid loops provide precise individual control of the compressor and displacer heatsink temperatures. In addition, heatflow transducers enable calorimetric measurements of the heat rejected separately by the compressor and displacer. Cooler thermal performance has been mapped for coldtip temperatures ranging from below 45 K to above 150 K, for heatsink temperatures ranging from 280 K to 320 K, and for a wide variety of operational variables including compressor-displacer phase, compressor-displacer stroke, drive frequency, and piston-displacer dc offset.

  3. Optical Computers and Space Technology

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela

    1995-01-01

    The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide the way out of the extreme limitations that conventional electronic logic circuits impose on the growth in speed and complexity of present-day computations. The new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.

  4. An urban area minority outreach program for K-6 children in space science

    NASA Astrophysics Data System (ADS)

    Morris, P.; Garza, O.; Lindstrom, M.; Allen, J.; Wooten, J.; Sumners, C.; Obot, V.

    The Houston area has minority populations with significant school dropout rates. This is similar to other major cities in the United States and elsewhere in the world where there are significant minority populations from rural areas. The student dropout rates are associated in many instances with the absence of educational support opportunities either from the school and/or from the family. This is exacerbated if the student has poor English language skills. To address this issue, a NASA minority university initiative enabled us to develop a broad-based outreach program that includes younger children and their parents at a primarily Hispanic inner city charter school. The program at the charter school was initiated by teaching computer skills to the older children, who in turn taught parents. The older children were subsequently asked to help teach a computer literacy class for mothers with 4-5 year old children. The computers initially intimidated the mothers as most had limited educational backgrounds and English language skills. To practice their newly acquired computer skills and learn about space science, the mothers and their children were asked to pick a space project and investigate it using their computer skills. The mothers and their children decided to learn about black holes. The project included designing space suits for their children so that they could travel through space and observe black holes from a closer proximity. The children and their mothers learned about computers and how to use them for educational purposes. In addition, they learned about black holes and the importance of space suits in protecting astronauts as they investigated space. The parents are proud of their children and their achievements. By including the parents in the program, they have a greater understanding of the importance of their children staying in school and the opportunities for careers in space science and technology. For more information on our overall

  5. Nonunitary quantum computation in the ground space of local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Usher, Naïri; Hoban, Matty J.; Browne, Dan E.

    2017-09-01

    A central result in the study of quantum Hamiltonian complexity is that the k -local Hamiltonian problem is quantum-Merlin-Arthur-complete. In that problem, we must decide if the lowest eigenvalue of a Hamiltonian is bounded below some value, or above another, promised one of these is true. Given the ground state of the Hamiltonian, a quantum computer can determine this question, even if the ground state itself may not be efficiently quantum preparable. Kitaev's proof of QMA-completeness encodes a unitary quantum circuit in QMA into the ground space of a Hamiltonian. However, we now have quantum computing models based on measurement instead of unitary evolution; furthermore, we can use postselected measurement as an additional computational tool. In this work, we generalize Kitaev's construction to allow for nonunitary evolution including postselection. Furthermore, we consider a type of postselection under which the construction is consistent, which we call tame postselection. We consider the computational complexity consequences of this construction and then consider how the probability of an event upon which we are postselecting affects the gap between the ground-state energy and the energy of the first excited state of its corresponding Hamiltonian. We provide numerical evidence that the two are not immediately related by giving a family of circuits where the probability of an event upon which we postselect is exponentially small, but the gap in the energy levels of the Hamiltonian decreases as a polynomial.

  6. Computational methods for diffusion-influenced biochemical reactions.

    PubMed

    Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G

    2007-08-01

    We compare stochastic computational methods accounting for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of reactants. Using numerical simulations of reversible binding of a pair of molecules we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for the reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found at the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
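
    As a reference point for the discussion of inter-arrival times, the sketch below implements a well-mixed Gillespie simulation of reversible binding A + B ⇌ C, where waiting times between reaction events are sampled from exponential distributions. The spatial resolution (RDME subvolumes or Brownian dynamics) that drives the differences discussed in the paper is deliberately omitted, and the rate constants are arbitrary.

```python
import numpy as np

def gillespie_reversible_binding(na, nb, nc, kon, koff, t_end, seed=0):
    """Well-mixed Gillespie SSA for A + B <-> C (non-spatial sketch)."""
    rng = np.random.default_rng(seed)
    t, times, counts = 0.0, [0.0], [nc]
    while t < t_end:
        a1 = kon * na * nb            # propensity of A + B -> C
        a2 = koff * nc                # propensity of C -> A + B
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)          # exponential inter-arrival time
        if rng.random() < a1 / a0:
            na, nb, nc = na - 1, nb - 1, nc + 1
        else:
            na, nb, nc = na + 1, nb + 1, nc - 1
        times.append(t)
        counts.append(nc)
    return np.array(times), np.array(counts)

t, c = gillespie_reversible_binding(na=50, nb=50, nc=0,
                                    kon=0.002, koff=0.1, t_end=200.0)
print("mean / variance of product count:", c.mean(), c.var())
```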

  7. Space and Ground Trades for Human Exploration and Wearable Computing

    NASA Technical Reports Server (NTRS)

    Lupisella, Mark; Donohue, John; Mandl, Dan; Ly, Vuong; Graves, Corey; Heimerdinger, Dan; Studor, George; Saiz, John; DeLaune, Paul; Clancey, William

    2006-01-01

    Human exploration of the Moon and Mars will present unique trade study challenges as ground system elements shift to planetary bodies and perhaps eventually to the bodies of human explorers in the form of wearable computing technologies. This presentation will highlight some of the key space and ground trade issues that will face the Exploration Initiative as NASA begins designing systems for the sustained human exploration of the Moon and Mars, with an emphasis on wearable computing. We will present some preliminary test results and scenarios that demonstrate how wearable computing might affect the trade space noted below. We will first present some background on wearable computing and its utility to NASA's Exploration Initiative. Next, we will discuss three broad architectural themes, some key ground and space trade issues within those themes and how they relate to wearable computing. Lastly, we will present some preliminary test results and suggest guidance for proceeding in the assessment and creation of a value-added role for wearable computing in the Exploration Initiative. The three broad ground-space architectural trade themes we will discuss are: 1. Functional Shift and Distribution: To what extent, if any, should traditional ground system functionality be shifted to, and distributed among, the Earth, Moon/Mars, and the human explorer? 2. Situational Awareness and Autonomy: How much situational awareness (e.g. environmental conditions, biometrics, etc.) and autonomy is required and desired, and where should these capabilities reside? 3. Functional Redundancy: What functions (e.g. command, control, analysis) should exist simultaneously on Earth, the Moon/Mars, and the human explorer? These three themes can serve as the axes of a three-dimensional trade space, within which architectural solutions reside. We will show how wearable computers can fit into this trade space and what the possible implications could be for the rest of the ground and space

  8. A Phase-Space Approach to Collisionless Stellar Systems Using a Particle Method

    NASA Astrophysics Data System (ADS)

    Hozumi, Shunsuke

    1997-10-01

    A particle method for reproducing the phase space of collisionless stellar systems is described. The key idea originates in Liouville's theorem, which states that the distribution function (DF) at time t can be derived from tracing necessary orbits back to t = 0. To make this procedure feasible, a self-consistent field (SCF) method for solving Poisson's equation is adopted to compute the orbits of arbitrary stars. As an example, for the violent relaxation of a uniform density sphere, the phase-space evolution generated by the current method is compared to that obtained with a phase-space method for integrating the collisionless Boltzmann equation, on the assumption of spherical symmetry. Excellent agreement is found between the two methods if an optimal basis set for the SCF technique is chosen. Since this reproduction method requires only the functional form of initial DFs and does not require any assumptions to be made about the symmetry of the system, success in reproducing the phase-space evolution implies that there would be no need of directly solving the collisionless Boltzmann equation in order to access phase space even for systems without any special symmetries. The effects of basis sets used in SCF simulations on the reproduced phase space are also discussed.

  9. Testing of 100 mK bolometers for space applications

    NASA Technical Reports Server (NTRS)

    Murray, A. G.; Ade, P. A. R.; Bhatia, R. S.; Griffin, M. J.; Maffei, B.; Nartallo, R.; Beeman, J. W.; Bock, J.; Lange, A.; DelCastillo, H.

    1996-01-01

    Electrical and optical performance data are presented for a prototype 100 mK spider-web bolometer operating under very low photon backgrounds. These data are compared with the bolometer theory and are used to estimate the expected sensitivity of such a detector used for low background space astronomy. The results demonstrate that the sensitivity and speed of response requirements of the bolometer instruments proposed for these missions can be met by 100 mK spider-web bolometers using neutron transmutation doped germanium as the temperature sensitive element.

  10. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    PubMed

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
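
    The stated equivalence of the two distance measures can be checked in a few lines: for vectors z-scored across their dimensions, the squared Euclidean distance equals 2d(1 − r), so both measures rank candidate centroids identically and lead to the same assignments from the same initial centroids. The pruning heuristic itself (BoostKCP) is not reproduced here; the vectors are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 50
x, y = rng.standard_normal(d), rng.standard_normal(d)

def zscore(v):
    return (v - v.mean()) / v.std()          # population standard deviation

r = np.corrcoef(x, y)[0, 1]
pearson_dist = 1.0 - r
euclid_sq = np.sum((zscore(x) - zscore(y)) ** 2)

# ||z(x) - z(y)||^2 = 2 d (1 - r), so the two distances are monotonically
# related and induce the same k-means assignments for identical initial centroids.
print(pearson_dist, euclid_sq / (2 * d))     # the two numbers agree
```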

  11. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    PubMed

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T(2) distribution mapping, but a simple extension to 2D is challenging since the time increase is n fold, where n is the number of pixels in the second dimension. Nevertheless 2D T(2) mapping in fluid saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well defined intensity distributions in k-space that may be efficiently determined by new k-space sampling patterns that are developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T(2) weighted images are fit to extract T(2) distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
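
    A per-pixel sketch of the T2-distribution extraction step described above: the multi-echo signal is modeled as a sum of exponentials over a grid of candidate T2 values and inverted with regularized non-negative least squares, a common numerical stand-in for the inverse Laplace transform. The echo times, T2 grid, regularization weight, and two-component spectrum are synthetic, not the rock-core data from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# One pixel's T2-weighted signal at a set of echo times (synthetic example).
te = np.linspace(2e-3, 400e-3, 32)                         # echo times (s)
t2_grid = np.logspace(np.log10(1e-3), np.log10(1.0), 64)   # candidate T2 values (s)
A = np.exp(-te[:, None] / t2_grid[None, :])                # Laplace-type kernel

true_spectrum = np.zeros(t2_grid.size)
true_spectrum[20], true_spectrum[45] = 1.0, 0.5            # two-component pixel
signal = A @ true_spectrum + 0.01 * np.random.default_rng(4).standard_normal(te.size)

# Regularized non-negative least squares approximating the inverse Laplace step.
lam = 0.05
A_reg = np.vstack([A, lam * np.eye(t2_grid.size)])
b_reg = np.concatenate([signal, np.zeros(t2_grid.size)])
spectrum, _ = nnls(A_reg, b_reg)
print("recovered T2 peaks (ms):", 1e3 * t2_grid[spectrum > 0.1 * spectrum.max()])
```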

  12. Twenty Years of Rad-Hard K14 SPAD in Space Projects

    PubMed Central

    Michálek, Vojtěch; Procházka, Ivan; Blažej, Josef

    2015-01-01

    During the last two decades, several photon counting detectors have been developed in our laboratory. One of the most promising detectors from our group, the silicon K14 Single Photon Avalanche Diode (SPAD), is presented here along with its valuable features and space applications. Depending on the control electronics, it can be operated in both gated and non-gated mode. Although it was designed for photon counting detection, it can be employed for multiphoton detection as well. With respect to the control electronics employed, the timing jitter can be as low as 20 ps RMS. Detection efficiency is about 40 % in the range of 500 nm to 800 nm. The detector, including its gating and quenching circuitry, has outstanding timing stability. Due to its radiation resistivity, the diode withstands a 100 krad gamma-ray dose without parameter degradation. Single photon detectors based on the K14 SPAD were used for a planetary altimeter and an atmospheric lidar in the MARS92/96 and Mars Surveyor ’98 space projects, respectively. Recent space applications of the K14 SPAD comprise LIDAR and mainly time transfer between ground stations and artificial satellites. These include the Laser Time Transfer, Time Transfer by Laser Link, and European Laser Timing projects. PMID:26213945

  13. Computational Exploration of a Protein Receptor Binding Space with Student Proposed Peptide Ligands

    ERIC Educational Resources Information Center

    King, Matthew D.; Phillips, Paul; Turner, Matthew W.; Katz, Michael; Lew, Sarah; Bradburn, Sarah; Andersen, Tim; McDougal, Owen M.

    2016-01-01

    Computational molecular docking is a fast and effective "in silico" method for the analysis of binding between a protein receptor model and a ligand. The visualization and manipulation of protein to ligand binding in three-dimensional space represents a powerful tool in the biochemistry curriculum to enhance student learning. The…

  14. H-EtICT-K8 (Health Education through ICT for K-8): Computers and Your Health

    ERIC Educational Resources Information Center

    Coklar, A. Naci; Sendag, Serkan; Eristi, S. Duygu

    2007-01-01

    This paper concentrates on software prepared as part of a health education series for K-8 students in Turkey. Bearing in mind that a healthy mind rests in a healthy body, the researchers prepared a series of software on different aspects of health. This specific software aims to equip K-8 students with healthy use of computers in everyday life.…

  15. Application of computational aerodynamics methods to the design and analysis of transport aircraft

    NASA Technical Reports Server (NTRS)

    Da Costa, A. L.

    1978-01-01

    The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft are established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method are demonstrated by comparison of computed results to experimental data.

  16. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report results of experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  17. Low-cost space-varying FIR filter architecture for computational imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.

    2010-01-01

    Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
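
    A compact sketch of one way to exploit the rotational symmetry described above: pixels are binned by radius from the optical axis, and each radial zone is sharpened with its own small FIR kernel so that only a few kernels need to be stored. The zone boundaries and the unsharp-mask kernels are illustrative assumptions, not the paper's tuned filter bank or hardware architecture.

```python
import numpy as np
from scipy.ndimage import convolve

def radial_zone_sharpen(img, kernels):
    """Space-varying FIR sharpening with a bank of radially indexed kernels."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    edges = np.linspace(0.0, r.max() + 1e-9, len(kernels) + 1)
    out = np.zeros_like(img, dtype=float)
    for i, ker in enumerate(kernels):
        filtered = convolve(img.astype(float), ker, mode="reflect")
        zone = (r >= edges[i]) & (r < edges[i + 1])
        out[zone] = filtered[zone]          # keep this zone's filter output
    return out

# Progressively stronger unsharp-mask kernels toward the periphery, mimicking
# aberrations that grow with field height (illustrative values only).
def unsharp(amount):
    ker = -amount * np.ones((3, 3)) / 8.0
    ker[1, 1] = 1.0 + amount
    return ker

img = np.random.default_rng(5).random((128, 128))
sharpened = radial_zone_sharpen(img, [unsharp(a) for a in (0.2, 0.6, 1.2)])
print(sharpened.shape)
```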

  18. Methods of space radiation dose analysis with applications to manned space systems

    NASA Technical Reports Server (NTRS)

    Langley, R. W.; Billings, M. P.

    1972-01-01

    The full potential of state-of-the-art space radiation dose analysis for manned missions has not been exploited. Point doses have been overemphasized, and the critical dose to the bone marrow has been only crudely approximated, despite the existence of detailed man models and computer codes for dose integration in complex geometries. The method presented makes it practical to account for the geometrical detail of the astronaut as well as the vehicle. Discussed are the major assumptions involved and the concept of applying the results of detailed proton dose analysis to the real-time interpretation of on-board dosimetric measurements.

  19. An Efficient Numerical Method for Computing Synthetic Seismograms for a Layered Half-space with Sources and Receivers at Close or Same Depths

    NASA Astrophysics Data System (ADS)

    Zhang, H.-m.; Chen, X.-f.; Chang, S.

    It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.

  20. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  1. Stabilization and discontinuity-capturing parameters for space-time flow computations with finite element and isogeometric discretizations

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto

    2018-04-01

    Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but quite often used also for isogeometric discretization. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows, target isogeometric discretization, and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation. The key components of the derivation are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the parameters proposed result in good solution profiles.

  2. Metal artifact correction for x-ray computed tomography using kV and selective MV imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meng, E-mail: mengwu@stanford.edu; Keil, Andreas; Constantin, Dragos

    Purpose: The overall goal of this work is to improve the computed tomography (CT) image quality for patients with metal implants or fillings by completing the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. When both of these imaging systems, which are available on current radiotherapy devices, are used, metal streak artifacts are avoided, and the soft-tissue contrast is restored, even for regions in which the kV data cannot contribute any information. Methods: Three image-reconstruction methods, including two filtered back-projection (FBP)-based analytic methods and one iterative method, for combining kV and MV projection data from the two on-board imaging systems of a radiotherapy device are presented in this work. The analytic reconstruction methods modify the MV data based on the information in the projection or image domains and then patch the data onto the kV projections for a FBP reconstruction. In the iterative reconstruction, the authors used dual-energy (DE) penalized weighted least-squares (PWLS) methods to simultaneously combine the kV/MV data and perform the reconstruction. Results: The authors compared kV/MV reconstructions to kV-only reconstructions using a dental phantom with fillings and a hip-implant numerical phantom. Simulation results indicated that dual-energy sinogram patch FBP and the modified dual-energy PWLS method can successfully suppress metal streak artifacts and restore information lost due to photon starvation in the kV projections. The root-mean-square errors of soft-tissue patterns obtained using combined kV/MV data are 10–15 Hounsfield units smaller than those of the kV-only images, and the structural similarity index measure also indicates a 5%–10% improvement in the image quality. The added dose from the MV scan is much less than the dose from the kV scan if a high efficiency MV detector is assumed. Conclusions: The authors have shown

  3. Nontraditional method for determining unperturbed orbits of unknown space objects using incomplete optical observational data

    NASA Astrophysics Data System (ADS)

    Perov, N. I.

    1985-02-01

    A physical-geometrical method for computing the orbits of earth satellites on the basis of an inadequate number of angular observations (N < 3) was developed. Specifically, a new method has been developed for calculating the elements of Keplerian orbits of unidentified artificial satellites using two angular observations (α_k, δ_k), k = 1, 2. The first section gives procedures for determining the topocentric distance to the AES on the basis of one optical observation. This is followed by a description of a very simple method for determining unperturbed orbits using two satellite position vectors and a time interval, which is applicable even in the case of antiparallel AES position vectors; this method is designated the R₂ iterations method.

  4. Hamiltonian lattice field theory: Computer calculations using variational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zako, Robert L.

    1991-12-03

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
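
    The Rayleigh-Ritz step at the core of the method can be illustrated on a much simpler system: project the Hamiltonian onto a truncated, non-orthogonal basis and solve the generalized eigenproblem H c = E S c for variational energy estimates. The one-dimensional anharmonic oscillator and Gaussian basis below are stand-ins for the lattice field theory and its Fock basis; they are not the construction from this work.

```python
import numpy as np
from scipy.linalg import eigh

# 1-D anharmonic oscillator on a fine grid (placeholder for the lattice theory).
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
V = 0.5 * x ** 2 + 0.1 * x ** 4                        # potential energy

# Non-orthogonal Gaussian basis functions (placeholder for the Fock basis).
centers = np.linspace(-3.0, 3.0, 12)
basis = np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2)

# Apply the Hamiltonian to each basis function: -1/2 d^2/dx^2 + V(x).
d2 = (np.roll(basis, -1, axis=0) - 2 * basis + np.roll(basis, 1, axis=0)) / dx ** 2
Hphi = -0.5 * d2 + V[:, None] * basis

S = basis.T @ basis * dx                               # overlap matrix
H = basis.T @ Hphi * dx                                # Hamiltonian matrix

E, _ = eigh(H, S)                                      # generalized eigenproblem
print("lowest variational energies:", E[:3])
```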

  5. Evolving technologies for Space Station Freedom computer-based workstations

    NASA Technical Reports Server (NTRS)

    Jensen, Dean G.; Rudisill, Marianne

    1990-01-01

    Viewgraphs on evolving technologies for Space Station Freedom computer-based workstations are presented. The human-computer software environment modules are described. The following topics are addressed: command and control workstation concept; cupola workstation concept; Japanese experiment module RMS workstation concept; remote devices controlled from workstations; orbital maneuvering vehicle free flyer; remote manipulator system; Japanese experiment module exposed facility; Japanese experiment module small fine arm; flight telerobotic servicer; human-computer interaction; and workstation/robotics related activities.

  6. Space shuttle main engine computed tomography applications

    NASA Technical Reports Server (NTRS)

    Sporny, Richard F.

    1990-01-01

    For the past two years the potential applications of computed tomography to the fabrication and overhaul of the Space Shuttle Main Engine were evaluated. Application tests were performed at various government and manufacturer facilities with equipment produced by four different manufacturers. The hardware scanned varied in size and complexity from a small temperature sensor and turbine blades to an assembled heat exchanger and main injector oxidizer inlet manifold. The evaluation of capabilities included the ability to identify and locate internal flaws, measure the depth of surface cracks, measure wall thickness, compare manifold design contours to actual part contours, perform automatic dimensional inspections, generate 3D computer models of actual parts, and image the relationship of the details in a complex assembly. The capabilities evaluated, with the exception of measuring the depth of surface flaws, demonstrated the existing and potential ability to perform many beneficial Space Shuttle Main Engine applications.

  7. Applicability of 100kWe-class of space reactor power systems to NASA manned space station missions

    NASA Technical Reports Server (NTRS)

    Silverman, S. W.; Willenberg, H. J.; Robertson, C.

    1985-01-01

    An assessment is made of a manned space station operating with sufficiently high power demands to require a multihundred kilowatt range electrical power system. The nuclear reactor is a competitor for supplying this power level. Load levels were selected at 150 kWe and 300 kWe. Interactions among the reactor electrical power system, the manned space station, the space transportation system, and the mission were evaluated. The reactor shield and the conversion equipment were assumed to be in different positions with respect to the station: on board, tethered, and on a free flyer platform. Mission analyses showed that the free flyer concept resulted in unacceptable costs and technical problems. The tethered reactor, providing power to an electrolyzer for regenerative fuel cells on the space station, results in a minimum weight shield and can be designed to release the reactor power section so that it moves to a high altitude orbit where the decay period is at least 300 years. Placing the reactor on the station, on a structural boom, is an attractive design, but heavier than the long tethered reactor design because of the shield weight for manned activity near the reactor.

  8. Design of a k-space spectrometer for ultra-broad waveband spectral domain optical coherence tomography

    PubMed Central

    Lan, Gongpu; Li, Guoqiang

    2017-01-01

    Nonlinear sampling of the interferograms in wavenumber (k) space degrades the depth-dependent signal sensitivity in conventional spectral domain optical coherence tomography (SD-OCT). Here we report a linear-in-wavenumber (k-space) spectrometer for an ultra-broad bandwidth (760 nm–920 nm) SD-OCT, whereby a combination of a grating and a prism serves as the dispersion group. Quantitative ray tracing is applied to optimize the linearity and minimize the optical path differences for the dispersed wavenumbers. Zemax simulation is used to fit the point spread functions to the rectangular shape of the pixels of the line-scan camera and to improve the pixel sampling rates. An experimental SD-OCT is built to test and compare the performance of the k-space spectrometer with that of a conventional one. Design results demonstrate that this k-space spectrometer can reduce the nonlinearity error in k-space from 14.86% to 0.47% (by approximately 30 times) compared to the conventional spectrometer. The 95% confidence interval for RMS diameters is 5.48 ± 1.76 μm—significantly smaller than both the pixel size (14 μm × 28 μm) and the Airy disc (25.82 μm in diameter, calculated at the wavenumber of 7.548 μm−1). Test results demonstrate that the fall-off curve from the k-space spectrometer exhibits much less decay (maximum as −5.20 dB) than the conventional spectrometer (maximum as –16.84 dB) over the whole imaging depth (2.2 mm). PMID:28266502

  9. Design of a k-space spectrometer for ultra-broad waveband spectral domain optical coherence tomography.

    PubMed

    Lan, Gongpu; Li, Guoqiang

    2017-03-07

    Nonlinear sampling of the interferograms in wavenumber (k) space degrades the depth-dependent signal sensitivity in conventional spectral domain optical coherence tomography (SD-OCT). Here we report a linear-in-wavenumber (k-space) spectrometer for an ultra-broad bandwidth (760 nm-920 nm) SD-OCT, whereby a combination of a grating and a prism serves as the dispersion group. Quantitative ray tracing is applied to optimize the linearity and minimize the optical path differences for the dispersed wavenumbers. Zemax simulation is used to fit the point spread functions to the rectangular shape of the pixels of the line-scan camera and to improve the pixel sampling rates. An experimental SD-OCT is built to test and compare the performance of the k-space spectrometer with that of a conventional one. Design results demonstrate that this k-space spectrometer can reduce the nonlinearity error in k-space from 14.86% to 0.47% (by approximately 30 times) compared to the conventional spectrometer. The 95% confidence interval for RMS diameters is 5.48 ± 1.76 μm, significantly smaller than both the pixel size (14 μm × 28 μm) and the Airy disc (25.82 μm in diameter, calculated at the wavenumber of 7.548 μm⁻¹). Test results demonstrate that the fall-off curve from the k-space spectrometer exhibits much less decay (maximum as -5.20 dB) than the conventional spectrometer (maximum as -16.84 dB) over the whole imaging depth (2.2 mm).

  10. Design of a k-space spectrometer for ultra-broad waveband spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lan, Gongpu; Li, Guoqiang

    2017-03-01

    Nonlinear sampling of the interferograms in wavenumber (k) space degrades the depth-dependent signal sensitivity in conventional spectral domain optical coherence tomography (SD-OCT). Here we report a linear-in-wavenumber (k-space) spectrometer for an ultra-broad bandwidth (760 nm-920 nm) SD-OCT, whereby a combination of a grating and a prism serves as the dispersion group. Quantitative ray tracing is applied to optimize the linearity and minimize the optical path differences for the dispersed wavenumbers. Zemax simulation is used to fit the point spread functions to the rectangular shape of the pixels of the line-scan camera and to improve the pixel sampling rates. An experimental SD-OCT is built to test and compare the performance of the k-space spectrometer with that of a conventional one. Design results demonstrate that this k-space spectrometer can reduce the nonlinearity error in k-space from 14.86% to 0.47% (by approximately 30 times) compared to the conventional spectrometer. The 95% confidence interval for RMS diameters is 5.48 ± 1.76 μm—significantly smaller than both the pixel size (14 μm × 28 μm) and the Airy disc (25.82 μm in diameter, calculated at the wavenumber of 7.548 μm-1). Test results demonstrate that the fall-off curve from the k-space spectrometer exhibits much less decay (maximum as -5.20 dB) than the conventional spectrometer (maximum as -16.84 dB) over the whole imaging depth (2.2 mm).

  11. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
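
    Each ADI sweep in the record above reduces the diffusion equation to independent tridiagonal systems. As a point of reference, the serial Gaussian elimination used on the FLEX/32 and CRAY/2 is the Thomas algorithm; a minimal Python sketch follows (the original machine-specific implementations are not reproduced here).

        import numpy as np

        def thomas_solve(a, b, c, d):
            """Solve a tridiagonal system with sub-diagonal a, diagonal b,
            super-diagonal c and right-hand side d (all length n; a[0] and c[-1] unused)."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0] = c[0] / b[0]
            dp[0] = d[0] / b[0]
            for i in range(1, n):                      # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):             # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x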

  12. Computer image generation: Reconfigurability as a strategy in high fidelity space applications

    NASA Technical Reports Server (NTRS)

    Bartholomew, Michael J.

    1989-01-01

    The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at Johnson Space Center, Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.

  13. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Doyle, Richard; Bergman, Larry; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and mission. Onboard computing can be aptly viewed as a "technology multiplier" in that advances provide direct dramatic improvements in flight functions and capabilities across the NASA mission classes, and enable new flight capabilities and mission scenarios, increasing science and exploration return. Space-qualified computing technology, however, has not advanced significantly in well over ten years and the current state of the practice fails to meet the near- to mid-term needs of NASA missions. Recognizing this gap, the NASA Game Changing Development Program (GCDP), under the auspices of the NASA Space Technology Mission Directorate, commissioned a study on space-based computing needs, looking out 15-20 years. The study resulted in a recommendation to pursue high-performance spaceflight computing (HPSC) for next-generation missions, and a decision to partner with the Air Force Research Lab (AFRL) in this development.

  14. Metal artifact correction for x-ray computed tomography using kV and selective MV imaging.

    PubMed

    Wu, Meng; Keil, Andreas; Constantin, Dragos; Star-Lack, Josh; Zhu, Lei; Fahrig, Rebecca

    2014-12-01

    The overall goal of this work is to improve the computed tomography (CT) image quality for patients with metal implants or fillings by completing the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. When both of these imaging systems, which are available on current radiotherapy devices, are used, metal streak artifacts are avoided, and the soft-tissue contrast is restored, even for regions in which the kV data cannot contribute any information. Three image-reconstruction methods, including two filtered back-projection (FBP)-based analytic methods and one iterative method, for combining kV and MV projection data from the two on-board imaging systems of a radiotherapy device are presented in this work. The analytic reconstruction methods modify the MV data based on the information in the projection or image domains and then patch the data onto the kV projections for a FBP reconstruction. In the iterative reconstruction, the authors used dual-energy (DE) penalized weighted least-squares (PWLS) methods to simultaneously combine the kV/MV data and perform the reconstruction. The authors compared kV/MV reconstructions to kV-only reconstructions using a dental phantom with fillings and a hip-implant numerical phantom. Simulation results indicated that dual-energy sinogram patch FBP and the modified dual-energy PWLS method can successfully suppress metal streak artifacts and restore information lost due to photon starvation in the kV projections. The root-mean-square errors of soft-tissue patterns obtained using combined kV/MV data are 10-15 Hounsfield units smaller than those of the kV-only images, and the structural similarity index measure also indicates a 5%-10% improvement in the image quality. The added dose from the MV scan is much less than the dose from the kV scan if a high efficiency MV detector is assumed. The authors have shown that it is possible to improve the image quality of

  15. Benefits of 20 kHz PMAD in a nuclear space station

    NASA Technical Reports Server (NTRS)

    Sundberg, Gale R.

    1987-01-01

    Compared to existing systems, high frequency ac power provides higher efficiency, lower cost, and improved safety benefits. The 20 kHz power system has exceptional flexibility, is inherently user friendly, and is compatible with all types of energy sources: photovoltaic, solar dynamic, rotating machines, and nuclear. A 25 kW, 20 kHz ac power distribution system testbed was recently (1986) developed. The testbed possesses maximum flexibility, versatility, and transparency to user technology while maintaining high efficiency, low mass, and reduced volume. Several aspects of the 20 kHz power management and distribution (PMAD) system that have particular benefits for a nuclear power Space Station are discussed.

  16. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces.

    PubMed

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2008-07-03

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.
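
    As a rough illustration of the building block discussed in this record, the sketch below performs one relaxed subgradient projection of a point toward a convex set given as a level set {y : f(y) <= 0}. The specific self-adapting control of the relaxation parameters proposed by the authors is not reproduced; the function names and the toy half-space are assumptions for the example.

        import numpy as np

        def subgradient_projection_step(x, f, subgrad, lam=1.0):
            """One relaxed subgradient projection of x toward {y : f(y) <= 0}.
            f: convex function, subgrad: a subgradient of f, lam: relaxation parameter in (0, 2)."""
            fx = f(x)
            if fx <= 0.0:                      # already feasible for this set
                return x
            g = subgrad(x)
            return x - lam * fx / np.dot(g, g) * g

        # Toy use: project toward the half-space {y : <a, y> - b <= 0}.
        a, b = np.array([1.0, 2.0]), 1.0
        x = np.array([3.0, 3.0])
        for _ in range(20):
            x = subgradient_projection_step(x, lambda y: a @ y - b, lambda y: a, lam=1.0)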

  17. Space Science Field Workshops for K-12 Teacher-Scientist Teams

    NASA Technical Reports Server (NTRS)

    Thompson, P. B.; Kiefer, W. S.; Treiman, A. H.; Irving, A. J.; Johnson, K. M.

    2002-01-01

    In collaboration with NASA Space Grant Consortia and other partners, we developed workshops for K-12 teachers that involve intensive, direct interaction with scientists. Field trips allow informal and spontaneous interaction, encouraging active participation. Additional information is contained in the original extended abstract.

  18. Computer optimization of reactor-thermoelectric space power systems

    NASA Technical Reports Server (NTRS)

    Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.

    1973-01-01

    A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.

  19. K2 and K2*: efficient alignment-free sequence similarity measurement based on Kendall statistics.

    PubMed

    Lin, Jie; Adjeroh, Donald A; Jiang, Bing-Hua; Jiang, Yue

    2018-05-15

    Alignment-free sequence comparison methods can compute the pairwise similarity between a huge number of sequences much faster than sequence-alignment based methods. We propose a new non-parametric alignment-free sequence comparison method, called K2, based on the Kendall statistics. Compared to the other state-of-the-art alignment-free comparison methods, K2 demonstrates competitive performance in generating the phylogenetic tree, in evaluating functionally related regulatory sequences, and in computing the edit distance (similarity/dissimilarity) between sequences. Furthermore, the K2 approach is much faster than the other methods. An improved method, K2*, is also proposed, which is able to determine the appropriate algorithmic parameter (length) automatically, without first considering different values. Comparative analysis with the state-of-the-art alignment-free sequence similarity methods demonstrates the superiority of the proposed approaches, especially with increasing sequence length, or increasing dataset sizes. The K2 and K2* approaches are implemented in the R language as a package and are freely available for open access (http://community.wvu.edu/daadjeroh/projects/K2/K2_1.0.tar.gz). yueljiang@163.com. Supplementary data are available at Bioinformatics online.
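
    A minimal sketch in the spirit of this record: compare two sequences alignment-free by taking the Kendall rank correlation of their k-mer count vectors. This illustrates the general idea only and is not the published K2/K2* algorithm (which, per the abstract, is distributed as an R package); the word length k = 3 and the toy sequences are assumptions.

        from itertools import product
        from scipy.stats import kendalltau

        def kmer_counts(seq, k=3, alphabet="ACGT"):
            """Count vector over all k-mers of the alphabet, in a fixed order."""
            kmers = ["".join(p) for p in product(alphabet, repeat=k)]
            counts = {m: 0 for m in kmers}
            for i in range(len(seq) - k + 1):
                word = seq[i:i + k]
                if word in counts:
                    counts[word] += 1
            return [counts[m] for m in kmers]

        def kendall_distance(seq_a, seq_b, k=3):
            """Alignment-free dissimilarity: 1 - Kendall tau of the k-mer count vectors."""
            tau, _ = kendalltau(kmer_counts(seq_a, k), kmer_counts(seq_b, k))
            return 1.0 - tau

        print(kendall_distance("ACGTACGTGGCA", "ACGTACGTGGCT"))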

  20. High-order continuum kinetic method for modeling plasma dynamics in phase space

    DOE PAGES

    Vogman, G. V.; Colella, P.; Shumlak, U.

    2014-12-15

    Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, vx, vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, vr, vz) phase space are presented.

  1. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  2. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    PubMed

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  3. The ideas of K. E. Tsiolkovsky on orbital space stations

    NASA Technical Reports Server (NTRS)

    Kolchenko, I. A.; Strazheva, I. V.

    1977-01-01

    The concepts presented by K. E. Tsiolkovsky concerning the construction of orbital space stations are cited. Tsiolkovsky, a Russian scientist and founder of astronautics, substantiated these ideas at the end of the 19th and beginning of the 20th century. He considered settlements outside the Earth feasible using solar energy. The substance of numerous asteroids would be used as construction materials for space settlements and rockets. Tsiolkovsky's extraordinary farsightedness becomes evident when his projects are compared with modern orbital stations.

  4. Computer-assisted bladder cancer grading: α-shapes for color space decomposition

    NASA Astrophysics Data System (ADS)

    Niazi, M. K. K.; Parwani, Anil V.; Gurcan, Metin N.

    2016-03-01

    According to American Cancer Society, around 74,000 new cases of bladder cancer are expected during 2015 in the US. To facilitate the bladder cancer diagnosis, we present an automatic method to differentiate carcinoma in situ (CIS) from normal/reactive cases that will work on hematoxylin and eosin (H and E) stained images of bladder. The method automatically determines the color deconvolution matrix by utilizing the α-shapes of the color distribution in the RGB color space. Then, variations in the boundary of transitional epithelium are quantified, and sizes of nuclei in the transitional epithelium are measured. We also approximate the "nuclear to cytoplasmic ratio" by computing the ratio of the average shortest distance between transitional epithelium and nuclei to average nuclei size. Nuclei homogeneity is measured by computing the kurtosis of the nuclei size histogram. The results show that 30 out of 34 (88.2%) images were correctly classified by the proposed method, indicating that these novel features are viable markers to differentiate CIS from normal/reactive bladder.

  5. Space-Time Conservation Element and Solution Element Method Being Developed

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Jorgenson, Philip C. E.; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Sheng-Tao

    1999-01-01

    The engineering research and design requirements of today pose great computer-simulation challenges to engineers and scientists who are called on to analyze phenomena in continuum mechanics. The future will bring even more daunting challenges, when increasingly complex phenomena must be analyzed with increased accuracy. Traditionally used numerical simulation methods have evolved to their present state by repeated incremental extensions to broaden their scope. They are reaching the limits of their applicability and will need to be radically revised, at the very least, to meet future simulation challenges. At the NASA Lewis Research Center, researchers have been developing a new numerical framework for solving conservation laws in continuum mechanics, namely, the Space-Time Conservation Element and Solution Element Method, or the CE/SE method. This method has been built from fundamentals and is not a modification of any previously existing method. It has been designed with generality, simplicity, robustness, and accuracy as cornerstones. The CE/SE method has thus far been applied in the fields of computational fluid dynamics, computational aeroacoustics, and computational electromagnetics. Computer programs based on the CE/SE method have been developed for calculating flows in one, two, and three spatial dimensions. Results have been obtained for numerous problems and phenomena, including various shock-tube problems, ZND detonation waves, an implosion and explosion problem, shocks over a forward-facing step, a blast wave discharging from a nozzle, various acoustic waves, and shock/acoustic-wave interactions. The method can clearly resolve shock/acoustic-wave interactions, wherein the acoustic wave and the shock can differ in magnitude by up to six orders. In two-dimensional flows, the reflected shock is as crisp as the leading shock. CE/SE schemes are currently being used for advanced applications to jet and fan noise prediction and to chemically

  6. Soft Computing Methods for Disulfide Connectivity Prediction.

    PubMed

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  7. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in its current designs could be better understood. However, they are difficult to instrument and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

  8. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556

  9. Four dimensional magnetic resonance imaging with retrospective k-space reordering: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilin; Yin, Fang-Fang; Cai, Jing, E-mail: jing.cai@duke.edu

    Purpose: Current four dimensional magnetic resonance imaging (4D-MRI) techniques lack sufficient temporal/spatial resolution and consistent tumor contrast. To overcome these limitations, this study presents the development and initial evaluation of a new strategy for 4D-MRI which is based on retrospective k-space reordering. Methods: We simulated a k-space reordered 4D-MRI on a 4D digital extended cardiac-torso (XCAT) human phantom. A 2D echo planar imaging MRI sequence [frame rate (F) = 0.448 Hz; image resolution (R) = 256 × 256; number of k-space segments (N_KS) = 4] with sequential image acquisition mode was assumed for the simulation. Image quality of the simulated "4D-MRI" acquired from the XCAT phantom was qualitatively evaluated, and tumor motion trajectories were compared to input signals. In particular, mean absolute amplitude differences (D) and cross correlation coefficients (CC) were calculated. Furthermore, to evaluate the data sufficient condition for the new 4D-MRI technique, a comprehensive simulation study was performed using 30 cancer patients' respiratory profiles to study the relationships between data completeness (C_p) and a number of impacting factors: the number of repeated scans (N_R), number of slices (N_S), number of respiratory phase bins (N_P), N_KS, F, R, and initial respiratory phase at image acquisition (P_0). As a proof-of-concept, we implemented the proposed k-space reordering 4D-MRI technique on a T2-weighted fast spin echo MR sequence and tested it on a healthy volunteer. Results: The simulated 4D-MRI acquired from the XCAT phantom matched closely to the original XCAT images. Tumor motion trajectories measured from the simulated 4D-MRI matched well with input signals (D = 0.83 and 0.83 mm, and CC = 0.998 and 0.992 in superior-inferior and anterior-posterior directions, respectively). The relationship between C_p and N_R was found best represented by an exponential
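
    The two agreement metrics named in this record, the mean absolute amplitude difference (D) and the cross correlation coefficient (CC), can be sketched as below for a measured versus a reference motion trajectory. The Pearson form of CC and the toy breathing trace are assumptions made for illustration, not the study's data.

        import numpy as np

        def trajectory_agreement(measured, reference):
            """Mean absolute amplitude difference D (same units as input) and Pearson
            cross-correlation coefficient CC between two equally sampled trajectories."""
            measured = np.asarray(measured, dtype=float)
            reference = np.asarray(reference, dtype=float)
            d = np.mean(np.abs(measured - reference))
            cc = np.corrcoef(measured, reference)[0, 1]
            return d, cc

        # Toy example: a sinusoidal reference breathing trace vs. a noisy measurement (mm).
        t = np.linspace(0.0, 20.0, 200)
        reference = 10.0 * np.sin(2.0 * np.pi * t / 4.0)
        measured = reference + np.random.default_rng(1).normal(scale=0.8, size=t.size)
        print(trajectory_agreement(measured, reference))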

  10. Study on identifying deciduous forest by the method of feature space transformation

    NASA Astrophysics Data System (ADS)

    Zhang, Xuexia; Wu, Pengfei

    2009-10-01

    Thematic information extraction from remotely sensed data remains one of the difficult problems facing remote sensing science, and many remote sensing scientists are devoted to research in this domain. The methods of thematic information extraction fall into two kinds, visual interpretation and computer interpretation, and their development is moving toward intelligent and comprehensively modular approaches. This paper develops an intelligent feature space transformation method for extracting deciduous forest thematic information in the Changping district of Beijing. The Chinese-Brazil resources satellite images received in 2005 are used to extract the deciduous forest coverage area by the feature space transformation method and a linear spectral decomposition method, and the remote sensing result is similar to the woodland resource census data published by the Chinese forestry bureau in 2004.

  11. The Information Science Experiment System - The computer for science experiments in space

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.; Husson, Charles

    1989-01-01

    The concept of the Information Science Experiment System (ISES), potential experiments, and system requirements are reviewed. The ISES is conceived as a computer resource in space whose aim is to assist computer, earth, and space science experiments, to develop and demonstrate new information processing concepts, and to provide an experiment base for developing new information technology for use in space systems. The discussion covers system hardware and architecture, operating system software, the user interface, and the ground communication link.

  12. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related with the Courant numbers, leading to significantly extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information of the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) method, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).

  13. Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.

  14. Spaceborne computer executive routine functional design specification. Volume 2: Computer executive design for space station/base

    NASA Technical Reports Server (NTRS)

    Kennedy, J. R.; Fitzpatrick, W. S.

    1971-01-01

    The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.

  15. Methods of treating complex space vehicle geometry for charged particle radiation transport

    NASA Technical Reports Server (NTRS)

    Hill, C. W.

    1973-01-01

    Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined. Evaluations of geometric capability and speed are provided for these codes. Although no code development work is included, several suggestions for significantly improving complex geometry codes are offered.

  16. Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.

    PubMed

    Heikal, A A; Wachowicz, K; Fallone, B G

    2016-10-01

    To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist sampled (NS) 32 × 32 MRSI, and four-times undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm where the sampling ratios are constrained to adhere to PDFs. Strong visual correlation as well as high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF, and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS MRSI that is both predictable and customizable to the user's needs.
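
    The relationship reported in this record can be summarized as the CS-MRSI MTF tracking the product of the frequency-dependent sampling ratio and the Nyquist-sampled MTF. The short sketch below only illustrates forming that product; the 1D profiles are hypothetical placeholders, not data from the study.

        import numpy as np

        # Hypothetical 1D profiles over spatial-frequency bins of a 32 x 32 MRSI grid.
        n_bins = 16
        freq = np.arange(n_bins)
        mtf_nyquist = np.exp(-freq / 10.0)                              # stand-in Nyquist-sampled MTF
        sampling_ratio = np.clip(1.2 * np.exp(-freq / 6.0), 0.0, 1.0)   # fraction of k-space kept per bin

        # Relationship suggested by the study: MTF_CS(freq) ~ sampling_ratio(freq) * MTF_NS(freq).
        mtf_cs_predicted = sampling_ratio * mtf_nyquist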

  17. A K-6 Computational Thinking Curriculum Framework: Implications for Teacher Knowledge

    ERIC Educational Resources Information Center

    Angeli, Charoula; Voogt, Joke; Fluck, Andrew; Webb, Mary; Cox, Margaret; Malyn-Smith, Joyce; Zagami, Jason

    2016-01-01

    Adding computer science as a separate school subject to the core K-6 curriculum is a complex issue with educational challenges. The authors herein address two of these challenges: (1) the design of the curriculum based on a generic computational thinking framework, and (2) the knowledge teachers need to teach the curriculum. The first issue is…

  18. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael

    2018-06-01

    The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training are rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). Predictive Performance Equation combines elements of earlier models of learning and memory including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We implemented two other previously published computational models of the spacing effect and compared them to PPE using the theoretic and applied criteria as guides. Copyright © 2018 Cognitive Science Society, Inc.

  19. Design of a Mechanical NaK Pump for Fission Space Power

    NASA Technical Reports Server (NTRS)

    Mireles, Omar R.; Bradley, David E.; Godfroy, Thomas

    2011-01-01

    Alkali liquid metal cooled fission reactor concepts are under development for spaceflight power requirements. One such concept utilizes a sodium-potassium eutectic (NaK) as the primary loop working fluid, which has specific pumping requirements. Traditionally, electromagnetic linear induction pumps have been used to provide the required flow and pressure head conditions for NaK systems but they can be limited in performance, efficiency, and number of available vendors. The objective of the project was to develop a mechanical NaK centrifugal pump that takes advantage of technology advances not available in previous liquid metal mechanical pump designs. This paper details the design, build, and performance test of a mechanical NaK pump developed at NASA Marshall Space Flight Center. The pump was designed to meet reactor cooling requirements using commercially available components modified for high temperature NaK service.

  20. Payette uses computer in the aft FD on Space Shuttle Endeavour

    NASA Image and Video Library

    2009-07-28

    S127-E-011052 (28 July 2009) --- Canadian Space Agency astronaut Julie Payette, STS-127 mission specialist, uses a computer on the flight deck of Space Shuttle Endeavour during flight day 14 activities.

  1. Two pass method and radiation interchange processing when applied to thermal-structural analysis of large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.

    1993-01-01

    A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.

  2. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.

  3. Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.

    PubMed

    Ting, H; Yunus, J; Mohd Nordin, M Z

    2005-01-01

    The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most of the auditory discrimination assessments are conducted manually by a Speech-Language Pathologist. These conventional tests are actually general tests of sound discrimination, which do not reflect the client's specific speech sound errors. Thus, we propose a computer-based Malay auditory discrimination test to automate the whole process of assessment as well as to customize the test according to the specific speech error sounds of the client. The ability to discriminate voiced and unvoiced Malay speech sounds was studied for Malay children aged between 7 and 10 years. The study showed no major difficulty for the children in discriminating the Malay speech sounds except differentiating the /g/-/k/ sounds. On average, the 7-year-old children failed to discriminate the /g/-/k/ sounds.

  4. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    PubMed

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, which is the major reason of vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after getting optimized by the Pillar algorithm; pillars are constructed in such a way that they can withstand the pressure. The improved pillar algorithm can optimize the K-means clustering for image segmentation in aspects of precision and computation time. We evaluate the proposed approach for image segmentation by comparing it with K-means and Fuzzy C-means on a medical image. Using this method, identification of dark spots in the retina becomes easier, and the proposed algorithm is applied on diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest a better drug for preventing further retinal damage.
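
    As a baseline illustration for this record, the sketch below clusters fundus-image pixels with plain K-means (via scikit-learn) and takes the brightest cluster as a crude exudate-candidate mask. The pillar-based centroid initialization that the paper adds on top of K-means is not reproduced; the function name and the random stand-in image are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_fundus_pixels(image_rgb, n_clusters=4, seed=0):
            """Cluster pixels of a fundus image by color; the brightest cluster serves as a
            crude stand-in for hard-exudate candidates (plain K-means, not pillar K-means)."""
            h, w, c = image_rgb.shape
            pixels = image_rgb.reshape(-1, c).astype(float)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
            labels = km.labels_.reshape(h, w)
            brightest = np.argmax(km.cluster_centers_.sum(axis=1))
            return labels == brightest          # boolean mask of exudate-like pixels

        # Toy call on a random image in place of a real retinal photograph.
        mask = cluster_fundus_pixels(np.random.default_rng(2).random((64, 64, 3)))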

  5. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...

  6. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...

  7. Computations of Flow over a Hump Model Using Higher Order Method with Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2005-01-01

    Turbulent separated flow over a two-dimensional hump is computed by solving the RANS equations with the k-omega (SST) turbulence model for the baseline, steady suction and oscillatory blowing/suction flow control cases. The flow equations and the turbulent model equations are solved using a fifth-order accurate weighted essentially nonoscillatory (WENO) scheme for space discretization and a third order, total variation diminishing (TVD) Runge-Kutta scheme for time integration. Qualitatively the computed pressure distributions exhibit the same behavior as those observed in the experiments. The computed separation regions are much longer than those observed experimentally. However, the percentage reduction in the separation region in the steady suction case is closer to what was measured in the experiment. The computations did not predict the expected reduction in the separation length in the oscillatory case. The predicted turbulent quantities are two to three times smaller than the measured values pointing towards the deficiencies in the existing turbulent models when they are applied to strong steady/unsteady separated flows.
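
    The time integrator named in this record, a third-order total variation diminishing Runge-Kutta scheme, is commonly written in the Shu-Osher form sketched below with the spatial operator left as a user-supplied residual function. The WENO reconstruction and the k-omega SST model from the study are not reproduced; the linear-advection test problem is an assumption for illustration.

        import numpy as np

        def tvd_rk3_step(u, rhs, dt):
            """One third-order TVD Runge-Kutta step (Shu-Osher form) for du/dt = rhs(u)."""
            u1 = u + dt * rhs(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
            return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

        # Toy use: linear advection du/dt = -a du/dx on a periodic grid with upwind differences.
        n, a = 200, 1.0
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        dx = x[1] - x[0]
        u = np.exp(-200.0 * (x - 0.5) ** 2)
        rhs = lambda v: -a * (v - np.roll(v, 1)) / dx
        for _ in range(100):
            u = tvd_rk3_step(u, rhs, dt=0.4 * dx / a)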

  8. Noninvasive methods in space cardiology.

    PubMed

    Baevsky, R M

    1997-01-01

    The development and application of noninvasive methods in space cardiology are discussed. These methods are used in astronautics both to gain new insights into the impact of weightlessness conditions on the human organism and to help solve problems involved in the medical monitoring of space crew members. The cardiovascular system is a major target for the action of microgravity. Noninvasive methods used to examine the cardiovascular system during space flights over the past 30 years are listed. Special attention is given to methods for studying heart rate variability and contactless recording of physiologic functions during night sleep. Analysis of heart rate variability highlights an important principle of space cardiology: gaining the maximum amount of information while recording as little data as possible. With this method, the degree of strain experienced by the systems of autonomic regulation and the adaptational capabilities of the body can be assessed at various stages of a space flight. Discriminant analysis of heart rate variability data enables the psycho-emotional component of stress to be separated from the component associated with the impact of weightlessness. A major advance in space medicine has been the development of techniques for contactless recording of pulse rates, breathing frequency, myocardial contractility, and motor activity during sleep using a sensor installed on the cosmonaut's sleeping bag. The data obtained can be used to study ultradian rhythms, which reflect the activity of higher autonomic centers. An important role of these centers in mobilizing functional reserves of the body to ensure its relatively stable adaptation to weightless conditions is shown.

  9. The computer-communication link for the innovative use of Space Station

    NASA Technical Reports Server (NTRS)

    Carroll, C. C.

    1984-01-01

    The potential capability of the computer-communications system link of space station is related to innovative utilization for industrial applications. Conceptual computer network architectures are presented and their respective accommodation of innovative industrial projects are discussed. To achieve maximum system availability for industrialization is a possible design goal, which would place the industrial community in an interactive mode with facilities in space. A worthy design goal would be to minimize the computer-communication management function and thereby optimize the system availability for industrial users. Quasi-autonomous modes and subnetworks are key design issues, since they would be the system elements directly affecting the system performance for industrial use.

  10. Many States unprepared for 'Y2K' computer woes.

    PubMed

    1998-12-25

    Most computer systems that States use to process health benefits are not Y2K compliant, which could jeopardize Medicaid application processing. This could cause recipients to lose benefits or experience delays in payments. Only seven states have reported that their systems are ready for the year 2000. Contact information is provided.

  11. Likelihood reconstruction method of real-space density and velocity power spectra from a redshift galaxy survey

    NASA Astrophysics Data System (ADS)

    Tang, Jiayu; Kayo, Issha; Takada, Masahiro

    2011-09-01

    We develop a maximum likelihood based method of reconstructing the band powers of the density and velocity power spectra at each wavenumber bin from the measured clustering features of galaxies in redshift space, including marginalization over uncertainties inherent in the small-scale, non-linear redshift distortion, the Fingers-of-God (FoG) effect. The reconstruction can be done assuming that the density and velocity power spectra depend on the redshift-space power spectrum having different angular modulations of μ with μ²ⁿ (n = 0, 1, 2) and that the model FoG effect is given as a multiplicative function in the redshift-space spectrum. By using N-body simulations and the halo catalogues, we test our method by comparing the reconstructed power spectra with the spectra directly measured from the simulations. For the spectrum of μ⁰, or equivalently the density power spectrum P_δδ(k), our method recovers the amplitudes to an accuracy of a few per cent up to k ≃ 0.3 h Mpc⁻¹ for both dark matter and haloes. For the power spectrum of μ², which is equivalent to the density-velocity power spectrum P_δθ(k) in the linear regime, our method can recover, within the statistical errors, the input power spectrum for dark matter up to k ≃ 0.2 h Mpc⁻¹ and at both redshifts z = 0 and 1, if the adequate FoG model being marginalized over is employed. However, for the halo spectrum that is least affected by the FoG effect, the reconstructed spectrum shows greater amplitudes than the spectrum P_δθ(k) inferred from the simulations over a range of wavenumbers 0.05 ≤ k ≤ 0.3 h Mpc⁻¹. We argue that the disagreement may be ascribed to a non-linearity effect that arises from the cross-bispectra of density and velocity perturbations. Using the perturbation theory and assuming Einstein gravity as in simulations, we derive the non-linear correction term to the redshift-space spectrum, and find that the leading-order correction term is proportional to μ² and increases the μ²-power
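
    For orientation, the μ²ⁿ decomposition referred to in this record is commonly written, in linear theory with a multiplicative Fingers-of-God factor, as the expression below. This is the standard form of such models and is not necessarily the exact parametrization adopted by the authors.

        P_s(k,\mu) \;=\; D_{\mathrm{FoG}}(k\mu\sigma_v)\,
            \left[\, P_{\delta\delta}(k) \;+\; 2\mu^{2} P_{\delta\theta}(k) \;+\; \mu^{4} P_{\theta\theta}(k) \,\right]

    Here P_δδ, P_δθ and P_θθ are the density, density-velocity and velocity power spectra (the μ⁰, μ² and μ⁴ band powers being reconstructed), and D_FoG is the multiplicative small-scale damping function that is marginalized over.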

  12. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.

  13. Lunar and Planetary Science XXXV: Engaging K-12 Educators, Students, and the General Public in Space Science Exploration

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The session "Engaging K-12 Educators, Students, and the General Public in Space Science Exploration" included the following reports:Training Informal Educators Provides Leverage for Space Science Education and Public Outreach; Teacher Leaders in Research Based Science Education: K-12 Teacher Retention, Renewal, and Involvement in Professional Science; Telling the Tale of Two Deserts: Teacher Training and Utilization of a New Standards-based, Bilingual E/PO Product; Lindstrom M. M. Tobola K. W. Stocco K. Henry M. Allen J. S. McReynolds J. Porter T. T. Veile J. Space Rocks Tell Their Secrets: Space Science Applications of Physics and Chemistry for High School and College Classes -- Update; Utilizing Mars Data in Education: Delivering Standards-based Content by Exposing Educators and Students to Authentic Scientific Opportunities and Curriculum; K. E. Little Elementary School and the Young Astronaut Robotics Program; Integrated Solar System Exploration Education and Public Outreach: Theme, Products and Activities; and Online Access to the NEAR Image Collection: A Resource for Educators and Scientists.

  14. System enhancements of Mesoscale Analysis and Space Sensor (MASS) computer system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.

    1985-01-01

    The interactive information processing for the mesoscale analysis and space sensor (MASS) program is reported. The development and implementation of new spaceborne remote sensing technology to observe and measure atmospheric processes is described. The space measurements and conventional observational data are processed together to gain an improved understanding of the mesoscale structure and dynamical evolution of the atmosphere relative to cloud development and precipitation processes. A Research Computer System consisting of three primary computers was developed (HP-1000F, Perkin-Elmer 3250, and Harris/6) which provides a wide range of capabilities for processing and displaying interactively large volumes of remote sensing data. The development of a MASS data base management and analysis system on the HP-1000F computer and extending these capabilities by integration with the Perkin-Elmer and Harris/6 computers using the MSFC's Apple III microcomputer workstations is described. The objectives are: to design hardware enhancements for computer integration and to provide data conversion and transfer between machines.

  15. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    PubMed

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  16. Delamination detection using methods of computational intelligence

    NASA Astrophysics Data System (ADS)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. change in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme while an improved rate of convergence is achieved using a memetic algorithm. However, building of ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.

  17. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed. The scheme is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated. Stability can be improved by adopting the upwinding strategy.

  18. Efficient Computation of Anharmonic Force Constants via q-space, with Application to Graphene

    NASA Astrophysics Data System (ADS)

    Kornbluth, Mordechai; Marianetti, Chris

    We present a new approach for extracting anharmonic force constants from a sparse sampling of the anharmonic dynamical tensor. We calculate the derivative of the energy with respect to q-space displacements (phonons) and strain, which guarantees the absence of supercell image errors. Central finite differences provide a well-converged quadratic error tail for each derivative, separating the contribution of each anharmonic order. These derivatives populate the anharmonic dynamical tensor in a sparse mesh that bounds the Brillouin Zone, which ensures comprehensive sampling of q-space while exploiting small-cell calculations for efficient, high-throughput computation. This produces a well-converged and precisely-defined dataset, suitable for big-data approaches. We transform this sparsely-sampled anharmonic dynamical tensor to real-space anharmonic force constants that obey full space-group symmetries by construction. Machine-learning techniques identify the range of real-space interactions. We show the entire process executed for graphene, up to and including the fifth-order anharmonic force constants. This method successfully calculates strain-based phonon renormalization in graphene, even under large strains, which solves a major shortcoming of previous potentials.
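
    The central-difference extraction described in this record can be illustrated schematically: energies evaluated at symmetric ± displacement amplitudes isolate a given derivative order with a leading truncation error that is quadratic in the step. The Python sketch below shows this for second and third derivatives of a toy one-dimensional energy function; the energy callback and its coefficients are assumptions, not the graphene calculation.

        import numpy as np

        def central_derivative(energy, amplitude, order=3):
            """Estimate the order-th derivative of energy(u) at u = 0 by symmetric
            central differences; the truncation error scales as amplitude**2."""
            u = amplitude
            if order == 2:
                return (energy(u) - 2.0 * energy(0.0) + energy(-u)) / u**2
            if order == 3:
                return (energy(2 * u) - 2.0 * energy(u) + 2.0 * energy(-u) - energy(-2 * u)) / (2.0 * u**3)
            raise ValueError("only 2nd and 3rd derivatives are sketched here")

        # Toy anharmonic energy surface: harmonic term plus a cubic anharmonic term.
        e = lambda u: 0.5 * 3.0 * u**2 + (1.0 / 6.0) * 4.0 * u**3
        print(central_derivative(e, 1e-2, order=2))   # ~3.0 (harmonic force constant)
        print(central_derivative(e, 1e-2, order=3))   # ~4.0 (third-order force constant)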

  19. Environmental Chemistry Methods (ECM) Index - K

    EPA Pesticide Factsheets

    Laboratories use testing methods to identify pesticides in water and soil. Environmental chemistry methods test soil and water samples to determine the fate of pesticides in the environment. Find methods for chemicals with K as the first character.

  20. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    NASA Technical Reports Server (NTRS)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin and Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked against experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with predictions from standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in the gas core reactors is examined for a range of power densities: 100 W/cc, 500 W/cc and 1000 W/cc. The corresponding maximum temperatures are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of the heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high

  1. Standard payload computer for the international space station

    NASA Astrophysics Data System (ADS)

    Knott, Karl; Taylor, Chris; Koenig, Horst; Schlosstein, Uwe

    1999-01-01

    This paper describes the development and application of a Standard PayLoad Computer (SPLC) which is being applied by the majority of ESA payloads accommodated on the International Space Station (ISS). The strategy of adopting a standard computer leads to a radical rethink of the payload data handling procurement process. Traditionally, this has been based on a proprietary development with recurring costs for qualification, spares, expertise and maintenance for each new payload. Implementations have also tended to be unique, with very little opportunity for reuse or utilisation of previous developments. While this may to some extent have been justified for short duration one-off missions, the availability of a standard, long term space infrastructure calls for a quite different approach. To support a large number of concurrent payloads, the ISS implementation relies heavily on standardisation, and this is particularly true in the area of payloads. Physical accommodation, data interfaces, protocols, component quality, operational requirements and maintenance including spares provisioning must all conform to a common set of standards. The data handling system and associated computer used by each payload must also comply with these common requirements, and thus it makes little sense to instigate multiple developments for the same task. The opportunity exists to provide a single computer suitable for all payloads, but with only a one-off development and qualification cost. If this is combined with the benefits of multiple procurement, centralised spares and maintenance, there is potential for great savings to be made by all those concerned in the payload development process. In response to the above drivers, the SPLC is based on the following concepts: • A one-off development and qualification process • A modular computer, configurable according to the payload developer's needs from a list of space-qualified items • An `open system' which may be added to by

  2. Testing the applicability of the k0-NAA method at the MINT's TRIGA MARK II reactor

    NASA Astrophysics Data System (ADS)

    Siong, Wee Boon; Dung, Ho Manh; Wood, Ab. Khalik; Salim, Nazaratul Ashifa Abd.; Elias, Md. Suhaimi

    2006-08-01

    The Analytical Chemistry Laboratory at MINT has been using the NAA technique since the 1980s and is the only laboratory in Malaysia equipped with a research reactor, namely the TRIGA MARK II. Throughout the years the development of the NAA technique has been very encouraging, and it has been made applicable to a wide range of samples. At present, the k0 method has become the preferred standardization method of NAA (k0-NAA) due to its multi-elemental analysis capability without the use of standards. Additionally, the k0 method describes NAA in physically and mathematically understandable definitions and is very suitable for computer evaluation. The k0-NAA method was adopted by MINT in 2003, in collaboration with the Nuclear Research Institute (NRI), Vietnam. The reactor neutron parameters (α and f) for the pneumatic transfer system and for the rotary rack at various locations, as well as the detector efficiencies, were determined. After calibration of the reactor and the detectors, the implemented k0 method was validated by analyzing several certified reference materials (including IAEA Soil 7, NIST 1633a, NIST 1632c, NIST 1646a and IAEA 140/TM). The analysis results of the CRMs showed an average u-score well below the threshold value of 2, with a precision of better than ±10% for most of the elemental concentrations obtained, validating the introduction of the k0-NAA method at MINT.

  3. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †

    PubMed Central

    Murdani, Muhammad Harist; Hong, Bonghee

    2018-01-01

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366

  4. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.

    PubMed

    Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee

    2018-03-24

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
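
    A minimal sketch of the weighted-sum idea (our illustration; the field names, weights and normalisation are hypothetical, not the authors' implementation):

    ```python
    # Combine centroid distance with an intersecting-road-network term into one score.
    import math

    def centroid_distance(c1, c2):
        # Euclidean distance between ZIP-code centroids (projected coordinates)
        return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

    def proximity(zip_a, zip_b, w_dist=0.6, w_road=0.4, max_dist=50_000.0):
        """Lower score = closer; each zip_* dict carries a centroid and the length of
        road network it shares with neighbouring ZIP codes (hypothetical schema)."""
        d = centroid_distance(zip_a["centroid"], zip_b["centroid"]) / max_dist
        shared = zip_a["shared_road_km"].get(zip_b["code"], 0.0)
        road_term = 1.0 / (1.0 + shared)          # more shared road network -> closer
        return w_dist * d + w_road * road_term
    ```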

  5. Short-range density functional correlation within the restricted active space CI method

    NASA Astrophysics Data System (ADS)

    Casanova, David

    2018-03-01

    In the present work, I introduce a hybrid wave function-density functional theory electronic structure method based on the range separation of the electron-electron Coulomb operator in order to recover dynamic electron correlations missed in the restricted active space configuration interaction (RASCI) methodology. The working equations and the computational algorithm for the implementation of the new approach, i.e., RAS-srDFT, are presented, and the method is tested in the calculation of excitation energies of organic molecules. The good performance of the RASCI wave function in combination with different short-range exchange-correlation functionals in the computation of relative energies represents a quantitative improvement with respect to the RASCI results and paves the path for the development of RAS-srDFT as a promising scheme in the computation of the ground and excited states where nondynamic and dynamic electron correlations are important.
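
    For reference, the range separation underlying such short-range DFT hybrids is conventionally written with the error function, the long-range part being handled by the wave function (here RASCI) and the short-range part by the density functional, with μ the range-separation parameter:

    \[
    \frac{1}{r_{12}} \;=\; \underbrace{\frac{\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{long range: wave function}} \;+\; \underbrace{\frac{\operatorname{erfc}(\mu r_{12})}{r_{12}}}_{\text{short range: density functional}}
    \]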

  6. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
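
    The quantity being parallelised is the space-time kernel density estimate. A minimal sequential sketch of one voxel evaluation (our own, with Epanechnikov kernels and hypothetical variable names; the paper's contribution, the octree decomposition, buffering and load balancing, is not shown):

    ```python
    # Space-time kernel density at one voxel, with separate spatial/temporal bandwidths.
    import numpy as np

    def stkde(grid_xy, grid_t, events_xy, events_t, hs, ht):
        d2 = ((events_xy - grid_xy) ** 2).sum(axis=1) / hs**2   # squared spatial distance / hs^2
        u2 = ((events_t - grid_t) / ht) ** 2                     # squared temporal distance / ht^2
        ks = np.where(d2 < 1.0, 2.0 / np.pi * (1.0 - d2), 0.0)   # 2-D Epanechnikov kernel
        kt = np.where(u2 < 1.0, 0.75 * (1.0 - u2), 0.0)          # 1-D Epanechnikov kernel
        return (ks * kt).sum() / (len(events_t) * hs**2 * ht)
    ```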

  7. Space-time least-squares finite element method for convection-reaction system with transformed variables

    PubMed Central

    Nam, Jaewook

    2011-01-01

    We present a method to solve a convection-reaction system based on a least-squares finite element method (LSFEM). For steady-state computations, issues related to recirculation flow are stated and demonstrated with a simple example. The method can compute concentration profiles in open flow even when the generation term is small. This is the case for estimating hemolysis in blood. Time-dependent flows are computed with the space-time LSFEM discretization. We observe that the computed hemoglobin concentration can become negative in certain regions of the flow; it is a physically unacceptable result. To prevent this, we propose a quadratic transformation of variables. The transformed governing equation can be solved in a straightforward way by LSFEM with no sign of unphysical behavior. The effect of localized high shear on blood damage is shown in a circular Couette-flow-with-blade configuration, and a physiological condition is tested in an arterial graft flow. PMID:21709752

  8. A Review of Resources for Evaluating K-12 Computer Science Education Programs

    ERIC Educational Resources Information Center

    Randolph, Justus J.; Hartikainen, Elina

    2004-01-01

    Since computer science education is a key to preparing students for a technologically-oriented future, it makes sense to have high quality resources for conducting summative and formative evaluation of those programs. This paper describes the results of a critical analysis of the resources for evaluating K-12 computer science education projects.…

  9. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    ERIC Educational Resources Information Center

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  10. Applications of asynoptic space - Time Fourier transform methods to scanning satellite measurements

    NASA Technical Reports Server (NTRS)

    Lait, Leslie R.; Stanford, John L.

    1988-01-01

    A method proposed by Salby (1982) for computing the zonal space-time Fourier transform of asynoptically acquired satellite data is discussed. The method and its relationship to other techniques are briefly described, and possible problems in applying it to real data are outlined. Examples of results obtained using this technique are given which demonstrate its sensitivity to small-amplitude signals. A number of waves are found which have previously been observed as well as two not heretofore reported. A possible extension of the method which could increase temporal and longitudinal resolution is described.

  11. Introducing Children to Space, the Lincoln Plan. A Space Handbook for Teachers Grades K through 6.

    ERIC Educational Resources Information Center

    Watkins, Steven N.

    This handbook for space science was developed for use by elementary school teachers of grades K-6. The instructional plan of this guide presents activities for students of various maturity levels--five through eleven years. Teachers are encouraged to use the materials to meet the needs of individuals in the class. Most of the activities included…

  12. Initial alignment method for free space optics laser beam

    NASA Astrophysics Data System (ADS)

    Shimada, Yuta; Tashiro, Yuki; Izumi, Kiyotaka; Yoshida, Koichi; Tsujimura, Takeshi

    2016-08-01

    The authors have newly proposed and constructed an active free space optics transmission system. It is equipped with a motor-driven laser emitting mechanism and positioning photodiodes, and it transmits a collimated thin laser beam and accurately steers the laser beam direction. It is necessary to bring the laser beam within the detectable range of the receiver in advance of laser beam tracking control. This paper studies an estimation method of the laser reaching point for initial laser beam alignment. Distributed photodiodes detect the laser luminescence at their respective positions, and the optical axis of the laser beam is analytically estimated based on Gaussian beam optics. Computer simulation evaluates the accuracy of the proposed estimation methods, and the results show that the methods help guide the laser beam to a distant receiver.

  13. Universal computer control system (UCCS) for space telerobots

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    A universal computer control system (UCCS) is under development for all motor elements of a space telerobot. The basic hardware architecture and software design of UCCS are described, together with the rich motor sensing, control, and self-test capabilities of this all-computerized motor control system. UCCS is integrated into a multibus computer environment with direct interface to higher level control processors, uses pulsewidth multiplier power amplifiers, and one unit can control up to sixteen different motors simultaneously at a high I/O rate. UCCS performance capabilities are illustrated by a few data.

  14. Improved venous suppression on renal MR angiography with recessed elliptical centric ordering of K-space.

    PubMed

    Ho, Bernard; Chao, Minh; Zhang, Hong Lei; Watts, Richard; Prince, Martin R

    2003-01-01

    To evaluate recessed elliptical centric ordering of k-space in renal magnetic resonance (MR) angiography. All imaging was performed on the same 1.5 T MR imaging system (GE Signa CVi) using the body coil for signal transmission and a phased array coil for reception. Gd, 30 ml, was injected manually at 2 ml/sec, timed with automatic triggering (SmartPrep). In thirty patients using standard elliptical centric ordering, the scanner paused 8 seconds between detection of the leading edge of the Gd bolus and initiation of scanning beginning with the center of k-space. For the recessed elliptical centric ordering in 20 consecutive patients, this delay was reduced to 4 seconds but the absolute center of k-space was recessed by 4 seconds, such that in all patients the absolute center of k-space was acquired 8 seconds after detecting the leading edge of the bolus. On the arterial phase images, the signal-to-noise ratio (SNR) was measured in the aorta and each renal artery and vein, and the contrast-to-noise ratio (CNR) was measured relative to subcutaneous fat. The standard deviation of the signal outside the patient was considered to be "noise" for calculation of SNR and CNR. The incidence of ringing artifact in the aorta and renal veins was noted. Aortic SNR and CNR were significantly higher with the recessed technique (p = 0.02), and the ratio of renal artery signal to renal vein signal was higher with the recessed technique, 4 ± 2, compared to standard elliptical centric, 3 ± 2 (p = 0.03). Ringing artifact was also reduced with the recessed technique in both the aorta and renal veins. Gadolinium-enhanced renal MR angiography is improved by recessing the absolute center of k-space.
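
    A sketch of how such a view ordering can be generated (our reading of the ordering scheme, not the authors' pulse-sequence code): views are sorted by their elliptical distance from the k-space origin and acquired centre-out, and "recessing" then amounts to delaying the centremost views behind a block of slightly higher-radius views.

    ```python
    # Elliptical centric phase-encode ordering, with an optional "recess" of the centre.
    import numpy as np

    def elliptical_centric_order(n_ky, n_kz):
        ky, kz = np.meshgrid(np.arange(n_ky) - n_ky / 2, np.arange(n_kz) - n_kz / 2,
                             indexing="ij")
        radius = np.sqrt((ky / (n_ky / 2)) ** 2 + (kz / (n_kz / 2)) ** 2)
        order = np.argsort(radius, axis=None)                 # centre-out view table
        return np.column_stack(np.unravel_index(order, radius.shape))

    def recess(view_table, n_recess):
        """Delay the n_recess centremost views behind an equally long block of the
        next-nearest views (one plausible reading of 'recessing' the centre)."""
        centre, rest = view_table[:n_recess], view_table[n_recess:]
        return np.vstack([rest[:n_recess], centre, rest[n_recess:]])
    ```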

  15. Computer-Based Radiographic Quantification of Joint Space Narrowing Progression Using Sequential Hand Radiographs: Validation Study in Rheumatoid Arthritis Patients from Multiple Institutions.

    PubMed

    Ichikawa, Shota; Kamishima, Tamotsu; Sutherland, Kenneth; Fukae, Jun; Katayama, Kou; Aoki, Yuko; Okubo, Takanobu; Okino, Taichi; Kaneda, Takahiko; Takagi, Satoshi; Tanimura, Kazuhide

    2017-10-01

    We have developed a refined computer-based method to detect joint space narrowing (JSN) progression with the joint space narrowing progression index (JSNPI) by superimposing sequential hand radiographs. The purpose of this study is to assess the validity of a computer-based method using images obtained from multiple institutions in rheumatoid arthritis (RA) patients. Sequential hand radiographs of 42 patients (37 females and 5 males) with RA from two institutions were analyzed by a computer-based method and by visual scoring systems as a standard of reference. A JSNPI above the smallest detectable difference (SDD) defined JSN progression at the joint level. The sensitivity and specificity of the computer-based method for JSN progression were calculated using the SDD and a receiver operating characteristic (ROC) curve. Out of 314 metacarpophalangeal joints, 34 joints progressed based on the SDD, while 11 joints widened. Twenty-one joints progressed in the computer-based method, 11 joints in the scoring systems, and 13 joints in both methods. Based on the SDD, we found lower sensitivity and higher specificity, of 54.2% and 92.8%, respectively. At the most discriminant cutoff point according to the ROC curve, the sensitivity and specificity were 70.8% and 81.7%, respectively. The proposed computer-based method provides quantitative measurement of JSN progression using sequential hand radiographs and may be a useful tool in follow-up assessment of joint damage in RA patients.

  16. Computer simulation of space charge

    NASA Astrophysics Data System (ADS)

    Yu, K. W.; Chung, W. K.; Mak, S. S.

    1991-05-01

    Using the particle-mesh (PM) method, a one-dimensional simulation of the well-known Langmuir-Child's law is performed on an INTEL 80386-based personal computer system. The program is coded in Turbo Basic (trademark of Borland International, Inc.). The numerical results obtained were in excellent agreement with theoretical predictions, and the computational time required is quite modest. This simulation exercise demonstrates that simple computer simulations using particles may be implemented successfully on the PCs available today, and hopefully this will provide the necessary incentive for newcomers to the field who wish to acquire a flavor of the elementary aspects of the practice.
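
    For reference, the space-charge-limited current density that such a simulation reproduces for a planar diode of gap d and applied potential V is the Child-Langmuir law:

    \[
    J \;=\; \frac{4\,\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}}
    \]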

  17. Computers for Manned Space Applications Based on Commercial Off-the-Shelf Components

    NASA Astrophysics Data System (ADS)

    Vogel, T.; Gronowski, M.

    2009-05-01

    As in the consumer markets, there has been an ever increasing demand for processing power, signal processing capabilities and memory space in computers used for science data processing in space. An important driver of this development has been the payload developers for the International Space Station, requesting high-speed data acquisition and fast control loops in increasingly complex systems. Current experiments now even perform video processing and compression with their payload controllers. Nowadays the requirements for a space-qualified computer are often far beyond the capabilities of, for example, the classic SPARC architecture found in ERC32 or LEON CPUs. An increase in performance usually demands costly and power-consuming application-specific solutions. Continuous developments over the last few years have now led to an alternative approach that is based on complete electronics modules manufactured for commercial and industrial customers. Computer modules used in industrial environments with a high demand for reliability under harsh environmental conditions, such as chemical reactors, electrical power plants or manufacturing lines, are entered into a selection procedure. Promising candidates then undergo a detailed characterisation process developed by Astrium Space Transportation. After thorough analysis and some modifications, these modules can replace fully qualified custom-built electronics in specific, though not safety-critical, applications in manned space. This paper focuses on the benefits of COTS-based electronics modules and the necessary analyses and modifications for their utilisation in manned space applications on the ISS. Some considerations regarding overall systems architecture are also included. Furthermore, this paper pinpoints issues that render such modules unsuitable for specific tasks, and justifies the reasons. Finally, the conclusion of this paper advocates the implementation of COTS-based

  18. Applications of spectral methods to turbulent magnetofluids in space and fusion research

    NASA Technical Reports Server (NTRS)

    Montgomery, D.; Voigt, R. G. (Editor); Gottlieb, D. (Editor); Hussaini, M. Y. (Editor)

    1984-01-01

    Recent and potential applications of spectral method computation to incompressible, dissipative magnetohydrodynamics are surveyed. Linear stability problems for one-dimensional quasi-equilibria are approachable through a close analogue of the Orr-Sommerfeld equation. It is likely that for Reynolds-like numbers above certain as-yet-undetermined thresholds, all magnetofluids are turbulent. Four recent effects in MHD turbulence are remarked upon, as they have displayed themselves in spectral method computations: (1) inverse cascades; (2) small-scale intermittent dissipative structures; (3) selective decays of ideal global invariants relative to each other; and (4) anisotropy induced by a mean dc magnetic field. Two more conjectured applications are suggested. All the turbulent processes discussed are sometimes involved in current-carrying confined fusion magnetoplasmas and in space plasmas.

  19. Computation of tightly-focused laser beams in the FDTD method

    PubMed Central

    Çapoğlu, İlker R.; Taflove, Allen; Backman, Vadim

    2013-01-01

    We demonstrate how a tightly-focused coherent TEMmn laser beam can be computed in the finite-difference time-domain (FDTD) method. The electromagnetic field around the focus is decomposed into a plane-wave spectrum, and approximated by a finite number of plane waves injected into the FDTD grid using the total-field/scattered-field (TF/SF) method. We provide an error analysis, and guidelines for the discrete approximation. We analyze the scattering of the beam from layered spaces and individual scatterers. The described method should be useful for the simulation of confocal microscopy and optical data storage. An implementation of the method can be found in our free and open source FDTD software (“Angora”). PMID:23388899

  20. Computation of tightly-focused laser beams in the FDTD method.

    PubMed

    Capoğlu, Ilker R; Taflove, Allen; Backman, Vadim

    2013-01-14

    We demonstrate how a tightly-focused coherent TEMmn laser beam can be computed in the finite-difference time-domain (FDTD) method. The electromagnetic field around the focus is decomposed into a plane-wave spectrum, and approximated by a finite number of plane waves injected into the FDTD grid using the total-field/scattered-field (TF/SF) method. We provide an error analysis, and guidelines for the discrete approximation. We analyze the scattering of the beam from layered spaces and individual scatterers. The described method should be useful for the simulation of confocal microscopy and optical data storage. An implementation of the method can be found in our free and open source FDTD software ("Angora").

  1. Split Space-Marching Finite-Volume Method for Chemically Reacting Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Rizzi, Arthur W.; Bailey, Harry E.

    1976-01-01

    A space-marching finite-volume method employing a nonorthogonal coordinate system and using a split differencing scheme for calculating steady supersonic flow over aerodynamic shapes is presented. It is a second-order-accurate mixed explicit-implicit procedure that solves the inviscid adiabatic and nondiffusive equations for chemically reacting flow in integral conservation-law form. The relationship between the finite-volume and differential forms of the equations is examined and the relative merits of each discussed. The method admits initial Cauchy data situated on any arbitrary surface and integrates them forward along a general curvilinear coordinate, distorting and deforming the surface as it advances. The chemical kinetics term is split from the convective terms which are themselves dimensionally split, thereby freeing the fluid operators from the restricted step size imposed by the chemical reactions and increasing the computational efficiency. The accuracy of this splitting technique is analyzed, a sufficient stability criterion is established, a representative flow computation is discussed, and some comparisons are made with another method.

  2. Determining Metacarpophalangeal Flexion Angle Tolerance for Reliable Volumetric Joint Space Measurements by High-resolution Peripheral Quantitative Computed Tomography.

    PubMed

    Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl

    2016-10-01

    The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
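
    The reproducibility measure quoted above is the root-mean-square coefficient of variation; a minimal sketch (ours, with a hypothetical `repeats` array of shape (n_joints, n_repeated_measurements) for one joint-space parameter):

    ```python
    # RMS coefficient of variation (in percent) across repeated measurements.
    import numpy as np

    def rms_cv_percent(repeats):
        cv = repeats.std(axis=1, ddof=1) / repeats.mean(axis=1)   # CV per joint
        return 100.0 * np.sqrt(np.mean(cv ** 2))
    ```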

  3. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
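
    A minimal sketch of the decomposition plus spectral-clipping step (our illustration; the clipping rule, a fraction of the peak magnitude, is an assumption, and the fast-nearfield propagation of each retained term is not shown):

    ```python
    # FFT an arbitrary surface-velocity waveform and drop negligible frequency terms.
    import numpy as np

    def clipped_spectrum(v, dt, keep_fraction=1e-3):
        V = np.fft.rfft(v)
        freqs = np.fft.rfftfreq(len(v), dt)
        keep = np.abs(V) >= keep_fraction * np.abs(V).max()
        return freqs[keep], V[keep]     # each retained (f, V) term is propagated separately
    ```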

  4. Computer study of emergency shutdowns of a 60-kilowatt reactor Brayton space power system

    NASA Technical Reports Server (NTRS)

    Tew, R. C.; Jefferies, K. S.

    1974-01-01

    A digital computer study of emergency shutdowns of a 60-kWe reactor Brayton power system was conducted. Malfunctions considered were (1) loss of reactor coolant flow, (2) loss of Brayton system gas flow, (3) turbine overspeed, and (4) a reactivity insertion error. Loss of reactor coolant flow was the most serious malfunction for the reactor. Methods for moderating the reactor transients due to this malfunction are considered.

  5. Overdetermined shooting methods for computing standing water waves with spectral accuracy

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon; Yu, Jia

    2012-01-01

    A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss the existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wave amplitude and fluid depth. In the numerical method, robustness is achieved by posing the problem as an overdetermined nonlinear system and using either adjoint-based minimization techniques or a quadratically convergent trust-region method to minimize the objective function. Efficiency is achieved in the trust-region approach by parallelizing the Jacobian computation, so the setup cost of computing the Dirichlet-to-Neumann operator in the variational equation is not repeated for each column. Updates of the Jacobian are also delayed until the previous Jacobian ceases to be useful. Accuracy is maintained using spectral collocation with optional mesh refinement in space, a high-order Runge-Kutta or spectral deferred correction method in time, and quadruple precision for improved navigation of delicate regions of parameter space as well as validation of double-precision results. Implementation issues for transferring much of the computation to graphics processing units are briefly discussed.

  6. IRFK2D: a computer program for simulating intrinsic random functions of order k

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.

    2003-07-01

    IRFK2D is an ANSI Fortran-77 program that generates realizations of an intrinsic random function of order k (with k equal to 0, 1 or 2) with a permissible polynomial generalized covariance model. The realizations may be non-conditional or conditioned on the experimental data. The turning bands method is used to generate realizations in 2D and 3D from simulations of an intrinsic random function of order k along lines that span the 2D or 3D space. The program generates two output files, the first containing the simulated values and the second containing the theoretical generalized variogram for different directions together with the theoretical model. The experimental variogram is calculated from the simulated values, while the theoretical variogram is the specified generalized covariance model. The generalized variogram is used to assess the quality of the simulation, as measured by the extent to which the generalized covariance is reproduced by the simulation. The examples given in this paper indicate that IRFK2D is an efficient implementation of the methodology.

  7. Developing the human-computer interface for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Holden, Kritina L.

    1991-01-01

    For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.

  8. Automated mapping of the ocean floor using the theory of intrinsic random functions of order k

    USGS Publications Warehouse

    David, M.; Crozel, D.; Robb, James M.

    1986-01-01

    High-quality contour maps can be computer drawn from single track echo-sounding data by combining Universal Kriging and the theory of intrinsic random function of order K (IRFK). These methods interpolate values among the closely spaced points that lie along relatively widely spaced lines. The technique provides a variance which can be contoured as a quantitative measure of map precision. The technique can be used to evaluate alternative survey trackline configurations and data collection intervals, and can be applied to other types of oceanographic data. © 1986 D. Reidel Publishing Company.

  9. A study of numerical methods for computing reentry trajectories for shuttle-type space vehicles

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The reusable exterior insulation system (REI) is studied to determine the optimal reentry trajectory for a space shuttle, which minimizes the heat input to the fuselage. The REI is composed of titanium covered by a surface insulation material. The method of perturbation functions was used to generate the trajectories, and proved to be an effective technique for generating families of solutions once an initial trajectory has been obtained.

  10. Comparison of different methods to compute a preliminary orbit of Space Debris using radar observations

    NASA Astrophysics Data System (ADS)

    Ma, Hélène; Gronchi, Giovanni F.

    2014-07-01

    We present a new method of preliminary orbit determination for space debris using radar observations, which we call Infang. We can perform a linkage of two sets of four observations collected at close times. The context is characterized by the accuracy of the range ρ, whereas the right ascension α and the declination δ are much more inaccurate due to observational errors. This method can correct α, δ, assuming the exact knowledge of the range ρ. Considering no perturbations from the J2 effect, but including errors in the observations, we can compare the new method, the classical method of Gibbs, and the more recent Keplerian integrals method. Infang is still under development and will be further improved and tested.
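
    The classical Gibbs method named above recovers the velocity at the middle of three co-planar position vectors; a sketch in the standard textbook formulation (units assumed to be km and km/s, with Earth's gravitational parameter):

    ```python
    # Gibbs preliminary-orbit method: velocity at the middle of three position vectors.
    import numpy as np

    MU = 398_600.4418   # km^3/s^2, Earth

    def gibbs(r1, r2, r3):
        r1n, r2n, r3n = (np.linalg.norm(r) for r in (r1, r2, r3))
        D = np.cross(r1, r2) + np.cross(r2, r3) + np.cross(r3, r1)
        N = r1n * np.cross(r2, r3) + r2n * np.cross(r3, r1) + r3n * np.cross(r1, r2)
        S = (r2n - r3n) * r1 + (r3n - r1n) * r2 + (r1n - r2n) * r3
        v2 = np.sqrt(MU / (np.linalg.norm(N) * np.linalg.norm(D))) * (
            np.cross(D, r2) / r2n + S)
        return v2                      # velocity at the middle observation r2
    ```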

  11. Space shuttle environmental and thermal control life support system computer program

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A computer program for the design and operation of the space shuttle environmental and thermal control life support system is presented. The subjects discussed are: (1) basic optimization program, (2) off design performance, (3) radiator/evaporator expendable usage, (4) component weights, and (5) computer program operating procedures.

  12. Numerical treatment for solving two-dimensional space-fractional advection-dispersion equation using meshless method

    NASA Astrophysics Data System (ADS)

    Cheng, Rongjun; Sun, Fengxin; Wei, Qi; Wang, Jufeng

    2018-02-01

    The space-fractional advection-dispersion equation (SFADE) can describe particle transport in a variety of fields more accurately than classical models with integer-order derivatives. Because of the nonlocal property of the integro-differential operator of the space-fractional derivative, it is very challenging to deal with fractional models, and few results have been reported in the literature. In this paper, a numerical analysis of the two-dimensional SFADE is carried out by the element-free Galerkin (EFG) method. The trial functions for the SFADE are constructed by the moving least-squares (MLS) approximation. The energy functional is formulated from the Galerkin weak form. Employing the energy functional minimization procedure, the final algebraic equation system is obtained. The Riemann-Liouville operator is discretized by the Grünwald formula. With the central difference method, the EFG method and the Grünwald formula, fully discrete approximation schemes for the SFADE are established. The computed approximate solutions are presented in tables and graphs and compared with exact results and with results available from other well-known methods. The presented results demonstrate the validity, efficiency and accuracy of the proposed techniques. Furthermore, the error is computed and the proposed method shows reasonable convergence rates in the spatial and temporal discretizations.
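
    The Grünwald discretisation named above approximates the left-sided Riemann-Liouville derivative on a uniform grid; a minimal sketch (ours, unshifted form) with the usual recursive weights:

    ```python
    # Grünwald-Letnikov approximation of a left-sided fractional derivative of order alpha.
    import numpy as np

    def grunwald_weights(alpha, n):
        w = np.empty(n + 1)
        w[0] = 1.0
        for k in range(1, n + 1):
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)   # equals (-1)^k * C(alpha, k)
        return w

    def fractional_derivative(f, h, alpha):
        """D^alpha f(x_i) ~ h^(-alpha) * sum_{k=0..i} w_k f(x_{i-k}) on a uniform grid."""
        n = len(f)
        w = grunwald_weights(alpha, n)
        out = np.zeros(n)
        for i in range(n):
            out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
        return out
    ```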

  13. Spots and activity of Pleiades stars from observations with the Kepler Space Telescope (K2)

    NASA Astrophysics Data System (ADS)

    Savanov, I. S.; Dmitrienko, E. S.

    2017-11-01

    Observations from the K2 continuation of the Kepler Space Telescope program are used to estimate the spot coverage S (the fractional spotted area on the surface of an active star) for stars of the Pleiades cluster. The analysis is based on data on the photometric variations of 759 confirmed cluster members, together with their atmospheric parameters, masses, and rotation periods. The relationship between the activity (S) of these Pleiades stars and their effective temperatures shows considerable change in S for stars with temperatures Teff less than 6100 K (this can be considered the limiting value at which spot formation activity begins) and a monotonic increase in S for cooler objects (with a change in slope for stars with Teff near 3700 K). The scatter in this parameter ΔS about its mean dependence on the (V-Ks)0 color index remains approximately the same over the entire (V-Ks)0 range, including cool, fully convective dwarfs. The computed S values do not indicate differences between slowly rotating and rapidly rotating stars with color indices 1.1 < (V-Ks)0 < 3.7. The main results of this study include measurements of the activity of a large number of stars having the same age (759 members of the Pleiades cluster), resulting in the first determination of the relationship between spot-forming activity and stellar mass. For 27 stars with masses differing from the solar mass by no more than 0.1 M⊙, the mean spot coverage is S = 0.031 ± 0.003, suggesting that the activity of candidate young Suns is more pronounced than that of the present-day Sun. These stars rotate considerably faster than the Sun, with an average rotation period of 4.3 d. The results of this study of cool, low-mass dwarfs of the Pleiades cluster are compared to results from an earlier study of 1570 M stars.

  14. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems

    NASA Astrophysics Data System (ADS)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-01

    We report a new limitation on the ability of physical systems to perform computation—one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system—such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  15. ESF-X: a low-cost modular experiment computer for space flight experiments

    NASA Astrophysics Data System (ADS)

    Sell, Steven; Zapetis, Joseph; Littlefield, Jim; Vining, Joanne

    2004-08-01

    The high cost associated with spaceflight research often compels experimenters to scale back their research goals significantly purely for budgetary reasons; among experiment systems, control and data collection electronics are a major contributor to total project cost. ESF-X was developed as an architecture demonstration in response to this need: it is a highly capable, radiation-protected experiment support computer, designed to be configurable on demand to each investigator's particular experiment needs, and operational in LEO for missions lasting up to several years (e.g., ISS EXPRESS) without scheduled service or maintenance. ESF-X can accommodate up to 255 data channels (I/O, A/D, D/A, etc.), allocated per customer request, with data rates up to 40kHz. Additionally, ESF-X can be programmed using the graphical block-diagram based programming languages Simulink and MATLAB. This represents a major cost saving opportunity for future investigators, who can now obtain a customized, space-qualified experiment controller at steeply reduced cost compared to 'new' design, and without the performance compromises associated with using preexisting 'generic' systems. This paper documents the functional benchtop prototype, which utilizes a combination of COTS and space-qualified components, along with unit-gravity-specific provisions appropriate to laboratory environment evaluation of the ESF-X design concept and its physical implementation.

  16. A 100 kW-Class Technology Demonstrator for Space Solar Power

    NASA Astrophysics Data System (ADS)

    Howell, J.; Carrington, C.; Day, G.

    2004-12-01

    A first step in the development of solar power from space is the flight demonstration of critical technologies. These fundamental technologies include efficient solar power collection and generation, power management and distribution, and thermal management. In addition, the integration and utilization of these technologies into a viable satellite bus could provide an energy-rich platform for a portfolio of payload experiments such as wireless power transmission (WPT). This paper presents the preliminary design of a concept for a 100 kW-class free-flying platform suitable for flight demonstration of Space Solar Power (SSP) technology experiments.

  17. Signal Space Separation Method for a Biomagnetic Sensor Array Arranged on a Flat Plane for Magnetocardiographic Applications: A Computer Simulation Study

    PubMed Central

    2018-01-01

    Although the signal space separation (SSS) method can successfully suppress interference/artifacts overlapped onto magnetoencephalography (MEG) signals, the method is considered inapplicable to data from nonhelmet-type sensor arrays, such as the flat sensor arrays typically used in magnetocardiographic (MCG) applications. This paper shows that the SSS method is still effective for data measured from a (nonhelmet-type) array of sensors arranged on a flat plane. By using computer simulations, it is shown that the optimum location of the origin can be determined by assessing the dependence of signal and noise gains of the SSS extractor on the origin location. The optimum values of the parameters LC and LD, which, respectively, indicate the truncation values of the multipole-order ℓ of the internal and external subspaces, are also determined by evaluating dependences of the signal, noise, and interference gains (i.e., the shield factor) on these parameters. The shield factor exceeds 104 for interferences originating from fairly distant sources. However, the shield factor drops to approximately 100 when calibration errors of 0.1% exist and to 30 when calibration errors of 1% exist. The shielding capability can be significantly improved using vector sensors, which measure the x, y, and z components of the magnetic field. With 1% calibration errors, a vector sensor array still maintains a shield factor of approximately 500. It is found that the SSS application to data from flat sensor arrays causes a distortion in the signal magnetic field, but it is shown that the distortion can be corrected by using an SSS-modified sensor lead field in the voxel space analysis. PMID:29854364

  18. A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.

    PubMed

    Brusco, Michael J; Shireman, Emilie; Steinley, Douglas

    2017-09-01

    The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
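
    With the L1 (city-block) distance, the coordinate-wise median keeps K-median cluster centres binary for dichotomous data, which is the intuition behind its suitability here. A minimal Lloyd-style sketch (ours, not the authors' simulation code):

    ```python
    # Simple K-median clustering for 0/1 data using the L1 distance.
    import numpy as np

    def k_median(X, k, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(n_iter):
            dist = np.abs(X[:, None, :] - centers[None]).sum(axis=2)   # L1 distances
            labels = dist.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = np.median(X[labels == j], axis=0)
        return labels, centers
    ```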

  19. Noise Computation of a Shock-Containing Supersonic Axisymmetric Jet by the CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Hultgren, Lennart S.; Chang, Sin-Chung; Jorgenson, Philip C. E.

    1999-01-01

    The space-time conservation element solution element (CE/SE) method is employed to numerically study the near-field of a typical under-expanded jet. For the computed case, a circular jet with Mach number Mj = 1.19, the shock-cell structure is in good agreement with experimental results. The computed noise field is in general agreement with the experiment, although further work is needed to properly close the screech feedback loop.

  20. Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.

    PubMed

    Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene

    2016-11-01

    Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thereby accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
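
    The null-space elimination above can be illustrated with a coordinate-basis construction of a matrix Z satisfying A Z = 0 for the constraint Jacobian A (m x n, m < n). The sketch below assumes, for simplicity, that the leading m x m block of A is nonsingular; practical implementations pivot to pick a well-conditioned basic block.

    ```python
    # Coordinate-basis null-space matrix for equality-constraint elimination.
    import numpy as np

    def coordinate_basis_nullspace(A):
        m, n = A.shape
        A_b, A_n = A[:, :m], A[:, m:]        # leading m x m block assumed nonsingular
        Z = np.vstack([-np.linalg.solve(A_b, A_n), np.eye(n - m)])
        return Z                              # columns span the null space of A

    # With x = x0 + Z @ p, any step p preserves A @ x = A @ x0, so the SQP subproblem
    # can be solved in the reduced (n - m)-dimensional space of p.
    ```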

  2. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although it is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  3. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time/resource consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
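
    A software analogue of the look-up-table evaluation (not the FPGA core itself): the pairwise Coulomb force needs (r^2)^(-3/2), which is read from a precomputed table addressed by a quantised r^2, loosely mimicking the most-significant-bits addressing described above. Table size and range here are arbitrary choices for the sketch.

    ```python
    # Pairwise Coulomb force with a look-up table for (r^2)^(-3/2).
    import numpy as np

    N_LUT, R2_MIN, R2_MAX = 1024, 1e-4, 1.0
    _lut = np.linspace(R2_MIN, R2_MAX, N_LUT) ** -1.5      # precomputed (r^2)^(-3/2)

    def coulomb_force(q1, q2, r_vec):
        r2 = float(np.dot(r_vec, r_vec))
        idx = int((np.clip(r2, R2_MIN, R2_MAX) - R2_MIN) / (R2_MAX - R2_MIN) * (N_LUT - 1))
        return q1 * q2 * _lut[idx] * np.asarray(r_vec)     # F = q1*q2*r_vec / |r|^3
    ```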

  4. Transport methods and interactions for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Schimmerling, Walter S.; Khandelwal, Govind S.; Khan, Ferdous S.; Nealy, John E.; Cucinotta, Francis A.; Simonsen, Lisa C.; Shinn, Judy L.; Norbury, John W.

    1991-01-01

    A review of the program in space radiation protection at the Langley Research Center is given. The relevant Boltzmann equations are given with a discussion of approximation procedures for space applications. The interaction coefficients are related to solution of the many-body Schroedinger equation with nuclear and electromagnetic forces. Various solution techniques are discussed to obtain relevant interaction cross sections with extensive comparison with experiments. Solution techniques for the Boltzmann equations are discussed in detail. Transport computer code validation is discussed through analytical benchmarking, comparison with other codes, comparison with laboratory experiments and measurements in space. Applications to lunar and Mars missions are discussed.

  5. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.

  6. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.

  7. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand data bases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from literature. PMID:24381236

  8. Demonstration of a high-capacity turboalternator for a 20 K, 20 W space-borne Brayton cryocooler

    NASA Astrophysics Data System (ADS)

    Zagarola, M.; Cragin, K.; Deserranno, D.

    2014-01-01

    NASA is considering multiple missions involving long-term cryogenic propellant storage in space. Liquid hydrogen and oxygen are the typical cryogens as they provide the highest specific impulse of practical chemical propellants. Storage temperatures are nominally 20 K for liquid hydrogen and 90 K for liquid oxygen. Heat loads greater than 10 W at 20 K are predicted for hydrogen storage. Current space cryocoolers have been developed for sensor cooling with refrigeration capacities less than 1 W at 20 K. In 2011, Creare Inc. demonstrated an ultra-low-capacity turboalternator for use in a turbo-Brayton cryocooler. The turboalternator produced up to 5 W of turbine refrigeration at 20 K; equivalent to approximately 3 W of net cryocooler refrigeration. This turboalternator obtained unprecedented operating speeds and efficiencies at low temperatures benefitting from new rotor design and fabrication techniques, and new bearing fabrication techniques. More recently, Creare applied these design and fabrication techniques to a larger and higher capacity 20 K turboalternator. The turboalternator was tested in a high-capacity, low temperature test facility at Creare and demonstrated up to 42 W of turbine refrigeration at 20 K; equivalent to approximately 30 W of net cryocooler refrigeration. The net turbine efficiency was the highest achieved to date at Creare for a space-borne turboalternator. This demonstration was the first step in the development of a high-capacity turbo-Brayton cryocooler for liquid hydrogen storage. In this paper, we will review the design, development and testing of the turboalternator.

  9. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space

    PubMed Central

    Lustig, Michael; Pauly, John M.

    2010-01-01

    A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the most consistent solution with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection over convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790

  10. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  11. Fast polarimetric dehazing method for visibility enhancement in HSI colour space

    NASA Astrophysics Data System (ADS)

    Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin

    2017-09-01

    Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in hue, saturation and intensity colour space based on the polarimetric imaging technique. We implement the polarimetric dehazing method in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This method not only preserves the capacity to restore detailed information, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains equal or better quality results, and moreover the implementation is much faster. The proposed method is promising in real-time image haze removal and video haze removal applications.
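
    A minimal sketch of the two ingredients named in the abstract, assuming a Schechner-style polarimetric airlight estimate from two orthogonally polarized frames and a simple white-patch retinex colour correction; the exact estimator, colour-space handling, and parameter names used by the authors may differ.

```python
import numpy as np

def polarimetric_dehaze_intensity(i_max, i_min, p_air, a_inf):
    """Recover a haze-free intensity channel from two orthogonal polarization
    images (assumed Schechner-style formulation, not necessarily the paper's
    exact variant). p_air: degree of polarization of the airlight;
    a_inf: airlight at infinite distance."""
    i_total = i_max + i_min
    airlight = (i_max - i_min) / p_air               # estimated airlight map
    t = np.clip(1.0 - airlight / a_inf, 0.1, 1.0)    # transmission, floored to limit noise
    return (i_total - airlight) / t

def white_patch_retinex(rgb):
    """Correct colour cast by scaling each channel so its maximum maps to 1."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb / rgb.reshape(-1, 3).max(axis=0)
```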

  12. A Novel Method Using Abstract Convex Underestimation in Ab-Initio Protein Structure Prediction for Guiding Search in Conformational Feature Space.

    PubMed

    Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng

    2016-01-01

    To address the problem of searching the protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE) within the framework of an evolutionary algorithm was proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the protein conformational space should be reduced to a proper level. In this paper, the high-dimensional original conformational space was converted into a feature space whose dimension is considerably reduced by a feature extraction technique, and the underestimate space could then be constructed according to abstract convex theory. Thus, the entropy effect caused by searching in the high-dimensional conformational space could be avoided through such conversion. The tight lower bound estimate information was obtained to guide the searching direction, and the invalid searching area in which the global optimal solution is not located could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, reducing the evaluation time and thereby making the computational cost lower and the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique to solve the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with ItFix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more

  13. Integrable deformations of the Gk1 ×Gk2 /Gk1+k2 coset CFTs

    NASA Astrophysics Data System (ADS)

    Sfetsos, Konstantinos; Siampos, Konstantinos

    2018-02-01

    We study the effective action for the integrable λ-deformation of the Gk1 ×Gk2 /Gk1+k2 coset CFTs. For unequal levels these models do not fall into the general discussion of λ-deformations of CFTs corresponding to symmetric spaces and have many attractive features. We show that the perturbation is driven by parafermion bilinears and we revisit the derivation of their algebra. We uncover a non-trivial symmetry of the parametric space of these models, which has not been encountered before in the literature. Using field-theoretical methods and the effective action we compute the β-function, exact in the deformation parameter, and explicitly demonstrate the existence of a fixed point in the IR corresponding to the Gk1-k2 ×Gk2 /Gk1 coset CFTs. The same result is verified using gravitational methods for G = SU (2). We examine various limiting cases previously considered in the literature and find agreement.

  14. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    NASA Astrophysics Data System (ADS)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), merely as a case study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of cases of MJO events for numerical simulations, in addition to integrating time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  15. Human-computer interface

    DOEpatents

    Anderson, Thomas G.

    2004-12-21

    The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.

  16. Description of A 2.3 kW power transformer for space applications

    NASA Technical Reports Server (NTRS)

    Hansen, I.

    1979-01-01

    The paper describes the principal features and special testing of a high-frequency high-power low-specific-weight (0.57 kg/kW) 2.3-kW electronic power transformer developed for space applications. The transformer is operated in a series resonant inverter supplying beam power to a 30-cm mercury ion thruster. High efficiency (above 98.5%) is obtained through careful detailed design. A number of unique heat removal techniques are discussed which control the winding temperature using only the available conductive cooling.

  17. pK(A) in proteins solving the Poisson-Boltzmann equation with finite elements.

    PubMed

    Sakalli, Ilkay; Knapp, Ernst-Walter

    2015-11-05

    Knowledge of pK(A) values is an essential factor in understanding the function of proteins in living systems. We present a novel approach demonstrating that the finite element (FE) method of solving the linearized Poisson-Boltzmann equation (lPBE) can successfully be used to compute pK(A) values in proteins with high accuracy as a possible replacement for the finite difference (FD) method. For this purpose, we implemented the software molecular Finite Element Solver (mFES) in the framework of the Karlsberg+ program to compute pK(A) values. This work focuses on a comparison between pK(A) computations obtained with the well-established FD method and with the newly developed FE method mFES, solving the lPBE using protein crystal structures without conformational changes. Accurate and coarse model systems are set up with mFES using a similar number of unknowns compared with the FD method. Our FE method delivers results for computations of pK(A) values and interaction energies of titratable groups that are comparable in accuracy. We introduce different thermodynamic cycles to evaluate pK(A) values and we show for the FE method how different parameters influence the accuracy of computed pK(A) values. © 2015 Wiley Periodicals, Inc.
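
    The thermodynamic-cycle step can be written compactly: a model-compound pKa is shifted by the difference between the deprotonation electrostatic free energies computed in the protein and in the model compound. A minimal sketch, assuming energies in kJ/mol and one particular sign convention (to be checked against the cycle actually used):

```python
import math

R = 8.314e-3  # gas constant in kJ/(mol K)

def pka_from_cycle(pka_model, dG_protein, dG_model, T=300.0):
    """Shift a model-compound pKa by the difference in deprotonation
    electrostatic free energies (kJ/mol) from the Poisson-Boltzmann solver."""
    ddG = dG_protein - dG_model
    return pka_model + ddG / (math.log(10.0) * R * T)

# Example: a 5 kJ/mol penalty for deprotonation inside the protein raises the pKa
print(pka_from_cycle(4.0, dG_protein=25.0, dG_model=20.0))  # about 4.87 at 300 K
```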

  18. Soft computing methods in design of superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1995-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
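
    A hedged sketch of the two-stage idea: fit a neural-network surrogate mapping composition and test temperature to K(sub a), then search composition space for low predicted values. It uses scikit-learn and SciPy's differential evolution as a stand-in for the genetic algorithm, with purely synthetic placeholder data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

# Placeholder data: rows are [five composition fractions, scaled test temperature] -> K_a
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = rng.random(200)

surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)

# Evolutionary search (here differential evolution) for inputs minimizing predicted K_a
bounds = [(0.0, 1.0)] * 6
result = differential_evolution(
    lambda v: surrogate.predict(v.reshape(1, -1))[0], bounds, seed=0)
print("candidate composition/temperature:", result.x)
print("predicted K_a:", result.fun)
```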

  19. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  20. Teaching and Learning with Mobile Computing Devices: Case Study in K-12 Classrooms

    ERIC Educational Resources Information Center

    Grant, Michael M.; Tamim, Suha; Brown, Dorian B.; Sweeney, Joseph P.; Ferguson, Fatima K.; Jones, Lakavious B.

    2015-01-01

    While ownership of mobile computing devices, such as cellphones, smartphones, and tablet computers, has been rapid, the adoption of these devices in K-12 classrooms has been measured. Some schools and individual teachers have integrated mobile devices to support teaching and learning. The purpose of this qualitative research was to describe the…

  1. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 4: Conceptual design report

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at Marshall Space Flight Center (MSFC). The PTC will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. In the first step of this task, a methodology was developed to ensure that all relevant design dimensions were addressed, and that all feasible designs could be considered. The development effort yielded the following method for generating and comparing designs in task 4: (1) Extract SCS system requirements (functions) from the system specification; (2) Develop design evaluation criteria; (3) Identify system architectural dimensions relevant to SCS system designs; (4) Develop conceptual designs based on the system requirements and architectural dimensions identified in step 1 and step 3 above; (5) Evaluate the designs with respect to the design evaluation criteria developed in step 2 above. The results of the method detailed in the above 5 steps are discussed. The results of the task 4 work provide the set of designs from which two or three candidate designs are to be selected by MSFC as input to task 5, refinement of the SCS conceptual designs. The designs selected for refinement will be developed to a lower level of detail, and further analyses will be done to begin to determine the size and speed of the components required to implement these designs.

  2. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory is read randomly, with a read-time distribution that is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
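
    A Monte Carlo sketch of the mishap model described above: upsets arrive randomly, each word is scrubbed by single-error correction whenever it is read, and a mishap occurs if any word collects two upsets between consecutive reads. The per-word rates and the uniform read-time distribution are illustrative assumptions, not the paper's analytical solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def mishap_probability(M=4096, seu_rate_word=1e-6, read_rate_word=1e-3,
                       T=1000.0, trials=2000):
    """Estimate the probability that some word accumulates two SEUs between
    consecutive error-correcting reads during a T-hour mission.
    Rates are per word per hour (illustrative values)."""
    mishaps = 0
    for _ in range(trials):
        n_upsets = rng.poisson(seu_rate_word * T, size=M)   # upsets per word
        failed = False
        for n in n_upsets[n_upsets >= 2]:                   # only these words can fail
            upsets = np.sort(rng.uniform(0, T, n))
            reads = np.sort(rng.uniform(0, T, rng.poisson(read_rate_word * T)))
            # two upsets in the same read-free interval defeat single-error correction
            if np.any(np.bincount(np.searchsorted(reads, upsets)) >= 2):
                failed = True
                break
        mishaps += failed
    return mishaps / trials

print(mishap_probability())
```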

  3. Fast three-dimensional inner volume excitations using parallel transmission and optimized k-space trajectories.

    PubMed

    Davids, Mathias; Schad, Lothar R; Wald, Lawrence L; Guérin, Bastien

    2016-10-01

    To design short parallel transmission (pTx) pulses for excitation of arbitrary three-dimensional (3D) magnetization patterns. We propose a joint optimization of the pTx radiofrequency (RF) and gradient waveforms for excitation of arbitrary 3D magnetization patterns. Our optimization of the gradient waveforms is based on the parameterization of k-space trajectories (3D shells, stack-of-spirals, and cross) using a small number of shape parameters that are well-suited for optimization. The resulting trajectories are smooth and sample k-space efficiently with few turns while using the gradient system at maximum performance. Within each iteration of the k-space trajectory optimization, we solve a small tip angle least-squares RF pulse design problem. Our RF pulse optimization framework was evaluated both in Bloch simulations and experiments on a 7T scanner with eight transmit channels. Using an optimized 3D cross (shells) trajectory, we were able to excite a cube shape (brain shape) with 3.4% (6.2%) normalized root-mean-square error in less than 5 ms using eight pTx channels and a clinical gradient system (Gmax = 40 mT/m, Smax = 150 T/m/s). This compared with 4.7% (41.2%) error for the unoptimized 3D cross (shells) trajectory. Incorporation of B0 robustness in the pulse design significantly altered the k-space trajectory solutions. Our joint gradient and RF optimization approach yields excellent excitation of 3D cube and brain shapes in less than 5 ms, which can be used for reduced field of view imaging and fat suppression in spectroscopy by excitation of the brain only. Magn Reson Med 76:1170-1182, 2016. © 2015 Wiley Periodicals, Inc.
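
    For context, the inner RF step solved at each trajectory iteration is a regularized small-tip-angle least-squares problem. The sketch below builds a standard spatial-domain system matrix (transmit sensitivities times k-space phase) and solves the normal equations; off-resonance terms, hardware constraints, and the authors' actual solver are omitted, and all array names are assumptions.

```python
import numpy as np

def small_tip_rf(target, positions, ktraj, sens, dt=4e-6,
                 gamma=2 * np.pi * 42.58e6, lam=1e-3):
    """Regularized least-squares small-tip-angle pTx pulse design for a fixed
    k-space trajectory (B0 off-resonance ignored).
    target: (Nvox,) complex desired transverse magnetization
    positions: (Nvox, 3) voxel coordinates; ktraj: (Nt, 3) trajectory samples
    sens: (Nchan, Nvox) complex transmit sensitivities."""
    phase = np.exp(1j * positions @ ktraj.T)                  # (Nvox, Nt)
    A = np.hstack([(1j * gamma * dt) * (sens[c][:, None] * phase)
                   for c in range(sens.shape[0])])            # (Nvox, Nchan*Nt)
    # Tikhonov-regularized normal equations for all channels' RF samples
    b = np.linalg.solve(A.conj().T @ A + lam * np.eye(A.shape[1]),
                        A.conj().T @ target)
    return b.reshape(sens.shape[0], -1)                       # (Nchan, Nt)
```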

  4. New 5 kW free-piston Stirling space convertor developments

    NASA Astrophysics Data System (ADS)

    Brandhorst, Henry W., Jr.; Chapman, Peter A., Jr.

    2008-07-01

    The NASA Vision for Exploration of the moon may someday require a nuclear reactor coupled with a free-piston Stirling convertor at a power level of 30-40 kW. In the 1990s, Mechanical Technology Inc.'s Stirling Engine Systems Division (some of whose Stirling personnel are now at Foster-Miller, Inc.) developed a 25 kW free-piston Stirling Space Power Demonstrator Engine under the SP-100 program. This system consisted of two 12.5 kW engines connected at their hot ends and mounted in tandem to cancel vibration. Recently, NASA and DoE have been developing dual 55 and 80 W Stirling convertor systems for potential use with radioisotope heat sources. Total test times of all convertors in this effort exceed 120,000 h. Recently, NASA began a new project with Auburn University to develop a 5 kW, single convertor for potential use in a lunar surface reactor power system. Goals of this development program include a specific power in excess of 140 W/kg at the convertor level, lifetime in excess of five years and a control system that will safely manage the convertors in case of an emergency. Auburn University awarded a subcontract to Foster-Miller, Inc. to undertake development of the 5 kW Stirling convertor assembly. The characteristics of the design along with progress in developing the system will be described.

  5. Rapid Computation of Thermodynamic Properties over Multidimensional Nonbonded Parameter Spaces Using Adaptive Multistate Reweighting.

    PubMed

    Naden, Levi N; Shirts, Michael R

    2016-04-12

    We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free
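
    The core trick, expressing the potential energy at any parameter set as a linear combination of precomputed basis energies, can be illustrated with the simplest single-state reweighting estimator (Zwanzig exponential averaging); the MBAR estimator used in the paper generalizes this to all sampled states at once. Array names and shapes here are assumptions.

```python
import numpy as np

def reweighted_average(obs, u_basis, c_sampled, c_target, beta=1.0):
    """Estimate <obs> at an unsampled parameter set by exponential reweighting.
    obs: (N,) observable values for N sampled configurations
    u_basis: (K, N) basis-function energies for those configurations
    c_sampled, c_target: (K,) basis coefficients of the sampled and target states."""
    dU = (np.asarray(c_target) - np.asarray(c_sampled)) @ u_basis  # energy change per frame
    w = np.exp(-beta * (dU - dU.min()))                            # shifted for stability
    w /= w.sum()
    return np.dot(w, obs)
```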

  6. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    PubMed

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses the k-means clustering and the triangle inequality to accelerate the searching for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces.
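
    A condensed sketch of the two stages, a k-means build-up phase plus triangle-inequality pruning during search, is given below; it keeps only the essential bound |d(q,c) - d(p,c)| <= d(q,p) and omits the published algorithm's cluster-ordering and early-termination refinements.

```python
import numpy as np
from sklearn.cluster import KMeans

class KMkNNSketch:
    """Simplified kMkNN: k-means preprocessing plus triangle-inequality pruning."""

    def __init__(self, X, n_clusters=32):
        self.X = X
        km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(X)
        self.centroids, self.labels = km.cluster_centers_, km.labels_
        # distance of every training point to its own centroid, computed once
        self.d_pc = np.linalg.norm(X - self.centroids[self.labels], axis=1)

    def query(self, q, k=5):
        d_qc = np.linalg.norm(self.centroids - q, axis=1)
        best = []                                        # (distance, index), k smallest kept
        for c in np.argsort(d_qc):                       # nearest cluster first
            for i in np.where(self.labels == c)[0]:
                bound = abs(d_qc[c] - self.d_pc[i])      # triangle-inequality lower bound
                if len(best) == k and bound >= best[-1][0]:
                    continue                             # cannot beat the current k-th neighbour
                best.append((np.linalg.norm(self.X[i] - q), i))
                best = sorted(best)[:k]
        return best
```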

  7. A Study of the Tablet Computer's Application in K-12 Schools in China

    ERIC Educational Resources Information Center

    Long, Taotao; Liang, Wenxin; Yu, Shengquan

    2013-01-01

    As an emerging mobile terminal, the tablet computer has begun to enter into the educational system. With the aim of having a better understanding of the application and people's perspectives on the new technology in K-12 schools in China, a survey was conducted to investigate the tablet computer's application, user's perspectives and requirements…

  8. A Computing Method for Sound Propagation Through a Nonuniform Jet Stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    Understanding the principles of jet noise propagation is an essential ingredient of systematic noise reduction research. High speed computer methods offer a unique potential for dealing with complex real life physical systems whereas analytical solutions are restricted to sophisticated idealized models. The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions and a more suitable approach was needed. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables requiring the solution of a system of nonlinear algebraic equations. The nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed with a special emphasis on its suitability for numerical solutions on fast computers. The technique is very attractive because the resulting equations are linear in nonwaving variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means which are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.

  9. Resource Handbook--Space Beyond the Earth. A Supplement to Basic Curriculum Guide--Science, Grades K-6.

    ERIC Educational Resources Information Center

    Starr, John W., 3rd., Ed.

    GRADES OR AGES: Grades K-6. SUBJECT MATTER: Science; space. ORGANIZATION AND PHYSICAL APPEARANCE: The guide is divided into four units: 1) the sun, earth, and moon; 2) stars and planets; 3) exploring space; 4) man's existence in space. Each unit includes initiatory and developmental activities. There are also sections on evaluation, vocabulary,…

  10. Computational methods in drug discovery

    PubMed Central

    Leelananda, Sumudu P

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341

  11. Computational methods in drug discovery.

    PubMed

    Leelananda, Sumudu P; Lindert, Steffen

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  12. Reconfigurable Computing Concepts for Space Missions: Universal Modular Spares

    NASA Technical Reports Server (NTRS)

    Patrick, M. Clinton

    2007-01-01

    Computing hardware for control, data collection, and other purposes will prove to be crucial resources many times over in NASA's upcoming space missions. The ability to provide these resources within mission payload requirements, with the hardiness to operate for extended periods under potentially harsh conditions in off-World environments, is daunting enough without considering the possibility of doing so with conventional electronics. This paper examines some ideas and options, and proposes some initial approaches, for the logical design of reconfigurable computing resources offering true modularity, universal compatibility, and unprecedented flexibility to service all forms and needs of mission infrastructure.

  13. Calbindins decreased after space flight

    NASA Technical Reports Server (NTRS)

    Sergeev, I. N.; Rhoten, W. B.; Carney, M. D.

    1996-01-01

    Exposure of the body to microgravity during space flight causes a series of well-documented changes in Ca2+ metabolism, yet the cellular and molecular mechanisms leading to these changes are poorly understood. Calbindins, vitamin D-dependent Ca2+ binding proteins, are believed to have a significant role in maintaining cellular Ca2+ homeostasis. In this study, we used biochemical and immunocytochemical approaches to analyze the expression of calbindin-D28k and calbindin-D9k in kidneys, small intestine, and pancreas of rats flown for 9 d aboard the space shuttle. The effects of microgravity on calbindins in rats from space were compared with synchronous Animal Enclosure Module controls, modeled weightlessness animals (tail suspension), and their controls. Exposure to microgravity resulted in a significant and sustained decrease in calbindin-D28k content in the kidney and calbindin-D9k in the small intestine of flight animals, as measured by enzyme-linked immunosorbent assay (ELISA). Modeled weightlessness animals exhibited a similar decrease in calbindins by ELISA. Immunocytochemistry (ICC) in combination with quantitative computer image analysis was used to measure in situ the expression of calbindins in the kidney and the small intestine, and the expression of insulin in pancreas. There was a large decrease of immunoreactivity in renal distal tubular cell-associated calbindin-D28k and in intestinal absorptive cell-associated calbindin-D9k of space flight and modeled weightlessness animals compared with matched controls. No consistent difference in pancreatic insulin immunoreactivity between space flight, modeled weightlessness, and controls was observed. Regression analysis of results obtained by quantitative ICC and ELISA for space flight, modeled weightlessness animals, and their controls demonstrated a significant correlation. These findings after a short-term exposure to microgravity or modeled weightlessness suggest that a decreased expression of calbindins

  14. Computing Nash equilibria through computational intelligence methods

    NASA Astrophysics Data System (ADS)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
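
    As a concrete instance of this formulation, the sketch below minimizes a standard nonnegative regret function whose zeros are Nash equilibria of a bimatrix game, using SciPy's differential evolution (one of the three techniques compared in the paper). The game, simplex projection, and tolerances are illustrative choices.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Matching pennies: the unique equilibrium mixes both pure strategies 50/50
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
B = -A                                      # column player's payoffs

def nash_error(z):
    """Nonnegative function whose global minima (value 0) are Nash equilibria."""
    x = np.abs(z[:2]); x /= x.sum()         # project onto the probability simplices
    y = np.abs(z[2:]); y /= y.sum()
    u_row, u_col = x @ A @ y, x @ B @ y
    g_row = np.maximum(A @ y - u_row, 0.0)  # gain from deviating to each pure row strategy
    g_col = np.maximum(x @ B - u_col, 0.0)  # gain from deviating to each pure column strategy
    return np.sum(g_row**2) + np.sum(g_col**2)

res = differential_evolution(nash_error, bounds=[(1e-6, 1.0)] * 4, seed=0, tol=1e-12)
x = np.abs(res.x[:2]) / np.abs(res.x[:2]).sum()
print("row strategy ~", x, "objective ~", res.fun)
```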

  15. Status of 20 kHz space station power distribution technology

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.

    1988-01-01

    Power Distribution on the NASA Space Station will be accomplished by a 20 kHz sinusoidal, 440 VRMS, single phase system. In order to minimize both system complexity and the total power conversion steps required, high frequency power will be distributed end-to-end in the system. To support the final design of flight power system hardware, advanced development and demonstrations have been made on key system technologies and components. The current status of this program is discussed.

  16. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, Mafalda; Seery, David; Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.

  17. A 100 kW-Class Technology Demonstrator for Space Solar Power

    NASA Technical Reports Server (NTRS)

    Carrington, Connie; Howell, Joe; Day, Greg

    2004-01-01

    A first step in the development of solar power from space is the flight demonstration of critical technologies. These fundamental technologies include efficient solar power collection and generation, power management and distribution, and thermal management. In addition, the integration and utilization of these technologies into a viable satellite bus could provide an energy-rich platform for a portfolio of payload experiments such as wireless power transmission (WPT). This paper presents the preliminary design of a concept for a 100 kW-class free-flying platform suitable for flight demonstration of technology experiments. Recent space solar power (SSP) studies by NASA have taken a stepping-stones approach that leads to the gigawatt systems necessary to cost-effectively deliver power from space. These steps start with a 100 kW-class satellite, leading to a 500 kW and then a 1 MW-class platform. Later steps develop a 100 MW bus that could eventually lead to a 1-2 GW pilot plant for SSP. Our studies have shown that a modular approach is cost effective. Modular designs include individual laser-power-beaming satellites that fly in constellations or that are autonomously assembled into larger structures at geosynchronous orbit (GEO). Microwave power-beamed approaches are also modularized into large numbers of identical units of solar arrays, power converters, or supporting structures for arrays and microwave transmitting antennas. A cost-effective approach to launching these modular units is to use existing Earth-to-orbit (ETO) launch systems, in which the modules are dropped into low Earth orbit (LEO) and then the modules perform their own orbit transfer to GEO using expendable solar arrays to power solar electric thrusters. At GEO, the modules either rendezvous and are assembled robotically into larger platforms, or are deployed into constellations of identical laser power-beaming satellites. Since solar electric propulsion by the modules is cost-effective for both

  18. From Discrete Space-Time to Minkowski Space: Basic Mechanisms, Methods and Perspectives

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    This survey article reviews recent results on fermion systems in discrete space-time and corresponding systems in Minkowski space. After a basic introduction to the discrete setting, we explain a mechanism of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure. As methods to study the transition between discrete space-time and Minkowski space, we describe a lattice model for a static and isotropic space-time, outline the analysis of regularization tails of vacuum Dirac sea configurations, and introduce a Lorentz invariant action for the masses of the Dirac seas. We mention the method of the continuum limit, which allows the analysis of interacting systems. Open problems are discussed.

  19. Danger zone analysis using cone beam computed tomography after apical enlargement with K3 and K3XF in a manikin model

    PubMed Central

    Olivier, Juan-Gonzalo; García-Font, Marc; Gonzalez-Sanchez, Jose-Antonio; Roig-Cayon, Miguel

    2016-01-01

    Background The objective of the study was to evaluate and compare how apical enlargement with K3 and K3XF nickel-titanium (NiTi) rotary instruments reduces the root thickness in the danger zone and affects canal transportation and centering ability in mandibular molar mesial canals in a manikin extracted tooth model. Material and Methods Seventy-two mesial root canals of first mandibular molars were instrumented. Initial and post-instrumentation Cone Beam Computed Tomography scans were performed after root canal preparation up to size 25, 30, 35 and 40 files. Canal transportation, canal centering and remaining root dentin thickness toward the danger zone were calculated in sections 1, 2 and 3 mm under the furcation level. Data were analyzed using non-parametric Kruskal-Wallis analysis of variance at a significance level of P < 0.05. Results K3 instruments removed more dentin toward the danger zone compared with K3XF instruments (P < 0.05) and significant differences in dentin thickness were found when canal enlargement was performed to a #35-40 with both systems (P < 0.05). No significant differences in canal transportation and centering ability were found between systems, except when canal enlargement was performed to a #40 (P = 0.0136). No differences were observed when comparing the number of uses in both systems (P > 0.05). Conclusions Under the conditions of this study K3 removed a significant amount of dentin at the furcation level compared with the R-Phase K3XF rotary system in curved root canals. Enlargement to a 35-40/04 file removed significantly more dentin with both systems. Key words: K3, K3XF, R-phase, center ability, canal transportation, dentin thickness, increased apical enlargement, danger zone. PMID:27703602

  20. Characterization of the 300 K and 700 K Calibration Sources for Space Application with the Bepicolombo Mission to Mercury

    NASA Astrophysics Data System (ADS)

    Gutschwager, B.; Driescher, H.; Herrmann, J.; Hirsch, H.; Hollandt, J.; Jahn, H.; Kuchling, P.; Monte, C.; Scheiding, M.

    2011-08-01

    The Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) onboard the European-Japanese space mission BepiColombo to Mercury will be launched in 2014. The MERTIS scientific objective is to identify rock-forming minerals and measure surface temperatures by infrared spectroscopy (7 μm to 14 μm) and spectrally unresolved infrared radiometry (7 μm to 40 μm). To achieve this goal, MERTIS utilizes two onboard infrared calibration sources, the MERTIS blackbody at 700 K (MBB7) and the MERTIS blackbody at 300 K (MBB3), together with deep space observations corresponding to 3 K. All three sources can be observed one after the other using a rotating mirror system. The MERTIS project is led by Prof. Dr. H. Hiesinger (PI) of the institute for planetary investigation at the Westfälische University of Münster and by Dr. J. Helbert (Co-PI) of the DLR Institute of Planetary Research, Berlin-Adlershof. Both blackbody radiators have to fulfill the severe mass, volume, and power restrictions of MERTIS. The radiating area of the MBB3 is based on a structured surface with a high-emissivity space qualified coating. The relatively high emissivity of the coating was further enhanced by a pyramidal surface structure to values over 0.99 in the wavelength range from 5 μm to 10 μm and over 0.95 in the wavelength range from 10 μm to 30 μm. The MBB7 is based on a small commercially available surface emitter in a standard housing. The windowless emitter is an electrically heated resistor, which consists of a platinum structure with a blackened surface on a ceramic body. The radiation of the emitter is expanded and collimated through use of a parabolic mirror. The design requirements and the radiometric and thermometric characterization of these two blackbodies are described in this paper.

  1. Computed tomographic venography for varicose veins of the lower extremities: prospective comparison of 80-kVp and conventional 120-kVp protocols.

    PubMed

    Cho, Eun-Suk; Kim, Joo Hee; Kim, Sungjun; Yu, Jeong-Sik; Chung, Jae-Joon; Yoon, Choon-Sik; Lee, Hyeon-Kyeong; Lee, Kyung Hee

    2012-01-01

    To prospectively investigate the feasibility of an 80-kilovolt (peak) (kVp) protocol in computed tomographic venography for varicose veins of the lower extremities by comparison with conventional 120-kVp protocol. Attenuation values and signal-to-noise ratio of iodine contrast medium (CM) were determined in a water phantom for 2 tube voltages (80 kVp and 120 kVp). Among 100 patients, 50 patients were scanned with 120 kVp and 150 effective milliampere second (mAs(eff)), and the other 50 patients were scanned with 80 kVp and 390 mAs(eff) after the administration of 1.7-mL/kg CM (370 mg of iodine per milliliter). The 2 groups were compared for venous attenuation, contrast-to-noise ratio, and subjective degree of venous enhancement, image noise, and overall diagnostic image quality. In the phantom, the attenuation value and signal-to-noise ratio value for iodine CM at 80 kVp were 63.8% and 33.0% higher, respectively, than those obtained at 120 kVp. The mean attenuation of the measured veins of the lower extremities was 148.3 Hounsfield units (HU) for the 80-kVp protocol and 94.8 HU for the 120-kVp protocol. Contrast-to-noise ratio was also significantly higher with the 80-kVp protocol. The overall diagnostic image quality of the 3-dimensional volume-rendered images was good with both protocols. The subjective score for venous enhancement was higher at the 80-kVp protocol. The mean volume computed tomography dose index of the 80-kVp (5.6 mGy) protocol was 23.3% lower than that of the 120-kVp (7.3 mGy) protocol. The use of the 80-kVp protocol improved overall venous attenuation, especially in perforating vein, and provided similarly high diagnostic image quality with a lower radiation dose when compared to the conventional 120-kVp protocol.

  2. Reduction of respiratory ghosting motion artifacts in conventional two-dimensional multi-slice Cartesian turbo spin-echo: which k-space filling order is the best?

    PubMed

    Inoue, Yuuji; Yoneyama, Masami; Nakamura, Masanobu; Takemura, Atsushi

    2018-06-01

    The two-dimensional Cartesian turbo spin-echo (TSE) sequence is widely used in routine clinical studies, but it is sensitive to respiratory motion. We investigated the k-space orders in Cartesian TSE that can effectively reduce motion artifacts. The purpose of this study was to demonstrate the relationship between k-space order and degree of motion artifacts using a moving phantom. We compared the degree of motion artifacts between linear and asymmetric k-space orders. The actual spacing of ghost artifacts in the asymmetric order was doubled compared with that in the linear order in the free-breathing situation. The asymmetric order clearly showed less sensitivity to incomplete breath-hold at the latter half of the imaging period. Because of the actual number of partitions of the k-space and the temporal filling order, the asymmetric k-space order of Cartesian TSE was superior to the linear k-space order for reduction of ghosting motion artifacts.

  3. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to the type of unsteady flow: attached, mixed (attached/separated), and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  4. Using computer graphics to design Space Station Freedom viewing

    NASA Technical Reports Server (NTRS)

    Goldsberry, Betty S.; Lippert, Buddy O.; Mckee, Sandra D.; Lewis, James L., Jr.; Mount, Francis E.

    1993-01-01

    Viewing requirements were identified early in the Space Station Freedom program for both direct viewing via windows and indirect viewing via cameras and closed-circuit television (CCTV). These requirements reside in NASA Program Definition and Requirements Document (PDRD), Section 3: Space Station Systems Requirements. Currently, analyses are addressing the feasibility of direct and indirect viewing. The goal of these analyses is to determine the optimum locations for the windows, cameras, and CCTV's in order to meet established requirements, to adequately support space station assembly, and to operate on-board equipment. PLAID, a three-dimensional computer graphics program developed at NASA JSC, was selected for use as the major tool in these analyses. PLAID provides the capability to simulate the assembly of the station as well as to examine operations as the station evolves. This program has been used successfully as a tool to analyze general viewing conditions for many Space Shuttle elements and can be used for virtually all Space Station components. Additionally, PLAID provides the ability to integrate an anthropometric scale-modeled human (representing a crew member) with interior and exterior architecture.

  5. Equivariant K3 invariants

    DOE PAGES

    Cheng, Miranda C. N.; Duncan, John F. R.; Harrison, Sarah M.; ...

    2017-01-01

    In this note, we describe a connection between the enumerative geometry of curves in K3 surfaces and the chiral ring of an auxiliary superconformal field theory. We consider the invariants calculated by Yau–Zaslow (capturing the Euler characters of the moduli spaces of D2-branes on curves of given genus), together with their refinements to carry additional quantum numbers by Katz–Klemm–Vafa (KKV), and Katz–Klemm–Pandharipande (KKP). We show that these invariants can be reproduced by studying the Ramond ground states of an auxiliary chiral superconformal field theory which has recently been observed to give rise to mock modular moonshine for a variety of sporadic simple groups that are subgroups of Conway's group. We also study equivariant versions of these invariants. A K3 sigma model is specified by a choice of 4-plane in the K3 D-brane charge lattice. Symmetries of K3 sigma models are naturally identified with 4-plane preserving subgroups of the Conway group, according to the work of Gaberdiel–Hohenegger–Volpato, and one may consider corresponding equivariant refined K3 Gopakumar–Vafa invariants. The same symmetries naturally arise in the auxiliary CFT state space, affording a suggestive alternative view of the same computation. We comment on a lift of this story to the generating function of elliptic genera of symmetric products of K3 surfaces.

  6. A simple method for predicting solar fractions of IPH and space heating systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauhan, R.; Goodling, J.S.

    1982-01-01

    In this paper, a method has been developed to evaluate the solar fractions of liquid based industrial process heat (IPH) and space heating systems, without the use of computer simulations. The new method is the result of joining two theories, Lunde's equation to determine monthly performance of solar heating systems and the utilizability correlations of Collares-Pereira and Rabl by making appropriate assumptions. The new method requires the input of the monthly averages of the utilizable radiation and the collector operating time. These quantities are determined conveniently by the method of Collares-Pereira and Rabl. A comparison of the results of the new method with the most acceptable design methods shows excellent agreement.

  7. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  8. Finding Planets in K2: A New Method of Cleaning the Data

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Mullally, Fergal; Thompson, Susan E.

    2017-01-01

    We present a new method of removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes each light curve into its principal components (eigenvectors), each with an associated eigenvalue, the value of which is correlated with how much influence the basis vector has on the shape of the light curve. This method assumes that the most influential basis vectors will correspond to the unwanted systematic variations in the light curve produced by K2’s constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components. The strongest principal components generally correspond to the flux variations that result from the motion of the star in the field of view. Our primary method for determining how many principal components to remove from the raw light curve estimates the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which yields the combined photometric precision value (SG-CDPP value) used in classic Kepler. We calculate this value after correcting the raw light curve for each element in a list of cumulative sums of principal components so that we have as many noise estimate values as there are principal components. We then take the derivative of the list of SG-CDPP values and take the number of principal components that corresponds to the point at which the derivative effectively goes to zero. This is the optimal number of principal components to exclude from the refitting of the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2’s light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
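
    A stripped-down version of the correction step might look like the following: take the principal components of the mean-subtracted pixel time series via an SVD, fit the strongest few to the aperture light curve, and subtract the fit. Choosing the number of components from the flattening of the SG-CDPP derivative, as described above, is left out, and all names are placeholders.

```python
import numpy as np

def pca_detrend(pixel_flux, raw_lc, n_components):
    """Remove the strongest pixel-level principal components from a light curve.
    pixel_flux: (n_cadences, n_pixels) pixel time series
    raw_lc: (n_cadences,) aperture-summed light curve."""
    X = pixel_flux - pixel_flux.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = X @ Vt[:n_components].T                     # strongest temporal components
    design = np.column_stack([basis, np.ones(len(raw_lc))])
    coeffs, *_ = np.linalg.lstsq(design, raw_lc, rcond=None)
    return raw_lc - design[:, :-1] @ coeffs[:-1]        # subtract fitted systematics only
```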

  9. Phase space methods in HMD systems

    NASA Astrophysics Data System (ADS)

    Babington, James

    2017-06-01

    We consider using phase space techniques and methods in analysing optical ray propagation in head mounted display systems. Two examples are considered that illustrate the concepts and methods: first, a shark tooth freeform geometry, and second, a waveguide geometry that replicates a pupil in one dimension. Classical optics, and imaging in particular, provide a natural stage to employ phase space techniques, albeit as a constrained system. We consider how phase space provides a global picture of the physical ray trace data. As such, this gives a complete optical world history of all of the rays propagating through the system. Using this data one can look at, for example, how aberrations arise on a surface by surface basis. These can be extracted numerically from phase space diagrams in the example of a freeform imaging prism. For the waveguide geometry, phase space diagrams provide a way of illustrating how replicated pupils behave and what these imply for design considerations such as tolerances.

  10. Space station thermal control surfaces. [space radiators

    NASA Technical Reports Server (NTRS)

    Maag, C. R.; Millard, J. M.; Jeffery, J. A.; Scott, R. R.

    1979-01-01

    Mission planning documents were used to analyze the radiator design and thermal control surface requirements for both space station and 25-kW power module, to analyze the missions, and to determine the thermal control technology needed to satisfy both sets of requirements. Parameters such as thermal control coating degradation, vehicle attitude, self eclipsing, variation in solar constant, albedo, and Earth emission are considered. Four computer programs were developed which provide a preliminary design and evaluation tool for active radiator systems in LEO and GEO. Two programs were developed as general programs for space station analysis. Both types of programs find the radiator-flow solution and evaluate external heat loads in the same way. Fortran listings are included.

  11. Swellix: a computational tool to explore RNA conformational space.

    PubMed

    Sloat, Nathan; Liu, Jui-Wen; Schroeder, Susan J

    2017-11-21

    The sequence of nucleotides in an RNA determines the possible base pairs for an RNA fold and thus also determines the overall shape and function of an RNA. The Swellix program presented here combines a helix abstraction with a combinatorial approach to the RNA folding problem in order to compute all possible non-pseudoknotted RNA structures for RNA sequences. The Swellix program builds on the Crumple program and can include experimental constraints on global RNA structures such as the minimum number and lengths of helices from crystallography, cryoelectron microscopy, or in vivo crosslinking and chemical probing methods. The conceptual advance in Swellix is to count helices and generate all possible combinations of helices rather than counting and combining base pairs. Swellix bundles similar helices and includes improvements in memory use and efficient parallelization. Biological applications of Swellix are demonstrated by computing the reduction in conformational space and entropy due to naturally modified nucleotides in tRNA sequences and by motif searches in Human Endogenous Retroviral (HERV) RNA sequences. The Swellix motif search reveals occurrences of protein and drug binding motifs in the HERV RNA ensemble that do not occur in minimum free energy or centroid predicted structures. Swellix presents significant improvements over Crumple in terms of efficiency and memory use. The efficient parallelization of Swellix enables the computation of sequences as long as 418 nucleotides with sufficient experimental constraints. Thus, Swellix provides a practical alternative to free energy minimization tools when multiple structures, kinetically determined structures, or complex RNA-RNA and RNA-protein interactions are present in an RNA folding problem.

  12. 29 CFR 548.500 - Methods of computation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Computation of Overtime Pay § 548.500 Methods of computation. The methods of computing overtime pay on the basic rates for piece... pay at the regular rate. Example 1. Under an employment agreement the basic rate to be used in...

  13. Fast, exact k-space sample density compensation for trajectories composed of rotationally symmetric segments, and the SNR-optimized image reconstruction from non-Cartesian samples.

    PubMed

    Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J

    2008-08-01

    A recently developed method for exact density compensation of nonuniformly arranged samples relies on the analytically known cross-correlations of Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear system-based density compensation approaches quickly become computationally demanding with increasing number of samples (i.e., image resolution). Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlation method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated noniteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, up to a 10% reduction in reconstruction error concurrently with an up to 30% increase in signal-to-noise ratio is achieved compared to standard density compensation methods. (c) 2008 Wiley-Liss, Inc.
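
    The computational crux above is that rotational symmetry of the interleaves makes the cross-correlation system block-circulant, so a DFT over the interleave index splits it into small independent systems. A hedged sketch of that block-diagonalization follows (a generic block-circulant solver, not the paper's implementation; the DFT sign convention may need flipping for a particular block ordering).

      import numpy as np

      def solve_block_circulant(blocks, b):
          # blocks: (M, N, N) first block-column [C_0, ..., C_{M-1}] of a block-circulant matrix
          # b:      (M, N) right-hand side, one segment per interleave
          M = blocks.shape[0]
          blocks_hat = np.fft.fft(blocks, axis=0)   # DFT over the interleave (block) index
          b_hat = np.fft.fft(b, axis=0)
          # the M*N coupled system splits into M independent N x N systems
          w_hat = np.stack([np.linalg.solve(blocks_hat[m], b_hat[m]) for m in range(M)])
          return np.real_if_close(np.fft.ifft(w_hat, axis=0))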

  14. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) of simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  15. Computation of an Underexpanded 3-D Rectangular Jet by the CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Himansu, Ananda; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2000-01-01

    Recently, an unstructured three-dimensional space-time conservation element and solution element (CE/SE) Euler solver was developed. It has now also been developed for parallel computation using METIS for domain decomposition and MPI (message passing interface). The method is employed here to numerically study the near-field of a typical 3-D rectangular under-expanded jet. For the computed case, a jet with Mach number Mj = 1.6, with a very modest grid of 1.7 million tetrahedrons, the flow features, such as the shock-cell structures and the axis switching, are in good qualitative agreement with experimental results.

  16. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the study of flow physics. The presentation gives historical precedents for the above research and speculates on its future course.

  17. Interactive computer graphics and its role in control system design of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.

    1985-01-01

    This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model, such as modeling the dynamics, modal analysis, and control system design methodology, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.

  18. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
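
    The multiplicative-update idea above is close in spirit to generic flat-histogram sampling, so a short sketch of that family may clarify the mechanics; the code below follows the standard Wang-Landau-type recipe rather than the authors' exact self-consistent scheme, and every argument (energy_fn, propose, the flatness test) is a placeholder assumption.

      import numpy as np

      def log_density_of_states(energy_fn, propose, x0, e_range, n_bins=100,
                                log_f0=1.0, log_f_min=1e-6, sweeps=10000):
          # assumes energy_fn(x0) lies inside e_range; refinement halves log_f until log_f_min
          e_min, e_max = e_range
          log_g = np.zeros(n_bins)                 # running estimate of ln g(E)
          hist = np.zeros(n_bins)
          bin_of = lambda e: min(int((e - e_min) / (e_max - e_min) * n_bins), n_bins - 1)

          x, e, log_f = x0, energy_fn(x0), log_f0
          while log_f > log_f_min:
              for _ in range(sweeps):
                  x_new = propose(x)
                  e_new = energy_fn(x_new)
                  if e_min <= e_new < e_max:
                      # accept with probability g(E_old)/g(E_new): rarely visited energies are favored
                      if np.log(np.random.rand()) < log_g[bin_of(e)] - log_g[bin_of(e_new)]:
                          x, e = x_new, e_new
                  log_g[bin_of(e)] += log_f        # multiplicative update of g(E), additive in log space
                  hist[bin_of(e)] += 1
              if hist.min() > 0.8 * hist.mean():   # rough flatness criterion
                  log_f *= 0.5
                  hist[:] = 0
          return log_g - log_g.max()               # ln g(E), determined up to an additive constant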

  19. A theoretical method for selecting space craft and space suit atmospheres.

    PubMed

    Vann, R D; Torre-Bueno, J R

    1984-12-01

    A theoretical method for selecting space craft and space suit atmospheres assumes that gas bubbles cause decompression sickness and that the risk increases when a critical bubble volume is exceeded. The method is consistent with empirical decompression exposures for humans under conditions of nitrogen equilibrium between the lungs and tissues. Space station atmospheres are selected so that flight crews may decompress immediately from sea level to station pressure without preoxygenation. Bubbles form as a result of this decompression but are less than the critical volume. The bubbles are absorbed during an equilibration period after which immediate transition to suit pressure is possible. Exercise after decompression and incomplete nitrogen equilibrium are shown to increase bubble size, and limit the usefulness of one previously tested stage decompression procedure for the Shuttle. The method might be helpful for evaluating decompression procedures before testing.

  20. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, capability for computation of off-body and on-body streamlines, and capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  1. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Phased development plan

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  2. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Operations concept report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  3. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions (a...

  4. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions (a...

  5. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions (a...

  6. Description of a 2.3 kW power transformer for space applications

    NASA Technical Reports Server (NTRS)

    Hansen, I.

    1979-01-01

    The principal features and special testing of a high-voltage, high-power transformer designed and developed for space application are described. The transformer is operated in a series resonant inverter supplying beam power to a 30 cm mercury ion thruster. Electrical requirements include operation at 2.3 kW continuous power output, primary currents up to 35 amps rms, and frequencies up to 20 kHz. High efficiency was obtained through detailed consideration of the tradeoffs available in core materials, wire selection, coil configurations, and thermal control. A number of novel heat removal techniques are discussed which control the winding temperature using only the available conductive cooling.

  7. Protecting intellectual property in space; Proceedings of the Aerospace Computer Security Conference, McLean, VA, March 20, 1985

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary purpose of the Aerospace Computer Security Conference was to bring together people and organizations which have a common interest in protecting intellectual property generated in space. Operational concerns are discussed, taking into account security implications of the space station information system, Space Shuttle security policies and programs, potential uses of probabilistic risk assessment techniques for space station development, key considerations in contingency planning for secure space flight ground control centers, a systematic method for evaluating security requirements compliance, and security engineering of secure ground stations. Subjects related to security technologies are also explored, giving attention to processing requirements of secure C3/I and battle management systems and the development of the Gemini trusted multiple microcomputer base, the Restricted Access Processor system as a security guard designed to protect classified information, and observations on local area network security.

  8. Stellar model chromospheres. VIII - 70 Ophiuchi A /K0 V/ and Epsilon Eridani /K2 V/

    NASA Technical Reports Server (NTRS)

    Kelch, W. L.

    1978-01-01

    Model atmospheres for the late-type active-chromosphere dwarf stars 70 Oph A and Epsilon Eri are computed from high-resolution Ca II K line profiles as well as Mg II h and k line fluxes. A method is used which determines a plane-parallel homogeneous hydrostatic-equilibrium model of the upper photosphere and chromosphere which differs from theoretical models by lacking the constraint of radiative equilibrium (RE). The determinations of surface gravities, metallicities, and effective temperatures are discussed, and the computational methods, model atoms, atomic data, and observations are described. Temperature distributions for the two stars are plotted and compared with RE models for the adopted effective temperatures and gravities. The previously investigated T min/T eff vs. T eff relation is extended to Epsilon Eri and 70 Oph A, observed and computed Ca II K and Mg II h and k integrated emission fluxes are compared, and full tabulations are given for the proposed models. It is suggested that if less than half the observed Mg II flux for the two stars is lost in noise, the difference between an active-chromosphere star and a quiet-chromosphere star lies in the lower-chromospheric temperature gradient.

  9. Sub-domain decomposition methods and computational controls for multibody dynamical systems. [of spacecraft structures

    NASA Technical Reports Server (NTRS)

    Menon, R. G.; Kurdila, A. J.

    1992-01-01

    This paper presents a concurrent methodology to simulate the dynamics of flexible multibody systems with a large number of degrees of freedom. A general class of open-loop structures is treated and a redundant coordinate formulation is adopted. A range space method is used in which the constraint forces are calculated using a preconditioned conjugate gradient method. By using a preconditioner motivated by the regular ordering of the directed graph of the structures, it is shown that the method is order N in the total number of coordinates of the system. The overall formulation has the advantage that it permits fine parallelization and does not rely on system topology to induce concurrency. It can be efficiently implemented on the present generation of parallel computers with a large number of processors. Validation of the method is presented via numerical simulations of space structures incorporating a large number of flexible degrees of freedom.
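
    The range-space step described above amounts to solving a Schur-complement system (G M^-1 G^T) lambda = r for the constraint forces with preconditioned conjugate gradients. A generic matrix-free sketch is given below (SciPy); the operator arguments and the preconditioner are placeholders, not the authors' graph-ordered preconditioner.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      def constraint_multipliers(M_inv_apply, G, rhs, precond_apply):
          # Solve (G M^-1 G^T) lambda = rhs without forming the Schur complement explicitly.
          m = G.shape[0]
          A = LinearOperator((m, m), matvec=lambda lam: G @ M_inv_apply(G.T @ lam))
          P = LinearOperator((m, m), matvec=precond_apply)   # stand-in for the topology-motivated preconditioner
          lam, info = cg(A, rhs, M=P)
          if info != 0:
              raise RuntimeError("PCG failed to converge")
          return lam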

  10. A computational approach for hypersonic nonequilibrium radiation utilizing space partition algorithm and Gauss quadrature

    NASA Astrophysics Data System (ADS)

    Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.

    2014-06-01

    An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm based on a nearest neighbor search, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.

  11. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield fast and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
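
    Space mapping couples the expensive finite element model with the cheap coupled-coil surrogate inside an optimization loop. A minimal output space-mapping loop is sketched below; the abstract does not specify this particular variant, and fine_model / coarse_model are placeholder callables standing in for the FE stray-field computation and the analytical circuit model.

      import numpy as np
      from scipy.optimize import minimize

      def space_mapping_optimize(fine_model, coarse_model, x0, n_iter=10, tol=1e-4):
          # fine_model(x), coarse_model(x): stray-field responses to be driven toward zero
          x = np.asarray(x0, dtype=float)
          for _ in range(n_iter):
              f_fine = np.asarray(fine_model(x))             # one expensive FE evaluation per iteration
              shift = f_fine - np.asarray(coarse_model(x))   # align the surrogate with the fine model
              surrogate = lambda z, s=shift: np.sum((np.asarray(coarse_model(z)) + s) ** 2)
              res = minimize(surrogate, x, method="Nelder-Mead")  # cheap optimization of the shifted surrogate
              if np.linalg.norm(res.x - x) < tol * (np.linalg.norm(x) + 1.0):
                  return res.x
              x = res.x
          return x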

  12. Electric Propulsion Options for 10 kW Class Earth-Space Missions

    NASA Technical Reports Server (NTRS)

    Patterson, M. J.; Curran, Francis M.

    1989-01-01

    Five and 10 kW ion and arcjet propulsion system options for a near-term space demonstration experiment were evaluated. Analyses were conducted to determine first-order propulsion system performance and system component mass estimates. Overall mission performance of the electric propulsion systems was quantified in terms of the maximum thrusting time, total impulse, and velocity increment capability available when integrated onto a generic spacecraft under fixed mission model assumptions. Maximum available thrusting times for the ion-propelled spacecraft options, launched on a DELTA 2 6920 vehicle, range from approximately 8,600 hours for a 4-engine 10 kW system to more than 29,600 hours for a single-engine 5 kW system. Maximum total impulse values and maximum delta-v's range from 1.2×10^7 to 2.1×10^7 N-s, and 3550 to 6200 m/s, respectively. Maximum available thrusting times for the arcjet propelled spacecraft launched on the DELTA 2 6920 vehicle range from approximately 528 hours for the 6-engine 10 kW hydrazine system to 2328 hours for the single-engine 5 kW system. Maximum total impulse values and maximum delta-v's range from 2.2×10^6 to 3.6×10^6 N-s, and approximately 662 to 1072 m/s, respectively.

  13. Electric propulsion options for 10 kW class earth space missions

    NASA Technical Reports Server (NTRS)

    Patterson, M. J.; Curran, Francis M.

    1989-01-01

    Five and 10 kW ion and arcjet propulsion system options for a near-term space demonstration experiment have been evaluated. Analyses were conducted to determine first-order propulsion system performance and system component mass estimates. Overall mission performance of the electric propulsion systems was quantified in terms of the maximum thrusting time, total impulse, and velocity increment capability available when integrated onto a generic spacecraft under fixed mission model assumptions. Maximum available thrusting times for the ion-propelled spacecraft options, launched on a DELTA II 6920 vehicle, range from approximately 8,600 hours for a 4-engine 10 kW system to more than 29,600 hours for a single-engine 5 kW system. Maximum total impulse values and maximum delta-v's range from 1.2×10^7 to 2.1×10^7 N-s, and 3550 to 6200 m/s, respectively. Maximum available thrusting times for the arcjet propelled spacecraft launched on the DELTA II 6920 vehicle range from approximately 528 hours for the 6-engine 10 kW hydrazine system to 2328 hours for the single-engine 5 kW system. Maximum total impulse values and maximum delta-v's range from 2.2×10^6 to 3.6×10^6 N-s, and approximately 662 to 1072 m/s, respectively.

  14. Perspectives and Visions of Computer Science Education in Primary and Secondary (K-12) Schools

    ERIC Educational Resources Information Center

    Hubwieser, Peter; Armoni, Michal; Giannakos, Michail N.; Mittermeir, Roland T.

    2014-01-01

    In view of the recent developments in many countries, for example, in the USA and in the UK, it appears that computer science education (CSE) in primary or secondary schools (K-12) has reached a significant turning point, shifting its focus from ICT-oriented to rigorous computer science concepts. The goal of this special issue is to offer a…

  15. On the Hodge-type decomposition and cohomology groups of k-Cauchy-Fueter complexes over domains in the quaternionic space

    NASA Astrophysics Data System (ADS)

    Chang, Der-Chen; Markina, Irina; Wang, Wei

    2016-09-01

    The k-Cauchy-Fueter operator D_0^(k) on one-dimensional quaternionic space H is the Euclidean version of the spin-k/2 massless field operator on Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of the Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then, by using the theory of regular boundary value problems, we show the Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D_0^(k) u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ^1_(k)(Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite dimensional, while the second cohomology group is always trivial.

  16. 20 plus Years of Computational Fluid Dynamics for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Gomez, Reynaldo J., III

    2011-01-01

    This slide presentation reviews the use of computational fluid dynamics in analyses of the space shuttle, with particular reference to the return-to-flight analysis and other shuttle problems. Slides show a comparison of pressure coefficients for the shuttle ascent configuration between wind tunnel tests and computed values; the evolution of the grid system for the space shuttle launch vehicle (SSLV) from the early 1980s to 2004; the grid configuration of the bipod ramp redesign from the original design to the current configuration; charts comparing computed solid rocket booster surface pressures, calculated over two grid systems (the original 14-grid system and the enhanced 113-grid system), with wind tunnel data; and computed flight orbiter wing loads compared with strain gage data from STS-50 during flight. The loss of STS-107 initiated an unprecedented review of all external environments. The current SSLV grid system of 600+ grids, 1.8 million surface points, and 95+ million volume points is shown. The in-flight entry analyses are shown, and the use of overset CFD as a key part of many external tank redesign and debris assessments is discussed. The work that remains to be accomplished for future shuttle flights is discussed.

  17. A method for transferring NASTRAN data between dissimilar computers. [application to CDC 6000 series, IBM 360-370 series, and Univac 1100 series computers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1973-01-01

    The NASTRAN computer program is capable of executing on three different types of computers: (1) the CDC 6000 series, (2) the IBM 360-370 series, and (3) the Univac 1100 series. A typical activity requiring transfer of data between dissimilar computers is the analysis of a large structure such as the space shuttle by substructuring. Models of portions of the vehicle which have been analyzed by subcontractors using their computers must be integrated into a model of the complete structure by the prime contractor on his computer. Presently the transfer of NASTRAN matrices or tables between two different types of computers is accomplished by punched cards or a magnetic tape containing card images. These methods of data transfer do not satisfy the requirements for intercomputer data transfer associated with a substructuring activity. To provide a more satisfactory transfer of data, two new programs, RDUSER and WRTUSER, were created.

  18. k-space image correlation to probe the intracellular dynamics of gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Bouzin, M.; Sironi, L.; Chirico, G.; D'Alfonso, L.; Inverso, D.; Pallavicini, P.; Collini, M.

    2016-04-01

    The collective action of dynein, kinesin and myosin molecular motors is responsible for the intracellular active transport of cargoes, vesicles and organelles along the semi-flexible oriented filaments of the cytoskeleton. The overall mobility of the cargoes upon binding and unbinding to motor proteins can be modeled as an intermittency between Brownian diffusion in the cell cytoplasm and active ballistic excursions along actin filaments or microtubules. Such an intermittent intracellular active transport, exhibited by star-shaped gold nanoparticles (GNSs, Gold Nanostars) upon internalization in HeLa cancer cells, is investigated here by combining live-cell time-lapse confocal reflectance microscopy and the spatio-temporal correlation, in the reciprocal Fourier space, of the acquired image sequences. At first, the analytical theoretical framework for the investigation of a two-state intermittent dynamics is presented for Fourier-space Image Correlation Spectroscopy (kICS). Then simulated kICS correlation functions are employed to evaluate the influence of, and sensitivity to, all the kinetic and dynamic parameters the model involves (the transition rates between the diffusive and the active transport states, the diffusion coefficient and drift velocity of the imaged particles). The optimal procedure for the analysis of the experimental data is outlined and finally exploited to derive whole-cell maps for the parameters underlying the GNSs super-diffusive dynamics. Applied here to the GNSs subcellular trafficking, the proposed kICS analysis can be adopted for the characterization of the intracellular (super-) diffusive dynamics of any fluorescent or scattering biological macromolecule.
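
    The central quantity in a kICS analysis is the time correlation of the spatial Fourier transforms of the image series. A minimal sketch of that correlation is given below (ours, not the authors' full fitting procedure); for pure diffusion its magnitude decays roughly as exp(-D|k|^2 tau), while drift adds a phase factor exp(i k.v tau), which is what makes the reciprocal-space representation convenient for separating the two transport states.

      import numpy as np

      def kics_correlation(stack, max_lag):
          # stack: (n_frames, ny, nx) image time series
          n_frames = stack.shape[0]
          I_hat = np.fft.fft2(stack, axes=(1, 2))            # spatial Fourier transform of each frame
          r = np.empty((max_lag + 1,) + stack.shape[1:], dtype=complex)
          for tau in range(max_lag + 1):
              # time-averaged correlation < I_hat(k, t) * conj(I_hat(k, t + tau)) >
              r[tau] = np.mean(I_hat[: n_frames - tau] * np.conj(I_hat[tau:]), axis=0)
          return r / r[0]                                    # normalize by the zero-lag value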

  19. Comparison of FDTD numerical computations and analytical multipole expansion method for plasmonics-active nanosphere dimers.

    PubMed

    Dhawan, Anuj; Norton, Stephen J; Gerhold, Michael D; Vo-Dinh, Tuan

    2009-06-08

    This paper describes a comparative study of finite-difference time-domain (FDTD) and analytical evaluations of electromagnetic fields in the vicinity of dimers of metallic nanospheres of plasmonics-active metals. The results of these two computational methods, to determine electromagnetic field enhancement in the region often referred to as "hot spots" between the two nanospheres forming the dimer, were compared and a strong correlation observed for gold dimers. The analytical evaluation involved the use of the spherical-harmonic addition theorem to relate the multipole expansion coefficients between the two nanospheres. In these evaluations, the spacing between two nanospheres forming the dimer was varied to obtain the effect of nanoparticle spacing on the electromagnetic fields in the regions between the nanostructures. Gold and silver were the metals investigated in our work as they exhibit substantial plasmon resonance properties in the ultraviolet, visible, and near-infrared spectral regimes. The results indicate excellent correlation between the two computational methods, especially for gold nanosphere dimers with only a 5-10% difference between the two methods. The effect of varying the diameters of the nanospheres forming the dimer, on the electromagnetic field enhancement, was also studied.

  20. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic

  1. Social network analysis using k-Path centrality method

    NASA Astrophysics Data System (ADS)

    Taniarza, Natya; Adiwijaya; Maharani, Warih

    2018-03-01

    k-Path centrality is deemed one of the effective methods for centrality measurement, in which an influential node is estimated as a node that is frequently passed by information paths. In this paper, k-Path centrality is employed, adapting a random-algorithm approach, in order to: (1) determine the ranking of influential users in the social medium Twitter; and (2) ascertain the influence of the parameter α in the computation of k-Path centrality. The findings showed that the k-Path centrality method with the random-algorithm approach can be used to determine the ranking of users who influence the dissemination of information in Twitter. Furthermore, the findings also showed that the parameter α influenced the duration and the ranking results: the smaller the α value, the longer the duration, yet the more stable the ranking results.
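
    As a concrete illustration of the randomized estimation the abstract relies on, the sketch below repeatedly walks random simple paths of length at most k and counts node visits (NetworkX). It is a simplified stand-in for the published random-approximation algorithm, and the role given to alpha here, setting only the number of walks, is our assumption. Sorting the returned scores in descending order then yields the influence ranking.

      import random
      import networkx as nx

      def k_path_centrality(G, k=5, alpha=0.2, n_walks=None):
          # Randomized visit-count estimate of k-path centrality (illustrative sketch).
          nodes = list(G.nodes())
          counts = {v: 0 for v in nodes}
          if n_walks is None:
              n_walks = int(len(nodes) * k / max(alpha, 1e-3))   # alpha trades accuracy for run time
          for _ in range(n_walks):
              v = random.choice(nodes)
              visited = {v}
              for _ in range(k):
                  nbrs = [u for u in G.neighbors(v) if u not in visited]
                  if not nbrs:
                      break
                  v = random.choice(nbrs)
                  visited.add(v)
                  counts[v] += 1
          return {v: c / n_walks for v, c in counts.items()}

      # example: rank nodes of a small benchmark graph by estimated k-path centrality
      scores = k_path_centrality(nx.karate_club_graph(), k=5)
      ranking = sorted(scores, key=scores.get, reverse=True)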

  2. Geometry of discrete quantum computing

    NASA Astrophysics Data System (ADS)

    Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr; Tai, Yu-Tsung

    2013-05-01

    Conventional quantum computing entails a geometry based on the description of an n-qubit state using 2^n infinite-precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we have the idea of physical law as some kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields F_p and finite complexified fields F_{p^2} (based on primes p congruent to 3 (mod 4)) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (the complex projective space CP^{2^n - 1}) to an analogous discrete geometry in which the Hopf circle for any n is found to be a discrete set of p + 1 points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to DCP^{2^n - 1}, the discrete analogue of the complex projective space, which has p^{2^n - 1}(p - 1) ∏_{k=1}^{n-1}(p^{2^k} + 1) irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field F_{p^2} have p^n(p - 1)^n unentangled states (the product of the tally for a single qubit) with purity 1, and p^{n+1}(p - 1)(p + 1)^{n-1} maximally entangled states with purity zero.
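
    The closed-form tallies quoted above are straightforward to evaluate; the helper below simply encodes them, and the numerical check for p = 3, n = 2 is ours, included only as a sanity check of the formulas.

      def discrete_qubit_counts(p, n):
          # tallies for the F_{p^2}-based discrete n-qubit theory; assumes p is prime with p % 4 == 3
          irreducible = p ** (2 ** n - 1) * (p - 1)
          for k in range(1, n):
              irreducible *= p ** (2 ** k) + 1             # product over k = 1 .. n-1 of (p^{2^k} + 1)
          unentangled = (p * (p - 1)) ** n                 # purity-1 states
          max_entangled = p ** (n + 1) * (p - 1) * (p + 1) ** (n - 1)  # purity-0 states
          return irreducible, unentangled, max_entangled

      # p = 3, n = 2: (3^3 * 2 * (3^2 + 1), (3 * 2)^2, 3^3 * 2 * 4) = (540, 36, 216)
      print(discrete_qubit_counts(3, 2))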

  3. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A.

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
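
    A rough software analogue of the transmit/retransmit comparison is sketched below. The digest check, the receive_once callable, and the warn hook are illustrative assumptions; the patent concerns the hardware transfer path on the secured side, not any particular API.

      import hashlib

      def receive_with_double_check(receive_once, warn):
          # receive_once() is assumed to return (payload_bytes, claimed_sha256_hexdigest) for one transmission
          def has_error(payload, claimed_digest):
              return hashlib.sha256(payload).hexdigest() != claimed_digest

          first_payload, first_digest = receive_once()      # initial transmission
          second_payload, second_digest = receive_once()    # retransmission of the same data
          first_bad = has_error(first_payload, first_digest)
          second_bad = has_error(second_payload, second_digest) or second_payload != first_payload
          if first_bad and second_bad:
              warn()                                        # errors in both passes: drive the warning device
              return None
          return second_payload if first_bad else first_payload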

  4. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when computing a large number of Lanczos vectors, input/output time increased, which increased the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.

  5. 12 CFR Appendix K to Part 1026 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. K Appendix K to Part 1026—Total Annual Loan Cost...

  6. 12 CFR Appendix K to Part 1026 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. K Appendix K to Part 1026—Total Annual Loan Cost...

  7. Mapping urban green open space in Bontang city using QGIS and cloud computing

    NASA Astrophysics Data System (ADS)

    Agus, F.; Ramadiani; Silalahi, W.; Armanda, A.; Kusnandar

    2018-04-01

    Digital mapping techniques are available freely and openly, so map-based application development is easier, faster, and cheaper. The rapid development of cloud computing geographic information systems (GIS) means such systems can help meet the community's need for the provision of geospatial information online. The presence of urban Green Open Space (GOS) provides great benefits as an oxygen supplier and carbon-binding agent, and can contribute to the comfort and beauty of city life. This study aims to propose a platform application of GIS Cloud Computing (CC) for mapping the GOS of Bontang City. The GIS-CC platform uses a base map that is free and open source. The research used a survey method to collect GOS data obtained from the Bontang City Government, while application development used Quantum GIS-CC. The results section describes the existing GOS of Bontang City and the design of the GOS mapping application.

  8. Computational methods for frictional contact with applications to the Space Shuttle orbiter nose-gear tire: Comparisons of experimental measurements and analytical predictions

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical-loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  9. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

    Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different signal-to-noise ratio (SNR) conditions. Besides, two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold) and TkCoG (using μn + kσn as the threshold, where μn and σn are the mean value and deviation of the background noise). Firstly, their impact on the detection error under various SNR conditions is simulated to find a way to decide the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
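
    A minimal implementation of the two thresholding rules compared above is sketched here; whether the threshold is subtracted before weighting, as done below, is one common variant and may differ from the paper's convention.

      import numpy as np

      def thresholded_centroid(img, mode="m", m=0.2, k=3.0, background=None):
          # mode "m": TmCoG, threshold = m * max intensity of the spot
          # mode "k": TkCoG, threshold = mu_n + k * sigma_n estimated from background pixels
          img = np.asarray(img, dtype=float)
          if mode == "m":
              thr = m * img.max()
          else:
              mu_n, sigma_n = np.mean(background), np.std(background)
              thr = mu_n + k * sigma_n
          w = np.clip(img - thr, 0.0, None)      # subtract the threshold and zero the rest
          ys, xs = np.indices(img.shape)
          total = w.sum()
          return np.sum(xs * w) / total, np.sum(ys * w) / total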

  10. Pen-based computers: Computers without keys

    NASA Technical Reports Server (NTRS)

    Conklin, Cheryl L.

    1994-01-01

    The National Space Transportation System (NSTS) is comprised of many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging due to the many separate processing locations where data is generated. This presents a significant problem when the timely collection, transfer, collation, and storage of data is required. This paper describes how new technology, referred to as Pen-Based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-Based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.

  11. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process; second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented.
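
    The CGP step reduces to a restrict-solve-prolong pattern around the pressure Poisson solve. A bare-bones sketch follows; the spectral-element discretization, injection restriction, and linear prolongation described above are abstracted into operator arguments, which are placeholders rather than the authors' implementation.

      from scipy.sparse.linalg import spsolve

      def cgp_pressure_step(div_u_star_fine, laplacian_coarse, restrict, prolong, dt):
          # div_u_star_fine: divergence of the predicted velocity on the fine grid (flattened)
          # laplacian_coarse: sparse pressure Poisson operator assembled on the coarse grid
          rhs_coarse = restrict(div_u_star_fine) / dt       # restrict the Poisson right-hand side
          p_coarse = spsolve(laplacian_coarse, rhs_coarse)  # inexpensive solve on the coarse grid
          return prolong(p_coarse)                          # prolong the pressure back to the fine grid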

  12. Space Suit Joint Torque Measurement Method Validation

    NASA Technical Reports Server (NTRS)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  13. Absorbed dose measurements for kV-cone beam computed tomography in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Hioki, Kazunari; Araki, Fujio; Ohno, Takeshi; Nakaguchi, Yuji; Tomiyama, Yuuki

    2014-12-01

    In this study, we develop a novel method to directly evaluate the absorbed dose-to-water for kilovoltage cone beam computed tomography (kV-CBCT) in image-guided radiation therapy (IGRT). Absorbed doses for the kV-CBCT systems of the Varian On-Board Imager (OBI) and the Elekta X-ray Volumetric Imager (XVI) were measured by a Farmer ionization chamber with a 60Co calibration factor. The chamber measurements were performed at the center and four peripheral points in body-type (30 cm diameter and 51 cm length) and head-type (16 cm diameter and 33 cm length) cylindrical water phantoms. The measured ionization was converted to the absorbed dose-to-water by using a 60Co calibration factor and a Monte Carlo (MC)-calculated beam quality conversion factor, kQ, for 60Co to kV-CBCT. The irradiation for OBI and XVI was performed with pelvis and head modes for the body- and head-type phantoms, respectively. In addition, the dose distributions in the phantom for both kV-CBCT systems were calculated with the MC method and compared with measured values. The MC-calculated doses were calibrated at the center of the water phantom and compared with measured doses at four peripheral points. The measured absorbed doses at the center of the body-type phantom were 1.96 cGy for OBI and 0.83 cGy for XVI. The peripheral doses were 2.36-2.90 cGy for OBI and 0.83-1.06 cGy for XVI. The doses for XVI were lower, down to approximately one-third of those for OBI. Similarly, the measured doses at the center of the head-type phantom were 0.48 cGy for OBI and 0.21 cGy for XVI. The peripheral doses were 0.26-0.66 cGy for OBI and 0.16-0.30 cGy for XVI. The calculated peripheral doses agreed within 3% in the pelvis mode and within 4% in the head mode with measured doses for both kV-CBCT systems. In addition, the absorbed dose determined in this study was approximately 4% lower than that in TG-61, but the absorbed dose by both methods was in agreement within their combined
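
    The reading-to-dose conversion described above follows the generic formalism D_w = M x N_D,w(60Co) x kQ, with M corrected by the usual chamber influence factors. A small helper with purely illustrative numbers (the reading, calibration factor, and kQ below are assumptions, not values from the paper):

      def absorbed_dose_to_water(m_raw_nC, n_dw_cGy_per_nC, k_q,
                                 k_tp=1.0, k_elec=1.0, k_pol=1.0, k_ion=1.0):
          # D_w = corrected reading * Co-60 calibration factor * MC-calculated beam quality factor
          m_corrected = m_raw_nC * k_tp * k_elec * k_pol * k_ion
          return m_corrected * n_dw_cGy_per_nC * k_q

      # illustrative only: a 0.38 nC reading with N_D,w = 5.38 cGy/nC and kQ = 0.96 gives about 1.96 cGy
      print(absorbed_dose_to_water(0.38, 5.38, 0.96))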

  14. High End Computer Network Testbedding at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Gary, James Patrick

    1998-01-01

    The Earth & Space Data Computing (ESDC) Division at the Goddard Space Flight Center is involved in developing and demonstrating various high-end computer networking capabilities. The ESDC has several high-end supercomputers. These are used to: (1) run computer simulations of the climate system; (2) support the Earth and Space Sciences (ESS) project; and (3) support the Grand Challenge (GC) science aimed at understanding turbulent convection and dynamos in stars. GC research occurs at many sites throughout the country, and this research is enabled in part by multiple high-performance network interconnections. The application drivers for high-end computer networking use distributed supercomputing to support virtual reality applications, such as TerraVision (a three-dimensional browser of remotely accessed data) and Cave Automatic Virtual Environments (CAVEs). Workstations can access and display data from multiple CAVEs with video servers, which allows for group/project collaborations using a combination of video, data, voice, and shared whiteboarding. The ESDC is also developing and demonstrating a high degree of interoperability between satellite and terrestrial-based networks. To this end, the ESDC is conducting research and evaluations of new computer networking protocols and related technologies that improve the interoperability of satellite and terrestrial networks. The ESDC is also involved in the Security Proof of Concept Keystone (SPOCK) program sponsored by the National Security Agency (NSA). The SPOCK activity provides a forum for government users and security technology providers to share information on security requirements, emerging technologies, and new product developments. Also, the ESDC is involved in the Trans-Pacific Digital Library Experiment, which aims to demonstrate and evaluate the use of high-performance satellite communications and advanced data communications protocols to enable interactive digital library data

  15. Evolutionary growth for Space Station Freedom electrical power system

    NASA Technical Reports Server (NTRS)

    Marshall, Matthew Fisk; Mclallin, Kerry; Zernic, Mike

    1989-01-01

    Over an operational lifetime of at least 30 yr, Space Station Freedom will encounter increased Space Station user requirements and advancing technologies. The Space Station electrical power system is designed with the flexibility to accommodate these emerging technologies and expert systems, and it includes the necessary software hooks and hardware scars to accommodate increased growth demand. The electrical power system is planned to grow from the initial 75 kW up to 300 kW. The Phase 1 station will utilize photovoltaic arrays to produce the electrical power; however, for growth to 300 kW, solar dynamic power modules will be utilized. Pairs of 25 kW solar dynamic power modules will be added to the station to reach the power growth level. The addition of solar dynamic power in the growth phase places constraints on the initial Space Station systems, such as guidance, navigation, and control; external thermal control; truss structural stiffness; and computational capability and storage, which must be planned for in order to facilitate the addition of the solar dynamic modules.

  16. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that requires only the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacings. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to 10⁻⁵ rad root mean square. The precision of the RHT was improved by approximately 100 nm.
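
    A minimal sketch of the calibration step described above, assuming hypothetical displacement/centroid data: since the abstract reports an approximately linear relation between dot-centroid position and relative displacement, a first-order least-squares fit recovers the slope used to tie ray angles to detector coordinates. The numbers and the pixel-to-angle conversion are placeholders, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical calibration data: relative displacement of the reflector (mm)
    # and the corresponding dot-centroid position on the detector (pixels).
    rel_displacement = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])               # mm
    centroid_position = np.array([101.2, 108.9, 116.6, 124.3, 132.1, 139.8])  # px

    # The abstract reports an approximately linear centroid-vs-displacement
    # behaviour, so a first-order least-squares fit gives the calibration slope.
    slope, intercept = np.polyfit(rel_displacement, centroid_position, 1)
    print(f"centroid [px] ~= {slope:.2f} * displacement [mm] + {intercept:.2f}")

    # Converting this slope into the angle-centroid relationship requires the
    # pixel pitch and the setup geometry, which are not given in the abstract.
    ```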

  17. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and these were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
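
    As an illustration of the kind of vectorized calculation involved (not the authors' specific algorithms), the sketch below computes temperature structure functions, a building block of Van Atta-style surface renewal analysis, from a high-frequency series with a single array operation per lag; the synthetic data and the lag are placeholders.

    ```python
    import numpy as np

    def structure_functions(ts, lag_samples, orders=(2, 3, 5)):
        """Structure functions S_n(r) = <(T(t) - T(t - r))^n> of a scalar series.

        ts          : 1-D array of high-frequency (e.g. 10 Hz) samples
        lag_samples : lag r expressed in samples
        orders      : moments used in Van Atta-style surface renewal analysis
        """
        diffs = ts[lag_samples:] - ts[:-lag_samples]      # one vectorized pass,
        return {n: np.mean(diffs ** n) for n in orders}   # no explicit Python loop

    # Synthetic 10 Hz series over a 30-minute averaging period (placeholder data).
    rng = np.random.default_rng(0)
    temperature = 20.0 + np.cumsum(rng.normal(0.0, 0.01, 10 * 60 * 30))
    print(structure_functions(temperature, lag_samples=5))
    ```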

  18. k-space and q-space: combining ultra-high spatial and angular resolution in diffusion imaging using ZOOPPA at 7 T.

    PubMed

    Heidemann, Robin M; Anwander, Alfred; Feiweier, Thorsten; Knösche, Thomas R; Turner, Robert

    2012-04-02

    There is ongoing debate whether using a higher spatial resolution (sampling k-space) or a higher angular resolution (sampling q-space angles) is the better way to improve diffusion MRI (dMRI) based tractography results in living humans. In both cases, the limiting factor is the signal-to-noise ratio (SNR), due to the restricted acquisition time. One possible way to increase the spatial resolution without sacrificing either SNR or angular resolution is to move to a higher magnetic field strength. Nevertheless, dMRI has not been the preferred application for ultra-high field strength (7 T). This is because single-shot echo-planar imaging (EPI) has been the method of choice for human in vivo dMRI. EPI faces several challenges related to the use of a high resolution at high field strength, for example, distortions and image blurring. These problems can easily compromise the expected SNR gain with field strength. In the current study, we introduce an adapted EPI sequence in conjunction with a combination of ZOOmed imaging and Partially Parallel Acquisition (ZOOPPA). We demonstrate that the method can produce high quality diffusion-weighted images with high spatial and angular resolution at 7 T. We provide examples of in vivo human dMRI with isotropic resolutions of 1 mm and 800 μm. These data sets are particularly suitable for resolving complex and subtle fiber architectures, including fiber crossings in the white matter, anisotropy in the cortex and fibers entering the cortex. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Manyscale Computing for Sensor Processing in Support of Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.

    2014-09-01

    The increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies a need for continuing computational throughput growth beyond the petascale regime. In addition to the growing applications data burden and diversity, the breadth, diversity, and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascale, exascale, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include

  20. Phonon Calculations Using the Real-Space Multigrid Method (RMG)

    NASA Astrophysics Data System (ADS)

    Zhang, Jiayong; Lu, Wenchang; Briggs, Emil; Cheng, Yongqiang; Ramirez-Cuesta, A. J.; Bernholc, Jerry

    RMG, a DFT-based open-source package using the real-space multigrid method, has proven to work effectively on large-scale systems with thousands of atoms. Our recent work has shown its practicability for high-accuracy phonon calculations employing the frozen phonon method. In this method, a primary unit cell with a small lattice constant is enlarged to a supercell that is sufficiently large to obtain the force-constant matrix from finite displacements of atoms in the supercell. The open-source package PhonoPy is used to determine the necessary displacements by taking symmetry into account. A Python script coupling RMG and PhonoPy enables us to perform high-throughput calculations of phonon properties. We have applied this method to many systems, such as silicon, silica glass, and ZIF-8. Results from RMG are compared to the experimental spectra measured using the VISION inelastic neutron scattering spectrometer at the Spallation Neutron Source at ORNL, as well as to results from other DFT codes. The computing resources were made available through the VirtuES (Virtual Experiments in Spectroscopy) project, funded by the Laboratory Directed Research and Development program (LDRD project No. 7739).
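
    A rough sketch of the frozen-phonon workflow described above using the PhonoPy Python API; the silicon primitive cell, supercell size, and displacement distance are illustrative, and run_dft_forces is a hypothetical stub standing in for the external RMG force calculation (it returns zeros here just so the example runs).

    ```python
    import numpy as np
    from phonopy import Phonopy
    from phonopy.structure.atoms import PhonopyAtoms

    # Diamond-structure silicon primitive cell (illustrative geometry).
    a = 5.43
    unitcell = PhonopyAtoms(symbols=["Si", "Si"],
                            cell=[[0, a / 2, a / 2], [a / 2, 0, a / 2], [a / 2, a / 2, 0]],
                            scaled_positions=[[0, 0, 0], [0.25, 0.25, 0.25]])

    # Enlarge to a supercell and let PhonoPy choose symmetry-reduced displacements.
    phonon = Phonopy(unitcell, supercell_matrix=np.diag([2, 2, 2]))
    phonon.generate_displacements(distance=0.01)

    def run_dft_forces(supercell):
        """Stub for the external DFT force calculation (RMG in the paper)."""
        return np.zeros((len(supercell.scaled_positions), 3))

    # One force set per displaced supercell, then the force-constant matrix.
    forces = [run_dft_forces(sc) for sc in phonon.supercells_with_displacements]
    phonon.produce_force_constants(forces=forces)

    # Phonon frequencies (THz) at the zone centre from the force constants.
    print(phonon.get_frequencies([0.0, 0.0, 0.0]))
    ```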

  1. Light scattering and absorption by space weathered planetary bodies: Novel numerical solution

    NASA Astrophysics Data System (ADS)

    Markkanen, Johannes; Väisänen, Timo; Penttilä, Antti; Muinonen, Karri

    2017-10-01

    Airless planetary bodies are exposed to space weathering, i.e., energetic electromagnetic and particle radiation, implantation and sputtering from solar wind particles, and micrometeorite bombardment. Space weathering is known to alter the physical and chemical composition of the surface of an airless body (C. Pieters et al., J. Geophys. Res. Planets, 121, 2016). From the light scattering perspective, one of the key effects is the production of nanophase iron (npFe0) near the exposed surfaces (B. Hapke, J. Geophys. Res., 106, E5, 2001). At visible and ultraviolet wavelengths these particles have a strong electromagnetic response, which has a major impact on scattering and absorption features. Thus, to interpret the spectroscopic observations of space-weathered asteroids, the model should treat the contributions of the npFe0 particles rigorously. Our numerical approach is based on hierarchical geometric optics (GO) and radiative transfer (RT). The modelled asteroid is assumed to consist of densely packed silicate grains with npFe0 inclusions. We employ our recently developed RT method for dense random media (K. Muinonen et al., Radio Science, submitted, 2017) to compute the contributions of the npFe0 particles embedded in silicate grains. The dense-media RT method requires computing interactions of the npFe0 particles in the volume element, for which we use the exact fast superposition T-matrix method (J. Markkanen and A.J. Yuffa, JQSRT 189, 2017). Reflections and refractions on the grain surface and propagation in the grain are addressed by the GO. Finally, the standard RT is applied to compute scattering by the entire asteroid. Our numerical method allows for a quantitative interpretation of the spectroscopic observations of space-weathered asteroids. In addition, it may be an important step towards a more rigorous thermophysical model of asteroids when coupled with radiative and conductive heat transfer techniques. Acknowledgments: Research supported by

  2. Experience with k-epsilon turbulence models for heat transfer computations in rotating

    NASA Technical Reports Server (NTRS)

    Tekriwal, Prabbat

    1995-01-01

    This viewgraph presentation discusses geometry and flow configuration, effect of y+ on heat transfer computations, standard and extended k-epsilon turbulence model results with wall function, low-Re model results (the Lam-Bremhorst model without wall function), a criterion for flow reversal in a radially rotating square duct, and a summary.

  3. A pseudo energy-invariant method for relativistic wave equations with Riesz space-fractional derivatives

    NASA Astrophysics Data System (ADS)

    Macías-Díaz, J. E.; Hendy, A. S.; De Staelen, R. H.

    2018-03-01

    In this work, we investigate a general nonlinear wave equation with Riesz space-fractional derivatives that generalizes various classical hyperbolic models, including the sine-Gordon and the Klein-Gordon equations from relativistic quantum mechanics. A finite-difference discretization of the model is provided using fractional centered differences. The method is capable of preserving an energy-like quantity at each iteration. Some computational comparisons against solutions available in the literature are performed in order to assess the capability of the method to preserve the invariant. Our experiments confirm that the technique yields good approximations to the solutions considered. As an application of our scheme, we provide simulations that confirm, for the first time in the literature, the presence of the phenomenon of nonlinear supratransmission in Riesz space-fractional Klein-Gordon equations driven by a harmonic perturbation at the boundary.
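
    For readers unfamiliar with fractional centered differences, the sketch below implements the standard Ortigueira-type approximation of the Riesz derivative on a uniform grid (assuming the solution vanishes outside the grid); it is a generic illustration of the spatial discretization, not the authors' energy-preserving time-stepping scheme.

    ```python
    import numpy as np
    from math import gamma

    def riesz_derivative(u, h, alpha):
        """Approximate d^alpha u / d|x|^alpha (1 < alpha < 2) with fractional
        centered differences, assuming u = 0 outside the grid:
            d^alpha u/d|x|^alpha (x_j) ~= -h**(-alpha) * sum_k g_k u_{j-k},
            g_k = (-1)**k Gamma(alpha+1) / (Gamma(alpha/2-k+1) Gamma(alpha/2+k+1)).
        The coefficients are built with the numerically stable recurrence
            g_{k+1} = g_k * (k - alpha/2) / (k + alpha/2 + 1),   g_{-k} = g_k.
        """
        n = u.size
        g_pos = np.empty(n)
        g_pos[0] = gamma(alpha + 1.0) / gamma(alpha / 2.0 + 1.0) ** 2
        for k in range(n - 1):
            g_pos[k + 1] = g_pos[k] * (k - alpha / 2.0) / (k + alpha / 2.0 + 1.0)
        g = np.concatenate([g_pos[:0:-1], g_pos])                # g_{-(n-1)}..g_{n-1}
        conv = np.convolve(u, g, mode="full")[n - 1:2 * n - 1]   # sum_k g_k u_{j-k}
        return -conv / h ** alpha

    # Example: Riesz derivative of order 1.5 of a Gaussian pulse (placeholder data).
    x = np.linspace(-10.0, 10.0, 401)
    print(riesz_derivative(np.exp(-x ** 2), h=x[1] - x[0], alpha=1.5)[198:203])
    ```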

  4. Sub-domain methods for collaborative electromagnetic computations

    NASA Astrophysics Data System (ADS)

    Soudais, Paul; Barka, André

    2006-06-01

    In this article, we describe a sub-domain method for electromagnetic computations based on the boundary element method. The benefits of the sub-domain method are that the computation can be split between several companies for collaborative studies, and that the computation time can be reduced by one or more orders of magnitude, especially in the context of parametric studies. The accuracy and efficiency of this technique are assessed by RCS computations on an aircraft air-intake mock-up with duct and rotating engine, called CHANNEL. Collaborative results, obtained by combining two sets of sub-domains computed by two companies, are compared with measurements on the CHANNEL mock-up. The comparisons are made for several angular positions of the engine to show the benefits of the method for parametric studies. We also discuss the accuracy of two formulations of the sub-domain connecting scheme, using edge-based or modal field expansions. To cite this article: P. Soudais, A. Barka, C. R. Physique 7 (2006).

  5. Fracture control methods for space vehicles. Volume 1: Fracture control design methods. [for space shuttle configuration planning

    NASA Technical Reports Server (NTRS)

    Liu, A. F.

    1974-01-01

    A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.

  6. Investigation of a Sybr-Green-Based Method to Validate DNA Sequences for DNA Computing

    DTIC Science & Technology

    2005-05-01

    Pogozelski, Wendy; Priore, Salvatore; Bernard, Matthew

  7. Machine learning in APOGEE. Unsupervised spectral classification with K-means

    NASA Astrophysics Data System (ADS)

    Garcia-Dias, Rafael; Allende Prieto, Carlos; Sánchez Almeida, Jorge; Ordovás-Pascual, Ignacio

    2018-05-01

    Context. The volume of data generated by astronomical surveys is growing rapidly. Traditional analysis techniques in spectroscopy either demand intensive human interaction or are computationally expensive. In this scenario, machine learning, and unsupervised clustering algorithms in particular, offer interesting alternatives. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) offers a vast data set of near-infrared stellar spectra, which is perfect for testing such alternatives. Aims: Our research applies an unsupervised classification scheme based on K-means to the massive APOGEE data set. We explore whether the data are amenable to classification into discrete classes. Methods: We apply the K-means algorithm to 153 847 high resolution spectra (R ≈ 22 500). We discuss the main virtues and weaknesses of the algorithm, as well as our choice of parameters. Results: We show that a classification based on normalised spectra captures the variations in stellar atmospheric parameters, chemical abundances, and rotational velocity, among other factors. The algorithm is able to separate the bulge and halo populations, and distinguish dwarfs, sub-giants, RC, and RGB stars. However, a discrete classification in flux space does not result in a neat organisation in the parameters' space. Furthermore, the lack of obvious groups in flux space causes the results to be fairly sensitive to the initialisation, and disrupts the efficiency of commonly-used methods to select the optimal number of clusters. Our classification is publicly available, including extensive online material associated with the APOGEE Data Release 12 (DR12). Conclusions: Our description of the APOGEE database can help greatly with the identification of specific types of targets for various applications. We find a lack of obvious groups in flux space, and identify limitations of the K-means algorithm in dealing with this kind of data. Full Tables B.1-B.4 are only available at the CDS via
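
    A minimal sketch of the clustering step in flux space, using scikit-learn's K-means on a stand-in array of normalised spectra (one row per star, one column per wavelength bin); the array, the number of clusters, and the other settings are placeholders rather than the paper's choices.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in for the normalised APOGEE spectra (the real sample has 153,847
    # spectra at R ~ 22,500); rows are stars, columns are wavelength bins.
    rng = np.random.default_rng(42)
    spectra = rng.normal(1.0, 0.05, size=(2000, 500))

    # K-means in flux space; n_clusters and n_init are illustrative choices only.
    km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(spectra)

    labels = km.labels_            # class assignment for each star
    centres = km.cluster_centers_  # mean normalised spectrum of each class
    print(np.bincount(labels))
    ```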

  8. Transfer matrix method for dynamics modeling and independent modal space vibration control design of linear hybrid multibody system

    NASA Astrophysics Data System (ADS)

    Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun

    2018-05-01

    In this paper, an efficient method of dynamics modeling and vibration control design for a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved by using low-order transfer equations. Then, by constructing a new body dynamics equation, augmented operator, and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state-space model, expressed in each independent modal space, is obtained easily. Based on this dynamics model, a robust independent modal space fuzzy controller is designed for vibration control of a general MS, and the genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are presented, the results of which show that this method is computationally efficient and achieves excellent control performance.

  9. Pyramidal space frame and associated methods

    DOEpatents

    Clark, Ryan Michael; White, David; Farr, Jr, Adrian Lawrence

    2016-07-19

    A space frame having a high torsional strength comprising a first square bipyramid and two planar structures extending outward from an apex of the first square bipyramid to form a "V" shape is disclosed. Some embodiments comprise a plurality of edge-sharing square bipyramids configured linearly, where the two planar structures contact apexes of all the square bipyramids. A plurality of bridging struts, apex struts, corner struts and optional internal bracing struts increase the strength and rigidity of the space frame. In an embodiment, the space frame supports a solar reflector, such as a parabolic solar reflector. Methods of fabricating and using the space frames are also disclosed.

  10. Asteroid orbital inversion using uniform phase-space sampling

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.

    2014-07-01

    We review statistical inverse methods for asteroid orbit computation from small numbers of astrometric observations and short observational time intervals. With the help of Markov-chain Monte Carlo (MCMC) methods, we present a novel inverse method that utilizes uniform sampling of the phase space of the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was developed to resolve the long-standing challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ²-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows probabilistic assessments of, e.g., object classification and ephemeris computation, as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain containing most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in

  11. HAL/SM language specification. [programming languages and computer programming for space shuttles

    NASA Technical Reports Server (NTRS)

    Williams, G. P. W., Jr.; Ross, C.

    1975-01-01

    A programming language is presented for the flight software of the NASA Space Shuttle program. It is intended to satisfy virtually all of the flight software requirements of the space shuttle. To achieve this, it incorporates a wide range of features, including applications-oriented data types and organizations, real time control mechanisms, and constructs for systems programming tasks. It is a higher order language designed to allow programmers, analysts, and engineers to communicate with the computer in a form approximating natural mathematical expression. Parts of the English language are combined with standard notation to provide a tool that readily encourages programming without demanding computer hardware expertise. Block diagrams and flow charts are included. The semantics of the language is discussed.

  12. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 CFR Part 80, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage, § 80.771 Method of computing coverage: Compute the +17 dBu contour as follows: (a) Determine the effective antenna...

  13. Computational and mathematical methods in brain atlasing.

    PubMed

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  14. Integration K-Means Clustering Method and Elbow Method For Identification of The Best Customer Profile Cluster

    NASA Astrophysics Data System (ADS)

    Syakur, M. A.; Khotimah, B. K.; Rochman, E. M. S.; Satoto, B. D.

    2018-04-01

    Clustering is a data mining technique used to analyse data that have large variation and volume. Clustering is the process of grouping data into clusters so that each cluster contains data that are as similar as possible to each other and as different as possible from the objects in other clusters. Indonesian SMEs have a variety of customers, but they do not have a mapping of these customers, so they do not know which customers are loyal and which are not. Customer mapping is a grouping of customer profiles that facilitates SME analysis and policy for the production of goods, especially batik sales. We use a combination of the K-means method and the elbow method to improve the efficiency and effectiveness of K-means in processing large amounts of data. K-means clustering is a local optimization method that is sensitive to the selection of the initial cluster centroids; a poor choice of initial centroids results in high error and poor clusters. The K-means algorithm also has difficulty in determining the best number of clusters, so the elbow method is used to find the best number of clusters for K-means. The results show that determining the best number of clusters with the elbow method produces the same number of clusters K for different amounts of data. The best number of clusters found with the elbow method is then used as the default for the profiling process in the case study. Evaluation of the K-means SSE values over the data for 500 batik visitors yields the best clusters. The SSE curve shows a sharp decrease at K = 3, so K = 3 is taken as the cut-off point and the best number of clusters.
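
    A small sketch of the K-means-plus-elbow procedure on synthetic customer-profile data (the features, the range of K, and the data are placeholders, not the batik data set): the SSE, exposed by scikit-learn as inertia_, is recorded for a range of K, and the elbow is read off where it stops dropping sharply.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic customer-profile features (placeholder for the batik-customer data).
    rng = np.random.default_rng(1)
    customers = np.vstack([rng.normal(loc, 0.3, size=(60, 2))
                           for loc in ((0, 0), (3, 0), (0, 3))])

    # Elbow method: run K-means over a range of K and record the SSE (inertia).
    sse = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers).inertia_
           for k in range(1, 9)}

    # The elbow is the K after which the SSE stops dropping sharply; for this
    # three-blob example it falls at K = 3, analogous to the cut-off point the
    # abstract reports for the batik visitors.
    for k, v in sse.items():
        print(k, round(v, 1))
    ```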

  15. Theoretical and numerical difficulties in 3-D vector potential methods in finite element magnetostatic computations

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.

    1990-01-01

    This paper describes the results of applying three well-known 3D magnetic vector potential (MVP) based finite element formulations to the computation of magnetostatic fields in electrical devices. The three methods were identically applied to three practical examples, the first of which contains only one medium (free space), while the second and third examples contain a mix of free space and iron. The first of these methods is based on the unconstrained curl-curl of the MVP, while the second and third methods are predicated upon constraining the divergence of the MVP to zero (the Coulomb gauge). It was found that the two latter methods cease to give useful and meaningful results when the global solution region contains a mix of media of high and low permeabilities. Furthermore, it was found that their results do not achieve the intended zero constraint on the divergence of the MVP.

  16. Los Alamos Space Weather Summer School: Institutional Computing 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowee, Misa

    During the summer school, students carry out independent research projects on a range of topics related to space weather. In 2016, one student used the LANL Institutional Computing resources. Results of this project were the first to demonstrate that the magnitude of radial diffusion agrees well with the early observations of the Earth's radiation belts, indicating that this effect should be included in community models of the radiation belts.

  17. RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition

    NASA Astrophysics Data System (ADS)

    Jiang, Yuning; Kang, Jinfeng; Wang, Xinan

    2017-03-01

    Resistive switching memory (RRAM) is considered one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today's electronic systems. However, existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behavior is chosen as both the storage and computing component. The proposed architecture is tested on the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
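
    As a point of reference for what the crossbar computes, here is a conventional software k-nearest-neighbour baseline on scikit-learn's small digits set (standing in for MNIST); it illustrates the classification rule only and says nothing about the RRAM hardware mapping described in the paper.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Small digits set as a stand-in for MNIST (placeholder choice).
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # k-nearest-neighbour classification; k = 5 is an illustrative value.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print(f"test accuracy: {knn.score(X_test, y_test):.3f}")
    ```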

  18. Prefixed-threshold real-time selection method in free-space quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Wenyuan; Xu, Feihu; Lo, Hoi-Kwong

    2018-03-01

    Free-space quantum key distribution allows two parties to share a random key with unconditional security, between ground stations, between mobile platforms, and even in satellite-ground quantum communications. Atmospheric turbulence causes fluctuations in transmittance, which further affect the quantum bit error rate and the secure key rate. Previous postselection methods to combat atmospheric turbulence require a threshold value determined after all quantum transmission. In contrast, here we propose a method where we predetermine the optimal threshold value even before quantum transmission. Therefore, the receiver can discard useless data immediately, thus greatly reducing data storage requirements and computing resources. Furthermore, our method can be applied to a variety of protocols, including, for example, not only single-photon BB84 but also asymptotic and finite-size decoy-state BB84, which can greatly increase its practicality.
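
    A toy sketch of the real-time selection step (not the paper's key-rate optimisation): the channel transmittance is drawn from a log-normal turbulence model, a threshold fixed before transmission is applied, and only the windows above it are kept; the distribution parameters and the threshold value are placeholder assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simplified turbulence model: log-normally fluctuating channel transmittance.
    eta = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=100_000)

    # Prefixed threshold chosen *before* transmission (illustrative value; in the
    # paper it is obtained by optimising the expected secure key rate).
    eta_threshold = 0.08

    keep = eta >= eta_threshold
    print(f"fraction of transmission windows kept: {keep.mean():.2%}")
    # Only the kept windows need to be stored and post-processed, which is what
    # reduces the data-storage and computing requirements.
    ```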

  19. Development of dynamic calibration methods for POGO pressure transducers. [for space shuttle

    NASA Technical Reports Server (NTRS)

    Hilten, J. S.; Lederer, P. S.; Vezzetti, C. F.; Mayo-Wells, J. F.

    1976-01-01

    Two dynamic pressure sources are described for the calibration of pogo pressure transducers used to measure oscillatory pressures generated in the propulsion system of the space shuttle. Rotation of a mercury-filled tube in a vertical plane at frequencies below 5 Hz generates sinusoidal pressures up to 48 kPa, peak-to-peak; vibrating the same mercury-filled tube sinusoidally in the vertical plane extends the frequency response from 5 Hz to 100 Hz at pressures up to 140 kPa, peak-to-peak. The sinusoidal pressure fluctuations can be generated by both methods in the presence of high pressures (bias) up to 55 MPa. Calibration procedures are given in detail for the use of both sources. The dynamic performance of selected transducers was evaluated using these procedures; the results of these calibrations are presented. Calibrations made with the two sources near 5 Hz agree to within 3% of each other.

  20. Computational methods for optimal linear-quadratic compensators for infinite dimensional discrete-time systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation theory and computational methods are developed for the determination of optimal linear-quadratic feedback control, observers and compensators for infinite dimensional discrete-time systems. Particular attention is paid to systems whose open-loop dynamics are described by semigroups of operators on Hilbert spaces. The approach taken is based on the finite dimensional approximation of the infinite dimensional operator Riccati equations which characterize the optimal feedback control and observer gains. Theoretical convergence results are presented and discussed. Numerical results for an example involving a heat equation with boundary control are presented and used to demonstrate the feasibility of the method.
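
    The sketch below illustrates only the finite-dimensional Riccati step of such an approximation scheme, using a crude finite-difference model of a 1-D heat equation with an input acting near the boundary; the discretization and the LQ weights are placeholder choices, and the paper's convergence framework is not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import expm, solve_discrete_are

    # Crude n-node finite-difference model of a 1-D heat equation; the control
    # enters at the node next to the boundary (a rough stand-in for boundary control).
    n, dx, dt = 20, 1.0 / 21, 0.01
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx ** 2
    A = expm(lap * dt)                    # discrete-time state matrix
    B = np.zeros((n, 1))
    B[0, 0] = dt / dx ** 2                # Euler-type input approximation
    Q, R = np.eye(n), np.array([[1.0]])   # illustrative LQ weights

    # Discrete algebraic Riccati equation and the optimal LQ feedback gain.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    print(K.shape)   # (1, n): the control law is u_k = -K x_k
    ```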

  1. Optimal shield mass distribution for space radiation protection

    NASA Technical Reports Server (NTRS)

    Billings, M. P.

    1972-01-01

    Computational methods have been developed and successfully used for determining the optimum distribution of space radiation shielding on geometrically complex space vehicles. These methods have been incorporated in the computer program SWORD for dose evaluation in complex geometry and for iteratively calculating the optimum (minimum-mass) shield distribution satisfying multiple acute and protracted dose constraints associated with each of several body organs.

  2. Some key considerations in evolving a computer system and software engineering support environment for the space station program

    NASA Technical Reports Server (NTRS)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.

  3. Y2K+1: Technology, Community-College Students, the Millennium, and Stanley Kubrick's "2001: A Space Odyssey."

    ERIC Educational Resources Information Center

    Haspel, Paul

    2002-01-01

    Considers how screening Stanley Kubrick's "2001: A Space Odyssey" in a sophomore film class shows modern community-college students that millennial anxiety existed well before late 1999, the time of "Y2K" fears. Presents an assignment that examines "2001: A Space Odyssey" in the context of its time and in 2001. (SG)

  4. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors, and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios, and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not received much attention in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
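
    A short sketch of the standard step of extracting natural frequencies, damping ratios, and (unscaled) mode shapes from the eigen-decomposition of a discrete-time state-space model; the modal masses treated in the paper additionally require the input matrix and the measured input data, which are not reproduced here, and the check at the end uses a made-up single oscillator.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def modal_parameters(A, C, dt):
        """Modal parameters from x_{k+1} = A x_k + B u_k,  y_k = C x_k.

        Returns natural frequencies [rad/s], damping ratios, and mode shapes at
        the measured DOFs (one column per discrete-time pole).
        """
        mu, psi = np.linalg.eig(A)      # discrete-time poles and eigenvectors
        lam = np.log(mu) / dt           # continuous-time poles
        wn = np.abs(lam)                # natural frequencies
        zeta = -lam.real / wn           # damping ratios
        phi = C @ psi                   # mode shapes at the outputs
        return wn, zeta, phi

    # Check with a single oscillator: wn = 2*pi*5 rad/s, zeta = 2 % (placeholder).
    wn0, z0, dt = 2 * np.pi * 5.0, 0.02, 0.001
    Ac = np.array([[0.0, 1.0], [-wn0 ** 2, -2 * z0 * wn0]])
    wn, zeta, phi = modal_parameters(expm(Ac * dt), C=np.eye(2)[:1], dt=dt)
    print(wn / (2 * np.pi), zeta)       # ~ [5, 5] Hz and ~ [0.02, 0.02]
    ```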

  5. Computer-aided controllability assessment of generic manned Space Station concepts

    NASA Technical Reports Server (NTRS)

    Ferebee, M. J.; Deryder, L. J.; Heck, M. L.

    1984-01-01

    NASA's Concept Development Group assessment methodology for the on-orbit rigid body controllability characteristics of each generic configuration proposed for the manned space station is presented; the preliminary results obtained represent the first step in the analysis of these eight configurations. Analytical computer models of each configuration were developed by means of the Interactive Design Evaluation of Advanced Spacecraft CAD system, which created three-dimensional geometry models of each configuration to establish dimensional requirements for module connectivity, payload accommodation, and Space Shuttle berthing; mass, center-of-gravity, inertia, and aerodynamic drag areas were then derived. Attention was also given to the preferred flight attitude of each station concept.

  6. Connecting the Pioneers, Current Leaders and the Nature and History of Space Weather with K-12 Classrooms and the General Public

    NASA Astrophysics Data System (ADS)

    Ng, C.; Thompson, B. J.; Cline, T.; Lewis, E.; Barbier, B.; Odenwald, S.; Spadaccini, J.; James, N.; Stephenson, B.; Davis, H. B.; Major, E. R.; Space Weather Living History

    2011-12-01

    The Space Weather Living History program will explore and share the breakthrough new science and captivating stories of space environments and space weather by interviewing space physics pioneers and leaders active from the International Geophysical Year (IGY) to the present. Our multi-mission project will capture, document and preserve the living history of space weather utilizing original historical materials (primary sources). The resulting products will allow us to tell the stories of those involved in interactive new media to address important STEM needs, inspire the next generation of explorers, and feature women as role models. The project is divided into several stages, and the first stage, which began in mid-2011, focuses on resource gathering. The goal is to capture not just anecdotes, but the careful analogies and insights of researchers and historians associated with the programs and events. The Space Weather Living History Program has a Scientific Advisory Board, and with the Board's input our team will determine the chronology, key researchers, events, missions and discoveries for interviews. Education activities will be designed to utilize autobiographies, newspapers, interviews, research reports, journal articles, conference proceedings, dissertations, websites, diaries, letters, and artworks. With the help of a multimedia firm, we will use some of these materials to develop an interactive timeline on the web, and as a downloadable application in a kiosk and on tablet computers. In summary, our project augments the existing historical records with education technologies, connect the pioneers, current leaders and the nature and history of space weather with K-12 classrooms and the general public, covering all areas of studies in Heliophysics. The project is supported by NASA award NNX11AJ61G.

  7. k-t accelerated aortic 4D flow MRI in under two minutes: Feasibility and impact of resolution, k-space sampling patterns, and respiratory navigator gating on hemodynamic measurements.

    PubMed

    Bollache, Emilie; Barker, Alex J; Dolan, Ryan Scott; Carr, James C; van Ooij, Pim; Ahmadian, Rouzbeh; Powell, Alex; Collins, Jeremy D; Geiger, Julia; Markl, Michael

    2018-01-01

    To assess the performance of highly accelerated free-breathing aortic four-dimensional (4D) flow MRI acquired in under 2 minutes compared to conventional respiratory-gated 4D flow. Eight k-t accelerated nongated 4D flow MRI protocols (parallel MRI with extended and averaged generalized autocalibrating partially parallel acquisition kernels [PEAK GRAPPA], R = 5, TRes = 67.2 ms) using four ky-kz Cartesian sampling patterns (linear, center-out, out-center-out, random) and two spatial resolutions (SRes1 = 3.5 × 2.3 × 2.6 mm³, SRes2 = 4.5 × 2.3 × 2.6 mm³) were compared in vitro (aortic coarctation flow phantom) and in 10 healthy volunteers to conventional 4D flow (16-mm navigator acceptance window; R = 2; TRes = 39.2 ms; SRes = 3.2 × 2.3 × 2.4 mm³). The best k-t accelerated approach was further assessed in 10 patients with aortic disease. The k-t accelerated in vitro aortic peak flow (Qmax), net flow (Qnet), and peak velocity (Vmax) were lower than conventional 4D flow indices by ≤4.7%, ≤11%, and ≤22%, respectively. In vivo, k-t accelerated acquisitions were significantly shorter but showed a trend toward lower image quality compared to conventional 4D flow. Hemodynamic indices for linear and out-center-out k-space samplings were in agreement with conventional 4D flow (Qmax ≤ 13%, Qnet ≤ 13%, Vmax ≤ 17%, P > 0.05). Aortic 4D flow MRI in under 2 minutes is feasible with moderate underestimation of flow indices. Differences between k-space sampling patterns suggest an opportunity to mitigate image artifacts by an optimal trade-off between scan time, acceleration, and k-space sampling. Magn Reson Med 79:195-207, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  8. A New Method for Breath Capture Inside a Space Suit Helmet

    NASA Technical Reports Server (NTRS)

    Filburn, Tom; Dolder, Craig; Tufano, Brett; Paul, Heather L.

    2007-01-01

    This project investigates methods to capture an astronaut's exhaled carbon dioxide (CO2) before it becomes diluted with the high volumetric oxygen flow present within a space suit. Typical expired breath contains CO2 partial pressures (pCO2) in the range of 20-35 mm Hg. This research investigates methods to capture the concentrated CO2 gas stream prior to its dilution with the low pCO2 ventilation flow. Specifically this research is looking at potential designs for a collection cup for use inside the space suit helmet. The collection cup concept is not the same as a breathing mask typical of that worn by firefighters and pilots. It is well known that most members of the astronaut corps view a mask as a serious deficiency in any space suit helmet design. Instead, the collection cup is a non-contact device that will be designed using a detailed Computational Fluid Dynamic (CFD) analysis of the ventilation flow environment within the helmet. The CFD code, Fluent, provides modeling of the various gas species (CO2, water vapor, and oxygen (O2)) as they pass through a helmet. This same model will be used to numerically evaluate several different collection cup designs for this same CO2 segregation effort. A new test rig will be built to test the results of the CFD analyses and validate the collection cup designs. This paper outlines the initial results and future plans of this work.

  9. Exploration of a Capability-Focused Aerospace System of Systems Architecture Alternative with Bilayer Design Space, Based on RST-SOM Algorithmic Methods

    PubMed Central

    Li, Zhifei; Qin, Dongliang

    2014-01-01

    In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), its huge design space, a literature review of design space exploration was first conducted. Then, for the design space exploration of an aerospace system of systems, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining RST (rough set theory) and SOM (self-organizing map) techniques, the aerospace system of systems architecture alternatives were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572

  10. Exploration of a capability-focused aerospace system of systems architecture alternative with bilayer design space, based on RST-SOM algorithmic methods.

    PubMed

    Li, Zhifei; Qin, Dongliang; Yang, Feng

    2014-01-01

    In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), its huge design space, a literature review of design space exploration was first conducted. Then, for the design space exploration of an aerospace system of systems, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining RST (rough set theory) and SOM (self-organizing map) techniques, the aerospace system of systems architecture alternatives were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.

  11. Large-Scale Computation of Nuclear Magnetic Resonance Shifts for Paramagnetic Solids Using CP2K.

    PubMed

    Mondal, Arobendo; Gaultois, Michael W; Pell, Andrew J; Iannuzzi, Marcella; Grey, Clare P; Hutter, Jürg; Kaupp, Martin

    2018-01-09

    Large-scale computations of nuclear magnetic resonance (NMR) shifts for extended paramagnetic solids (pNMR) are reported using the highly efficient Gaussian-augmented plane-wave implementation of the CP2K code. Combining hyperfine couplings obtained with hybrid functionals with g-tensors and orbital shieldings computed using gradient-corrected functionals, contact, pseudocontact, and orbital-shift contributions to pNMR shifts are accessible. Due to the efficient and highly parallel performance of CP2K, a wide variety of materials with large unit cells can be studied with extended Gaussian basis sets. Validation of the various approaches for the different contributions to pNMR shifts is done first for molecules in a large supercell, in comparison with typical quantum-chemical codes. This is then extended to a detailed study of g-tensors for extended solid transition-metal fluorides and for a series of complex lithium vanadium phosphates. Finally, lithium pNMR shifts are computed for Li3V2(PO4)3, for which detailed experimental data are available. This has allowed an in-depth study of different approaches (e.g., full periodic versus incremental cluster computations of g-tensors, and different functionals and basis sets for hyperfine computations) as well as a thorough analysis of the different contributions to the pNMR shifts. This study paves the way for a more widespread computational treatment of NMR shifts for paramagnetic materials.

  12. Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1

    NASA Technical Reports Server (NTRS)

    Estes, Ronald H. (Editor)

    1993-01-01

    This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data is presented in Volume 2. Technical support was provided to all Division and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Mission and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.

  13. Experimentation Using the Mir Station as a Space Laboratory

    DTIC Science & Technology

    1998-01-01

    V. Teslenko and N. Shvets (Energia Space Corporation, Korolev, Moscow Region, Russia); J. A. Drakes, D. G. Swann, and W. K. McGregor (Sverdrup Technology, Inc.); Institute for Machine Building (TsNIIMASH), Korolev, Moscow Region, Russia. ... and plume computations. Excitation of the plume gas molecular electronic states by solar radiation, geocorona Lyman-alpha, and electronic impact

  14. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
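
    A simplified sketch of the two-rating structure described above, with made-up calibration numbers and a rectangular standard cross section standing in for the surveyed one: the index rating maps ADVM index velocity to mean channel velocity, the stage-area rating maps stage to area, and their product gives discharge.

    ```python
    import numpy as np

    # Index velocity rating: regress mean channel velocity V (from discharge
    # measurements, V = Q / A) against the ADVM index velocity (made-up data).
    v_index_cal = np.array([0.20, 0.45, 0.70, 0.95, 1.20])   # m/s, ADVM
    v_mean_cal = np.array([0.18, 0.40, 0.66, 0.90, 1.15])    # m/s, Q / A
    slope, intercept = np.polyfit(v_index_cal, v_mean_cal, 1)

    # Stage-area rating: a rectangular section stands in for the surveyed
    # standard cross section.
    def area_from_stage(stage_m, width_m=25.0):
        return width_m * stage_m                              # m^2

    # Continuous record: Q = V * A at each unit-value time step.
    v_index = np.array([0.55, 0.60, 0.58])                    # m/s
    stage = np.array([1.42, 1.45, 1.44])                      # m
    discharge = (intercept + slope * v_index) * area_from_stage(stage)
    print(discharge)                                          # m^3/s
    ```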

  15. Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.

    2002-01-01

    A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.

  16. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  17. Materials technology assessment for a 1050 K Stirling space engine design

    NASA Technical Reports Server (NTRS)

    Scheuermann, Coulson M.; Dreshfield, Robert L.; Gaydosh, Darrell J.; Kiser, James D.; Mackay, Rebecca A.; Mcdaniels, David L.; Petrasek, Donald W.; Vannucci, Raymond D.; Bowles, Kenneth J.; Watson, Gordon K.

    1988-01-01

    An assessment of materials technology and proposed materials selection was made for the 1050 K (superalloy) Stirling Space Engine design. The objectives of this assessment were to evaluate previously proposed materials selections, evaluate the current state-of-the-art materials, propose potential alternate materials selections and identify research and development efforts needed to provide materials that can meet the stringent system requirements. This assessment generally reaffirmed the choices made by the contractor. However, in many cases alternative choices were described and suggestions for needed materials and fabrication research and development were made.

  18. Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics

    DTIC Science & Technology

    2013-12-01

    SST-VMST." The structural mechanics computations are based on the Kirchhoff -Love shell model. We use a sequential coupling technique, which is...mechanics computations are based on the Kirchhoff -Love shell model. We use a sequential coupling technique, which is ap- plicable to some classes of FSI...we use the ST-VMS method in combination with the ST-SUPS method. The structural mechanics computations are mostly based on the Kirchhoff –Love shell

  19. Dynamical electron diffraction simulation for non-orthogonal crystal system by a revised real space method.

    PubMed

    Lv, C L; Liu, Q B; Cai, C Y; Huang, J; Zhou, G W; Wang, Y G

    2015-01-01

    In transmission electron microscopy, the revised real space (RRS) method has been confirmed to be a more accurate dynamical electron diffraction simulation method for low-energy electron diffraction than the conventional multislice method (CMS). However, the RRS method could previously be used only to calculate the dynamical electron diffraction of orthogonal crystal systems. In this work, the expression of the RRS method for non-orthogonal crystal systems is derived. Taking Na2Ti3O7 and Si as examples, the correctness of the derived RRS formula for non-orthogonal crystal systems is confirmed by testing that the numerical results satisfy both sides of the Schrödinger equation; moreover, the difference between the RRS method and the CMS method for non-orthogonal crystal systems is compared over the accelerating-voltage range from 40 down to 10 kV. Our results show that the CMS method gives almost the same results as the RRS method for accelerating voltages above 40 kV. However, when the accelerating voltage is lowered to 20 kV or below, the CMS method introduces significant errors, not only for the higher-order Laue zone diffractions but also for the zero-order Laue zone. This indicates that the RRS method for non-orthogonal crystal systems should be used for more accurate dynamical simulation when the accelerating voltage is low. Furthermore, the reason why the differences between the diffraction patterns calculated by the RRS and CMS methods grow as the accelerating voltage decreases is discussed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  20. PathoSpotter-K: A computational tool for the automatic identification of glomerular lesions in histological images of kidneys

    NASA Astrophysics Data System (ADS)

    Barros, George O.; Navarro, Brenda; Duarte, Angelo; Dos-Santos, Washington L. C.

    2017-04-01

    PathoSpotter is a computational system designed to assist pathologists in teaching about, and researching, kidney diseases. PathoSpotter-K is the version developed to detect nephrological lesions in digital images of kidneys. Here, we present the results obtained using the first version of PathoSpotter-K, which uses classical image processing and pattern recognition methods to detect proliferative glomerular lesions with an accuracy of 88.3 ± 3.6%. Similar systems achieve such performance only when they use images of cells in contexts that are much less complex than the glomerular structure. The results indicate that the approach can be applied to the development of systems designed to train pathology students and to assist pathologists in determining large-scale clinicopathological correlations in morphological research.

  1. Modeling open nanophotonic systems using the Fourier modal method: generalization to 3D Cartesian coordinates.

    PubMed

    Häyrynen, Teppo; Osterkryger, Andreas Dyhl; de Lasson, Jakob Rosenkrantz; Gregersen, Niels

    2017-09-01

    Recently, an open geometry Fourier modal method based on a new combination of an open boundary condition and a non-uniform k-space discretization was introduced for rotationally symmetric structures, providing a more efficient approach for modeling nanowires and micropillar cavities [J. Opt. Soc. Am. A 33, 1298 (2016)]. Here, we generalize the approach to three-dimensional (3D) Cartesian coordinates, allowing for the modeling of rectangular geometries in open space. The open boundary condition is a consequence of having an infinite computational domain described using basis functions that span the whole space. The strength of the method lies in discretizing the Fourier integrals using a non-uniform circular "dartboard" sampling of the Fourier k space. We show that our sampling technique leads to a more accurate description of the continuum of the radiation modes that leak out from the structure. We also compare our approach to conventional discretization with direct and inverse factorization rules commonly used in established Fourier modal methods. We apply our method to a variety of optical waveguide structures and demonstrate that the method leads to a significantly improved convergence, enabling more accurate and efficient modeling of open 3D nanophotonic structures.
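
    To make the "dartboard" idea concrete, the sketch below (Python; the ring spacing, ring count, and wavelength are illustrative assumptions, not the authors' discretization) generates a set of non-uniformly spaced (kx, ky) sample points on concentric rings, with the radial density controlled by a power-law stretch:

        import numpy as np

        def dartboard_kspace(k_max, n_rings=20, n_angles=32, power=2.0):
            """Non-uniform circular sampling of 2-D k-space on concentric rings.

            The power-law stretch clusters rings toward small k; the published
            method uses its own ring placement, so this only sketches the idea.
            """
            radii = k_max * np.linspace(0.0, 1.0, n_rings + 1)[1:] ** power
            angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            kx = np.outer(radii, np.cos(angles)).ravel()
            ky = np.outer(radii, np.sin(angles)).ravel()
            return kx, ky

        kx, ky = dartboard_kspace(k_max=2 * np.pi / 0.95e-6)  # hypothetical 950 nm wavelength
        print(kx.size, "k-space sample points")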

  2. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation, the overlapping method and the probability-based method, were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10,000 simulations and a calculation step length of 0.02 produced a satisfactory design space. In general, the overlapping method is easy to understand and can be carried out with several kinds of commercial software without writing code, but it does not indicate how reliably the process evaluation indexes will meet the standard when operating within the design space. The probability-based method is computationally more complex, but it provides this reliability by ensuring that the process indexes reach the standard within the acceptable probability threshold. In addition, the probability-based method produces no abrupt change of probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
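
    The probability-based algorithm lends itself to a compact Monte Carlo sketch (Python; the response-surface model, error level, standard, and threshold below are hypothetical placeholders, not the published process model): simulated experimental error is added to the predicted quality index at each candidate operating point, and the point is kept in the design space when the fraction of simulations meeting the standard exceeds the acceptable probability threshold.

        import numpy as np

        rng = np.random.default_rng(0)

        def predicted_index(x1, x2):
            # Hypothetical response-surface model of a process quality index
            return 60.0 + 25.0 * x1 + 15.0 * x2 - 10.0 * x1 * x2

        def pass_probability(x1, x2, n_sim=10_000, sigma=3.0, standard=75.0):
            # Simulate experimental error and count how often the standard is reached
            noise = rng.normal(0.0, sigma, n_sim)
            return np.mean(predicted_index(x1, x2) + noise >= standard)

        step = 0.02                                   # calculation step length, as in the abstract
        grid = np.arange(0.0, 1.0 + step / 2, step)   # normalized process parameters
        design_space = [(a, b) for a in grid for b in grid
                        if pass_probability(a, b) >= 0.90]   # acceptable probability threshold
        print(len(design_space), "operating points qualify for the design space")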

  3. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
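
    The splitting at the heart of the method can be sketched for a toy inseparable Hamiltonian H(q, p) = ½p²(1 + q²) (Python; the Hamiltonian, step size, and initial conditions are illustrative assumptions, not the paper's test problems). The phase space is doubled to (q, p, Q, P), the extended Hamiltonian H(q, P) + H(Q, p) splits into two exactly solvable pieces, and leapfrog alternates their flows; the paper's coordinate-mixing transformations, which keep the two copies from drifting apart over long integrations, are omitted here.

        # Toy inseparable Hamiltonian H(q, p) = 0.5 * p**2 * (1 + q**2)
        def dHdq(q, p):
            return p**2 * q

        def dHdp(q, p):
            return p * (1 + q**2)

        def flow_A(q, p, Q, P, h):
            # Exact flow of H(q, P): q and P are frozen, so p and Q drift linearly
            return q, p - h * dHdq(q, P), Q + h * dHdp(q, P), P

        def flow_B(q, p, Q, P, h):
            # Exact flow of H(Q, p): Q and p are frozen, so q and P drift linearly
            return q + h * dHdp(Q, p), p, Q, P - h * dHdq(Q, p)

        def leapfrog_step(q, p, Q, P, h):
            q, p, Q, P = flow_A(q, p, Q, P, 0.5 * h)
            q, p, Q, P = flow_B(q, p, Q, P, h)
            return flow_A(q, p, Q, P, 0.5 * h)

        q = Q = 1.0
        p = P = 0.3
        h, n_steps = 0.01, 10_000
        E0 = 0.5 * p**2 * (1 + q**2)
        for _ in range(n_steps):
            q, p, Q, P = leapfrog_step(q, p, Q, P, h)
        print("relative energy drift:", abs(0.5 * p**2 * (1 + q**2) - E0) / E0)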

  4. A facility for testing 10 to 100-kWe space power reactors

    NASA Astrophysics Data System (ADS)

    Carlson, William F.; Bitten, Ernest J.

    1993-01-01

    This paper describes an existing facility that could be used in a cost-effective manner to test space power reactors in the 10 to 100-kWe range before launch. The facility has been designed to conduct full power tests of 100-kWe SP-100 reactor systems and already has the structural features that would be required for lower power testing. The paper describes a reasonable scenario starting with the acceptance at the test site of the unfueled reactor assembly and the separately shipped nuclear fuel. After fueling the reactor and installing it in the facility, cold critical tests are performed, and the reactor is then shipped to the launch site. The availability of this facility represents a cost-effective means of performing the required prelaunch test program.

  5. Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model

    NASA Astrophysics Data System (ADS)

    McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert

    2016-08-01

    Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. Optimized design of the air knife successfully holds full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate error introduced along the optical path. Seeing errors for both temperature differentials tested in the initial experiment match well with seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With ambient temperature set to 0°C and conditioned temperature set to 20

  6. The two-dimensional tunnel structures of K3Sb5O14 and K2Sb4O11

    NASA Technical Reports Server (NTRS)

    Hong, H. Y.-P.

    1974-01-01

    The structures of K3Sb5O14 and K2Sb4O11 have been solved by the single-crystal X-ray direct method and the heavy-atom method, respectively. The structure of K3Sb5O14 is orthorhombic, with space group Pbam and cell parameters a = 24.247 (4), b = 7.157 (2), c = 7.334 (2) A, Z = 4. The structure of K2Sb4O11 is monoclinic, with space group C2/m and cell parameters a = 19.473 (4), b = 7.542 (1), c = 7.198 (1) A, beta = 94.82 (2) deg, Z = 4. A full-matrix least-squares refinement gave R = 0.072 and R = 0.067, respectively. In both structures, oxygen atoms form an octahedron around each Sb atom and an irregular polyhedron around each K atom. By sharing corners and edges, the octahedra form a skeleton network having intersecting b-axis and c-axis tunnels. The K(+) ions, which have more than ten oxygen near neighbors, are located in these tunnels. Evidence for K(+)-ion transport within and between tunnels comes from ion exchange of the alkali ions in molten salts and anisotropic temperature factors that are anomalously large in the direction of the tunnels.

  7. Functional requirements for design of the Space Ultrareliable Modular Computer (SUMC) system simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.; Hornfeck, W. A.

    1972-01-01

    The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.

  8. Systems and methods for free space optical communication

    DOEpatents

    Harper, Warren W [Benton City, WA; Aker, Pamela M [Richland, WA; Pratt, Richard M [Richland, WA

    2011-05-10

    Free space optical communication methods and systems, according to various aspects are described. The methods and systems are characterized by transmission of data through free space with a digitized optical signal acquired using wavelength modulation, and by discrimination between bit states in the digitized optical signal using a spectroscopic absorption feature of a chemical substance.

  9. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    NASA Astrophysics Data System (ADS)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children, with the intention to engage children and increase interest, rather than to formally teach concepts and skills. What is the educational quality of existing Computer Science resources and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study analysis. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  10. Characterization of Space Shuttle Ascent Debris Aerodynamics Using CFD Methods

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Rogers, Stuart E.

    2005-01-01

    An automated Computational Fluid Dynamics process for determining the aerodynamic characteristics of debris shedding from the Space Shuttle Launch Vehicle during ascent is presented. This process uses Cartesian, fully coupled, six-degree-of-freedom simulations of isolated debris pieces in a Monte Carlo fashion to produce models for the drag and crossrange behavior over a range of debris shapes and shedding scenarios. A validation of the Cartesian methods against ballistic range data for insulating foam debris shapes at flight conditions is included, along with validation of the resulting models. These models are integrated with the existing shuttle debris transport analysis software to provide an accurate and efficient engineering tool for analyzing debris sources and their potential for damage.

  11. Data systems and computer science space data systems: Onboard networking and testbeds

    NASA Technical Reports Server (NTRS)

    Dalton, Dan

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.

  12. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
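
    As a rough illustration of searching in a vector space (Python; the one-hot encoding, the mean-difference query vector, and the dot-product score are assumptions chosen for demonstration, not the authors' NPV or ODV construction): known binding and non-binding sequences are embedded as vectors, a query vector is formed pointing from the negative mean to the positive mean, and candidate windows are ranked by their projection onto it.

        import numpy as np

        BASES = "ACGT"

        def one_hot(seq):
            # Flatten a DNA sequence of length L into a 4*L binary vector
            v = np.zeros((len(seq), 4))
            for i, b in enumerate(seq):
                v[i, BASES.index(b)] = 1.0
            return v.ravel()

        def query_vector(positives, negatives):
            # Hypothetical query: direction from the mean negative to the mean positive vector
            pos = np.mean([one_hot(s) for s in positives], axis=0)
            neg = np.mean([one_hot(s) for s in negatives], axis=0)
            return pos - neg

        def best_window(sequence, qv, width):
            # Rank every window of the target sequence by its projection onto the query vector
            scores = [(i, float(one_hot(sequence[i:i + width]) @ qv))
                      for i in range(len(sequence) - width + 1)]
            return max(scores, key=lambda t: t[1])

        positives = ["ACGTGA", "ACGTCA", "ACGTGA"]     # toy known binding sites
        negatives = ["TTTAAA", "GGGCCC", "ATATAT"]     # toy background sequences
        qv = query_vector(positives, negatives)
        print(best_window("GGGACGTGAGGG", qv, width=6))  # (position, score) of the best window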

  13. Scaling Task Management in Space and Time: Reducing User Overhead in Ubiquitous-Computing Environments

    DTIC Science & Technology

    2005-03-28

    consequently users are torn between taking advantage of increasingly pervasive computing systems, and the price (in attention and skill) that they have to... advantage of the surrounding computing environments; and (c) that it is usable by non-experts. Second, from a software architect’s perspective, we...take full advantage of the computing systems accessible to them, much as they take advantage of the furniture in each physical space. In the example

  14. Performance Analysis of Entropy Methods on K Means in Clustering Process

    NASA Astrophysics Data System (ADS)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K-means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics are placed in other groups. The purpose of this clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantage of this method is that the number of clusters k is often not known beforehand. Furthermore, randomly chosen starting points may place two initial centroids close to each other. Therefore, the entropy method is used to determine the starting points for K-means; entropy provides a way to assign weights and make a decision from a set of alternatives, because it measures how well attributes discriminate among the records of a data set. Under the entropy criterion, the attribute with the highest variation receives the highest weight. The entropy method can thus help K-means determine starting points that are usually chosen at random, so that the clustering converges in fewer iterations than the standard K-means process. As a worked example using only 12 records from the postoperative-patient data set of the UCI Machine Learning Repository, the entropy-seeded K-means reached the desired end result in just two iterations.
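
    One plausible reading of the procedure is sketched below (Python; the classical entropy-weight formula and the seeding rule are assumptions, since the abstract does not spell them out): Shannon entropy of each normalized attribute is converted to a weight, records are scored by the weighted sum of their attributes, and the top-scoring records replace random points as the initial centroids.

        import numpy as np
        from sklearn.cluster import KMeans

        def entropy_weights(X):
            # Classical entropy-weight method: attributes whose values are spread less
            # evenly (lower entropy) are more discriminating and receive larger weights.
            P = X / X.sum(axis=0)
            P = np.where(P <= 0, 1e-12, P)
            e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
            d = 1.0 - e
            return d / d.sum()

        def entropy_seeded_kmeans(X, k):
            w = entropy_weights(X)
            scores = X @ w                            # weighted score of every record
            seeds = X[np.argsort(scores)[-k:]]        # highest-scoring records as initial centroids
            return KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))]) + 10.0  # positive toy data
        model = entropy_seeded_kmeans(X, k=2)
        print("iterations to converge:", model.n_iter_)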

  15. Computing the Dynamic Response of a Stratified Elastic Half Space Using Diffuse Field Theory

    NASA Astrophysics Data System (ADS)

    Sanchez-Sesma, F. J.; Perton, M.; Molina Villegas, J. C.

    2015-12-01

    The analytical solution for the dynamic response of an elastic half-space to a normal point load at the free surface is due to Lamb (1904); for a tangential force, we have Chao's (1960) formulae. For an arbitrary load at any depth within a stratified elastic half-space, the resulting elastic field can be given in the same fashion, by using an integral representation in the radial wavenumber domain. Typically, computations use the discrete wave number (DWN) formalism, and Fourier analysis allows for a solution in the space and time domains. Experimentally, these elastic Green's functions might be retrieved from ambient-vibration correlations when a diffuse field is assumed. In practice, the field may not be totally diffuse, and only parts of the Green's functions, associated with surface or body waves, are retrieved. In this communication, we explore the computation of Green's functions for layered media on top of a half-space using a set of equipartitioned elastic plane waves. Our formalism includes body and surface waves (Rayleigh and Love waves). The latter waves correspond to the classical representations in terms of normal modes in the asymptotic case of large separation distance between source and receiver. This approach allows computing Green's functions faster than DWN and separating the surface- and body-wave contributions in order to better represent the experimentally retrieved Green's functions.

  16. Space Archaeology: Attribute, Object, Task and Method

    NASA Astrophysics Data System (ADS)

    Wang, Xinyuan; Guo, Huadong; Luo, Lei; Liu, Chuansheng

    2017-04-01

    Archaeology takes the material remains of human activity as its research object and uses those fragmentary remains to reconstruct the humanistic and natural environments of different historical periods. Space archaeology is a new branch of archaeology. Its study object is the humanistic-natural complex, including the remains of human activities and living environments on the Earth's surface. Its research method, the application of space information technologies to this complex, is an innovative process of archaeological information acquisition, interpretation and reconstruction, aiming at 3-D dynamic reconstruction of cultural heritage through the construction of the digital cultural-heritage sphere. Space archaeology is highly interdisciplinary, linking several areas of the natural sciences, the social sciences and the humanities. Its task is to reveal the history, characteristics, and patterns of human activities in the past, as well as to understand the evolutionary processes guiding the relationship between humans and their environment. This paper summarizes six important aspects of space archaeology and five crucial recommendations for the establishment and development of this new discipline. The six important aspects are: (1) technologies and methods for non-destructive detection of archaeological sites; (2) space technologies for the protection and monitoring of cultural heritages; (3) digital environmental reconstruction of archaeological sites; (4) spatial data storage and data mining of cultural heritages; (5) virtual archaeology, digital reproduction and public information and presentation systems; and (6) the construction of a scientific platform of the digital cultural-heritage sphere. The five key recommendations for establishing the discipline of Space Archaeology are: (1) encouraging the full integration of the strengths of both archaeology and museology with space technology to promote the development of space technologies' application for cultural heritages; (2) a new

  17. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to evaluate relatively the usability of software and device operations on different computer systems. Experiments using three different systems, in this case a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
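
    The two rates follow directly from their definitions, as in this small sketch (Python; the bit counts and task time are made-up numbers for illustration):

        def usability_rates(di_bits, ci_bits, task_time_s):
            # Device-independent and computer-independent information transmission rates
            return di_bits / task_time_s, ci_bits / task_time_s

        rdi, rci = usability_rates(di_bits=42.0, ci_bits=18.0, task_time_s=35.0)
        print(f"RDI = {rdi:.2f} bit/s, RCI = {rci:.2f} bit/s")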

  18. Penalized Ordinal Regression Methods for Predicting Stage of Cancer in High-Dimensional Covariate Spaces.

    PubMed

    Gentry, Amanda Elswick; Jackson-Cook, Colleen K; Lyon, Debra E; Archer, Kellie J

    2015-01-01

    The pathological description of the stage of a tumor is an important clinical designation and is considered, like many other forms of biomedical data, an ordinal outcome. Currently, statistical methods for predicting an ordinal outcome using clinical, demographic, and high-dimensional correlated features are lacking. In this paper, we propose a method that fits an ordinal response model to predict an ordinal outcome for high-dimensional covariate spaces. Our method penalizes some covariates (high-throughput genomic features) without penalizing others (such as demographic and/or clinical covariates). We demonstrate the application of our method to predict the stage of breast cancer. In our model, breast cancer subtype is a nonpenalized predictor, and CpG site methylation values from the Illumina Human Methylation 450K assay are penalized predictors. The method has been made available in the ordinalgmifs package in the R programming environment.

  19. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was 134% in 2-dimensional space and 143% in 3-dimensional space.
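
    A toy 1-D simulation (Python; the unit tag spacing, noise-free detection model, and centroid estimator are simplifying assumptions) shows how the positioning error of such a k-NN style estimate can be evaluated as a function of the detection range expressed as a percentage of the tag spacing:

        import numpy as np

        def mean_positioning_error(r, spacing=1.0, n_trials=20_000, extent=100.0, seed=0):
            rng = np.random.default_rng(seed)
            tags = np.arange(0.0, extent + spacing, spacing)      # 1-D grid of reference tags
            readers = rng.uniform(0.25 * extent, 0.75 * extent, n_trials)
            errors = []
            for x in readers:
                detected = tags[np.abs(tags - x) <= r]            # every tag within the range
                if detected.size:
                    errors.append(abs(detected.mean() - x))       # centroid of detected tags
            return float(np.mean(errors))

        for pct in (100, 125, 150, 200):
            r = pct / 100.0                                       # range as % of tag spacing
            print(f"detection range {pct:3d}% of spacing -> mean error {mean_positioning_error(r):.3f}")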

  20. A Parametric k-Means Algorithm

    PubMed Central

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
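
    A minimal sketch of the idea for a univariate normal family (Python; the distribution, sample sizes, and k are illustrative choices, and the gas-mask example is not reproduced): the parameters are estimated by maximum likelihood, a very large sample is simulated from the fitted distribution, and ordinary k-means on that sample estimates the k principal points.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        data = rng.normal(loc=170.0, scale=8.0, size=500)          # observed measurements

        # Step 1: maximum-likelihood fit of the assumed parametric family (here a normal)
        mu_hat, sigma_hat = data.mean(), data.std()

        # Step 2: simulate a very large sample from the fitted distribution
        big_sample = rng.normal(mu_hat, sigma_hat, size=200_000).reshape(-1, 1)

        # Step 3: k-means on the simulated sample; its centroids estimate the k principal points
        k = 4
        parametric = np.sort(KMeans(n_clusters=k, n_init=10).fit(big_sample)
                             .cluster_centers_.ravel())
        nonparametric = np.sort(KMeans(n_clusters=k, n_init=10).fit(data.reshape(-1, 1))
                                .cluster_centers_.ravel())
        print("parametric k-means principal points:", np.round(parametric, 2))
        print("ordinary k-means cluster means:     ", np.round(nonparametric, 2))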

  1. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    NASA Technical Reports Server (NTRS)

    Doyle, Richard; Bergman, Larry; Some, Raphael; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and the mission; it can be aptly viewed as a "technology multiplier" in that advances in onboard computing provide dramatic improvements in flight functions and capabilities across the NASA mission classes, and will enable new flight capabilities and mission scenarios, increasing science and exploration return per mission-dollar.

  2. Computer Technology in California K-12 Schools: Uses, Best Practices, and Policy Implications.

    ERIC Educational Resources Information Center

    Umbach, Kenneth W.

    Computers and Internet access are becoming increasingly frequent tools and resources in California's K-12 schools. Discussions with teachers and other education personnel and a review of published documents and other sources show the range of uses found in California classrooms, suggest what are the best practices with respect to computer…

  3. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  4. Computational Assay of H7N9 Influenza Neuraminidase Reveals R292K Mutation Reduces Drug Binding Affinity

    NASA Astrophysics Data System (ADS)

    Woods, Christopher J.; Malaisree, Maturos; Long, Ben; McIntosh-Smith, Simon; Mulholland, Adrian J.

    2013-12-01

    The emergence of a novel H7N9 avian influenza that infects humans is a serious cause for concern. Of the genome sequences of H7N9 neuraminidase available, one contains a substitution of arginine to lysine at position 292, suggesting a potential for reduced drug binding efficacy. We have performed molecular dynamics simulations of oseltamivir, zanamivir and peramivir bound to H7N9, H7N9-R292K, and a structurally related H11N9 neuraminidase. They show that H7N9 neuraminidase is structurally homologous to H11N9, binding the drugs in identical modes. The simulations reveal that the R292K mutation disrupts drug binding in H7N9 in a comparable manner to that observed experimentally for H11N9-R292K. Absolute binding free energy calculations with the WaterSwap method confirm a reduction in binding affinity. This indicates that the efficacy of antiviral drugs against H7N9-R292K will be reduced. Simulations can assist in predicting disruption of binding caused by mutations in neuraminidase, thereby providing a computational `assay.'

  5. Increasing signal-to-noise ratio of swept-source optical coherence tomography by oversampling in k-space

    NASA Astrophysics Data System (ADS)

    Nagib, Karim; Mezgebo, Biniyam; Thakur, Rahul; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-03-01

    Optical coherence tomography systems suffer from noise that can reduce the ability to interpret reconstructed images correctly. We describe a method to increase the signal-to-noise ratio of swept-source optical coherence tomography (SS-OCT) using oversampling in k-space. Due to this oversampling, information redundancy is introduced in the measured interferogram that can be used to reduce white noise in the reconstructed A-scan. We applied our novel scaled nonuniform discrete Fourier transform to oversampled SS-OCT interferograms to reconstruct images of a salamander egg. The peak signal-to-noise ratio (PSNR) between the images reconstructed from interferograms sampled at 250 MS/s and 50 MS/s demonstrates that this oversampling increased the signal-to-noise ratio by 25.22 dB.
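
    The redundancy argument can be illustrated generically (Python; this uses plain block averaging of adjacent samples rather than the authors' scaled nonuniform discrete Fourier transform, and the fringe count, rates, and noise level are made up): a noisy interferogram sampled five times faster than needed is averaged back down to the base rate, suppressing white noise by roughly the square root of the oversampling factor.

        import numpy as np

        rng = np.random.default_rng(0)
        base_samples, oversample, n_fringes = 1_000, 5, 64           # illustrative numbers only

        t = np.linspace(0.0, 1.0, base_samples * oversample, endpoint=False)
        clean = np.cos(2 * np.pi * n_fringes * t)                    # idealized interferogram fringes
        noisy = clean + rng.normal(0.0, 0.5, t.size)                 # white detection noise

        # Average each group of `oversample` adjacent samples back down to the base rate
        averaged = noisy.reshape(-1, oversample).mean(axis=1)
        clean_base = clean.reshape(-1, oversample).mean(axis=1)

        def snr_db(signal, estimate):
            noise = estimate - signal
            return 10.0 * np.log10(np.mean(signal**2) / np.mean(noise**2))

        print("SNR of the raw oversampled data :", round(snr_db(clean, noisy), 1), "dB")
        print("SNR after averaging to base rate:", round(snr_db(clean_base, averaged), 1), "dB")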

  6. Planned development of a 3D computer based on free-space optical interconnects

    NASA Astrophysics Data System (ADS)

    Neff, John A.; Guarino, David R.

    1994-05-01

    Free-space optical interconnection has the potential to provide upwards of a million data channels between planes of electronic circuits. This may result in the planar board and backplane structures of today giving way to 3-D stacks of wafers or multi-chip modules interconnected via channels running perpendicular to the processor planes, thereby eliminating much of the packaging overhead. Three-dimensional packaging is very appealing for tightly coupled fine-grained parallel computing where the need for massive numbers of interconnections is severely taxing the capabilities of the planar structures. This paper describes a coordinated effort by four research organizations to demonstrate an operational fine-grained parallel computer that achieves global connectivity through the use of free space optical interconnects.

  7. Optical Interconnection Via Computer-Generated Holograms

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Zhou, Shaomin

    1995-01-01

    Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.

  8. HZETRN: Description of a free-space ion and nucleon transport and shielding computer program

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Cucinotta, Francis A.; Shinn, Judy L.; Badhwar, Gautam D.; Silberberg, R.; Tsao, C. H.; Townsend, Lawrence W.; Tripathi, Ram K.

    1995-01-01

    The high-charge-and energy (HZE) transport computer program HZETRN is developed to address the problems of free-space radiation transport and shielding. The HZETRN program is intended specifically for the design engineer who is interested in obtaining fast and accurate dosimetric information for the design and construction of space modules and devices. The program is based on a one-dimensional space-marching formulation of the Boltzmann transport equation with a straight-ahead approximation. The effect of the long-range Coulomb force and electron interaction is treated as a continuous slowing-down process. Atomic (electronic) stopping power coefficients with energies above a few A MeV are calculated by using Bethe's theory including Bragg's rule, Ziegler's shell corrections, and effective charge. Nuclear absorption cross sections are obtained from fits to quantum calculations and total cross sections are obtained with a Ramsauer formalism. Nuclear fragmentation cross sections are calculated with a semiempirical abrasion-ablation fragmentation model. The relation of the final computer code to the Boltzmann equation is discussed in the context of simplifying assumptions. A detailed description of the flow of the computer code, input requirements, sample output, and compatibility requirements for non-VAX platforms are provided.

  9. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps and produce computer generated animation.

  10. Rapid exploration of configuration space with diffusion-map-directed molecular dynamics.

    PubMed

    Zheng, Wenwei; Rohrdanz, Mary A; Clementi, Cecilia

    2013-10-24

    The gap between the time scale of interesting behavior in macromolecular systems and that which our computational resources can afford often limits molecular dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named diffusion-map-directed MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses a diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here, we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems, we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300 K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD.
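
    A compact sketch of the diffusion-map construction that guides the sampling (Python; the kernel bandwidth, the stand-in data, and the fixed number of retained coordinates are illustrative, and the adaptive machinery of DM-d-MD is omitted): a Gaussian kernel over pairwise distances is row-normalized into a Markov matrix whose leading non-trivial eigenvectors provide the low-dimensional coordinates used to decide where sampling is still sparse.

        import numpy as np

        def diffusion_map(X, epsilon, n_coords=2):
            # Pairwise squared distances between configurations (rows of X)
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
            K = np.exp(-d2 / epsilon)                # Gaussian kernel
            P = K / K.sum(axis=1, keepdims=True)     # row-normalized Markov matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)
            keep = order[1:n_coords + 1]             # skip the trivial constant eigenvector
            return vecs.real[:, keep] * vals.real[keep]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))                # stand-in for sampled molecular configurations
        coords = diffusion_map(X, epsilon=1.0)
        print(coords.shape)                          # (200, 2) diffusion coordinates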

  11. Rapid Exploration of Configuration Space with Diffusion Map-directed-Molecular Dynamics

    PubMed Central

    Zheng, Wenwei; Rohrdanz, Mary A.; Clementi, Cecilia

    2013-01-01

    The gap between the timescale of interesting behavior in macromolecular systems and that which our computational resources can afford oftentimes limits Molecular Dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named Diffusion Map-directed-MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD. PMID:23865517

  12. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
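
    A simplified sketch of the k-vector range search (Python; the index conventions and the safety margin on the lower bound are implementation choices of this sketch, not Mortari's reference code): the data are sorted once, a straight line through the sorted values is tabulated into the k-vector, and a range query then reads candidate indices directly from that table, with only the few endpoint candidates checked explicitly.

        import numpy as np

        def build_k_vector(values, xi=1e-9):
            """Preprocess: sort once and tabulate the k-vector against a reference line."""
            s = np.sort(values)
            n = s.size
            m = (s[-1] - s[0] + 2 * xi) / (n - 1)       # slope of the line through the sorted data
            q = s[0] - xi - m                           # intercept, so that line(1) = s[0] - xi
            line = m * np.arange(1, n + 1) + q
            k = np.searchsorted(s, line, side="right")  # k[j] = number of elements <= line at index j+1
            return s, k, m, q

        def k_vector_range(s, k, m, q, lo, hi):
            """Return the sorted elements falling in [lo, hi]."""
            n = s.size
            jb = int(np.clip(np.floor((lo - q) / m) - 1, 1, n))  # conservative lower line index
            jt = int(np.clip(np.ceil((hi - q) / m), 1, n))       # upper line index
            cand = s[k[jb - 1]:k[jt - 1]]               # candidate slice read from the k-vector
            return cand[(cand >= lo) & (cand <= hi)]    # trim the few spurious endpoint candidates

        rng = np.random.default_rng(0)
        data = rng.uniform(0.0, 100.0, 1_000_000)
        s, k, m, q = build_k_vector(data)
        hits = k_vector_range(s, k, m, q, 42.0, 42.01)
        assert np.array_equal(hits, np.sort(data[(data >= 42.0) & (data <= 42.01)]))
        print(hits.size, "elements found in the query range")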

  13. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Volume 2: Baseline architecture report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  14. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Volume 1: Baseline architecture report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  15. Comparison Analysis of Recognition Algorithms of Forest-Cover Objects on Hyperspectral Air-Borne and Space-Borne Images

    NASA Astrophysics Data System (ADS)

    Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.

    2017-12-01

    The basic model for the recognition of natural and anthropogenic objects using their spectral and textural features is described in the problem of hyperspectral air-borne and space-borne imagery processing. The model is based on improvements of the Bayesian classifier that is a computational procedure of statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples are shown of various modifications of the Bayesian classifier and Support Vector Machine method. Examples are provided of comparing these classifiers and a metrical classifier that operates on finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the " K-weighted neighbors" method that is close to the nonparametric Bayesian classifier.

  16. Subspace K-means clustering.

    PubMed

    Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-12-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).

  17. The Effect of Computer Assisted and Computer Based Teaching Methods on Computer Course Success and Computer Using Attitudes of Students

    ERIC Educational Resources Information Center

    Tosun, Nilgün; Suçsuz, Nursen; Yigit, Birol

    2006-01-01

    The purpose of this research was to investigate the effects of the computer-assisted and computer-based instructional methods on students' achievement in computer classes and on their attitudes towards using computers. The study, which was completed in 6 weeks, was carried out with 94 sophomores studying in the formal education program of Primary…

  18. Recursive Techniques for Computing Gluon Scattering in Anti-de-Sitter Space

    NASA Astrophysics Data System (ADS)

    Shyaka, Claude; Kharel, Savan

    2016-03-01

    The anti-de Sitter/conformal field theory correspondence is a relationship between two kinds of physical theories. On one side of the duality is a special type of quantum (conformal) field theory known as Yang-Mills theory. These quantum field theories are known to be equivalent to theories of gravity in Anti-de Sitter (AdS) space. The physical observables in the theory are the correlation functions that live on the boundary of AdS space. In general, correlation functions are computed in configuration space, and the expressions are extremely complicated. Using a momentum basis and recursive techniques developed by Raju, we extend tree-level results to four- and five-point correlation functions in Yang-Mills theory in Anti-de Sitter space. In addition, we show that for certain external helicities the correlation functions have a simple analytic structure. Finally, we discuss how one can generalize these results to n-point functions. Hendrix College Odyssey Grant.

  19. Packet spacing : an enabling mechanism for delivering multimedia content in computational grids /

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, A. C.; Feng, W. C.; Belford, Geneva G.

    2001-01-01

    Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating, congestion-control mechanism, and most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980's due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counterintuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss over 50% without adversely affecting delivered throughput. Keywords: network protocol, multimedia, packet spacing, streaming, TCP, UDP, rate-adjusting congestion control, computational grid, Access Grid.
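
    The core mechanism, inserting a delay between datagrams so a UDP stream does not exceed a target rate, can be sketched in a few lines (Python; the destination address, packet size, and rate are placeholders, and the paper's control-feedback loop that adapts the spacing is not shown):

        import socket
        import time

        def paced_udp_send(payloads, dest, rate_bps):
            """Send UDP datagrams with inter-packet spacing that caps the average rate."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            next_send = time.monotonic()
            for data in payloads:
                gap = len(data) * 8 / rate_bps        # seconds each packet "occupies" at the target rate
                now = time.monotonic()
                if next_send > now:
                    time.sleep(next_send - now)       # pace the stream instead of bursting
                sock.sendto(data, dest)
                next_send = max(now, next_send) + gap
            sock.close()

        # Hypothetical usage: ~2 Mbit/s stream of 1200-byte packets to a local receiver
        packets = [b"\x00" * 1200 for _ in range(1000)]
        paced_udp_send(packets, ("127.0.0.1", 9999), rate_bps=2_000_000)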

  20. Body MR Imaging: Artifacts, k-Space, and Solutions

    PubMed Central

    Seethamraju, Ravi T.; Patel, Pritesh; Hahn, Peter F.; Kirsch, John E.; Guimaraes, Alexander R.

    2015-01-01

    Body magnetic resonance (MR) imaging is challenging because of the complex interaction of multiple factors, including motion arising from respiration and bowel peristalsis, susceptibility effects secondary to bowel gas, and the need to cover a large field of view. The combination of these factors makes body MR imaging more prone to artifacts, compared with imaging of other anatomic regions. Understanding the basic MR physics underlying artifacts is crucial to recognizing the trade-offs involved in mitigating artifacts and improving image quality. Artifacts can be classified into three main groups: (a) artifacts related to magnetic field imperfections, including the static magnetic field, the radiofrequency (RF) field, and gradient fields; (b) artifacts related to motion; and (c) artifacts arising from methods used to sample the MR signal. Static magnetic field homogeneity is essential for many MR techniques, such as fat saturation and balanced steady-state free precession. Susceptibility effects become more pronounced at higher field strengths and can be ameliorated by using spin-echo sequences when possible, increasing the receiver bandwidth, and aligning the phase-encoding gradient with the strongest susceptibility gradients, among other strategies. Nonuniformities in the RF transmit field, including dielectric effects, can be minimized by applying dielectric pads or imaging at lower field strength. Motion artifacts can be overcome through respiratory synchronization, alternative k-space sampling schemes, and parallel imaging. Aliasing and truncation artifacts derive from limitations in digital sampling of the MR signal and can be rectified by adjusting the sampling parameters. Understanding the causes of artifacts and their possible solutions will enable practitioners of body MR imaging to meet the challenges of novel pulse sequence design, parallel imaging, and increasing field strength. ©RSNA, 2015 PMID:26207581

  1. S-band low noise amplifier and 40 kW high power amplifier subsystems of Japanese Deep Space Earth Station

    NASA Astrophysics Data System (ADS)

    Honma, K.; Handa, K.; Akinaga, W.; Doi, M.; Matsuzaki, O.

    This paper describes the design and the performance of the S-band low noise amplifier and the S-band high power amplifier that have been developed for the Usuda Deep Space Station of the Institute of Space and Astronautical Science (ISAS), Japan. The S-band low noise amplifier consists of a helium gas-cooled parametric amplifier followed by three-stage FET amplifiers and has a noise temperature of 8 K. The high power amplifier is composed of two 28 kW klystrons, capable of transmitting 40 kW continuously when two klystrons are combined. Both subsystems are operating quite satisfactorily in the tracking of Sakigake and Suisei, the Japanese interplanetary probes for Halley's comet exploration, launched by ISAS in 1985.

  2. Effects of three drying methods of post space dentin bonding used in a direct resin composite core build-up method.

    PubMed

    Iwashita, Taichi; Mine, Atsushi; Matsumoto, Mariko; Nakatani, Hayaki; Higashi, Mami; Kawaguchi-Uemura, Asuka; Kabetani, Tomoshige; Tajiri, Yuko; Imai, Dai; Hagino, Ryosuke; Miura, Jiro; Minamino, Takuya; Yatani, Hirofumi

    2018-06-14

    The purpose of this study was to evaluate drying methods for post space dentin bonding in a direct resin composite core build-up method. Experiment 1: Four root canal plastic models, having diameters of 1.0 or 1.8 mm and parallel or tapered shapes, were prepared. After drying each post space using three drying methods (air drying, paper-point drying, or ethanol drying, which involves filling the space with 99.5 vol% ethanol followed by air drying), the residual liquid in the models was weighed. Experiment 2: Thirty endodontically treated single-root teeth were dried using the above-described drying methods and filled with dual-cure resin composite. The bonded specimens were sectioned into square beams of approximately 1 mm² for microtensile bond strength (μTBS) testing. Nine teeth were observed through transmission electron microscopy (TEM) and micro computed tomography (μCT). The weight of residual liquid and μTBS were analyzed using Scheffé multiple comparison. Experiment 1: The results of air drying were significantly different from those of paper-point drying (p<0.001) and ethanol drying (p<0.001), and no significant difference was observed between paper-point drying and ethanol drying. Experiment 2: The μTBS significantly decreased in the order of ethanol drying, paper-point drying, and air drying (air drying/ethanol drying: p<0.001, air drying/paper-point drying: p=0.048, ethanol drying/paper-point drying: p=0.032). TEM and μCT observation revealed a sufficient dentin/adhesive interface in the ethanol drying group. Ethanol drying was found to be more effective for post space dentin bonding, as compared with air drying and paper-point drying. Copyright © 2018 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  3. Space radiator simulation manual for computer code

    NASA Technical Reports Server (NTRS)

    Black, W. Z.; Wulff, W.

    1972-01-01

    A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis which analyzes a symmetrical fin panel and an approximate analysis that predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady state performance including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation is included. The input required for the execution of all program options is described. Several examples of program output are contained in this section. Sample output includes the radiator performance during ascent, reentry and orbit.

  4. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., 'practice' using a computer keyboard, part of equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  5. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., look with curiosity at the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  6. Audubon Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Audubon Elementary School, Merritt Island, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Audubon is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  7. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., eagerly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  8. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., excitedly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  9. Microfluidic Leaching of Soil Minerals: Release of K+ from K Feldspar

    PubMed Central

    Ciceri, Davide; Allanore, Antoine

    2015-01-01

    The rate of K+ leaching from soil minerals such as K-feldspar is believed to be too slow to provide agronomic benefit. Currently, theories and methods available to interpret kinetics of mineral processes in soil fail to consider its microfluidic nature. In this study, we measure the leaching rate of K+ ions from a K-feldspar-bearing rock (syenite) in a microfluidic environment, and demonstrate that at the spatial and temporal scales experienced by crop roots, K+ is available at a faster rate than that measured with conventional apparatuses. We present a device to investigate kinetics of mineral leaching at an unprecedented simultaneous resolution of space (~10¹-10² μm), time (~10¹-10² min), and fluid volume (~10⁰-10¹ mL). Results obtained from such a device challenge the notion that silicate minerals cannot be used as alternative fertilizers for tropical soils. PMID:26485160

  10. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  11. Enhanced Sampling Methods for the Computation of Conformational Kinetics in Macromolecules

    NASA Astrophysics Data System (ADS)

    Grazioli, Gianmarc

    Calculating the kinetics of conformational changes in macromolecules, such as proteins and nucleic acids, is still very much an open problem in theoretical chemistry and computational biophysics. If it were feasible to run large sets of molecular dynamics trajectories that begin in one configuration and terminate when reaching another configuration of interest, calculating kinetics from molecular dynamics simulations would be simple, but in practice, configuration spaces encompassing all possible configurations for even the simplest of macromolecules are far too vast for such a brute-force approach. In fact, many problems related to searches of configuration spaces, such as protein structure prediction, are considered to be NP-hard. Two approaches to addressing this problem are either to develop methods for enhanced sampling of trajectories that confine the search to productive trajectories without loss of temporal information, or to develop coarse-grained methodologies that recast the problem in reduced spaces that can be searched exhaustively. This thesis begins with a description of work carried out in the vein of the second approach, in which a Smoluchowski diffusion equation model was developed that accurately reproduces the rate vs. force relationship observed in the mechano-catalytic disulphide bond cleavage during thioredoxin-catalyzed reduction of disulphide bonds. Next, three novel enhanced sampling methods developed in the vein of the first approach are described, which can be employed either separately or in conjunction with each other to autonomously define a set of energetically relevant subspaces in configuration space, accelerate trajectories between the interfaces dividing the subspaces while preserving the distribution of unassisted transition times between subspaces, and approximate time correlation functions from the kinetic data collected from the transitions between interfaces.
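
    The rate vs. force behaviour mentioned above can be illustrated with the standard mean first-passage time (MFPT) expression for one-dimensional Smoluchowski diffusion over a force-tilted barrier. The potential, diffusion coefficient, and force range below are invented for illustration and are not the thesis model.

    ```python
    # Minimal sketch (toy potential): escape rate vs. pulling force from the 1-D
    # Smoluchowski MFPT double integral, reflecting boundary at a, absorbing at b:
    #   tau = (1/D) * int_a^b exp(+U/kBT) [ int_a^y exp(-U/kBT) dz ] dy,  U(x) -> U(x) - F*x
    import numpy as np

    kBT = 4.1e-21        # thermal energy at ~300 K, J
    D = 1e-12            # assumed diffusion coefficient, m^2/s
    a, b = 0.0, 0.4e-9   # reaction-coordinate interval, m
    x = np.linspace(a, b, 2001)
    dx = x[1] - x[0]

    def U0(x):
        # smooth barrier of about 10 kBT centred mid-interval
        return 10.0 * kBT * np.exp(-((x - 0.2e-9) / 0.05e-9) ** 2)

    def mfpt(force):
        U = U0(x) - force * x
        inner = np.cumsum(np.exp(-U / kBT)) * dx      # int_a^y exp(-U/kBT) dz
        return np.trapz(np.exp(U / kBT) * inner, x) / D

    for F in [0.0, 5e-12, 10e-12, 20e-12]:            # pulling forces, 0-20 pN
        print(f"F = {F * 1e12:4.0f} pN  ->  escape rate ~ {1.0 / mfpt(F):.3e} 1/s")
    ```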

  12. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
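
    The strategy in the last sentence (run the expensive routine k times at perturbed inputs, fit a polynomial to the k responses, then integrate probabilities on the explicit surrogate) can be sketched as follows. The routine, input statistics, and failure threshold below are placeholders, and plain Monte Carlo on the cheap surrogate stands in for the FPI step.

    ```python
    # Minimal sketch: quadratic response surface fitted to k perturbed runs of an
    # expensive routine, then a cheap probability estimate on the surrogate.
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_routine(x):
        # stand-in for the real structural code: response Y of two random variables
        return 3.0 + 1.5 * x[0] - 0.8 * x[1] + 0.4 * x[0] * x[1]

    mu, sd = np.array([1.0, 2.0]), np.array([0.2, 0.3])   # assumed input statistics

    # 1) k perturbed runs around the mean values
    k_runs = 9
    X = mu + sd * rng.uniform(-2.0, 2.0, size=(k_runs, 2))
    Y = np.array([expensive_routine(x) for x in X])

    # 2) fit Y ~ 1, x1, x2, x1^2, x2^2, x1*x2 by least squares
    def features(X):
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

    coef, *_ = np.linalg.lstsq(features(X), Y, rcond=None)

    # 3) probability integration on the explicit surrogate (MC used here in place of FPI)
    samples = rng.normal(mu, sd, size=(200_000, 2))
    y_hat = features(samples) @ coef
    print("estimated P(Y > 6.0):", np.mean(y_hat > 6.0))   # 6.0 is an assumed threshold
    ```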

  13. Water demand forecasting: review of soft computing methods.

    PubMed

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
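
    As an illustration of the kind of ANN forecaster the review covers, the sketch below trains a small multilayer perceptron on lagged daily consumption. The demand series, lag depth, and network size are assumptions chosen only to make the example self-contained.

    ```python
    # Minimal sketch (synthetic data): short-term water demand forecasting with a
    # small ANN on the previous seven days of consumption.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_percentage_error

    rng = np.random.default_rng(1)
    days = np.arange(730)
    demand = (100 + 10 * np.sin(2 * np.pi * days / 7)          # weekly cycle
              + 20 * np.sin(2 * np.pi * days / 365)            # seasonal cycle
              + rng.normal(0, 3, days.size))                   # noise

    lags = 7
    X = np.column_stack([demand[i:-(lags - i)] for i in range(lags)])
    y = demand[lags:]

    split = 600
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print("MAPE on held-out days:", mean_absolute_percentage_error(y[split:], pred))
    ```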

  14. Computational State Space Models for Activity and Intention Recognition. A Feasibility Study

    PubMed Central

    Krüger, Frank; Nyolt, Martin; Yordanova, Kristina; Hein, Albert; Kirste, Thomas

    2014-01-01

    Background Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i.e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods A typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results The symbolic domain model was found to have more than states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially worse than a marginal filtering procedure. Conclusions Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance
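
    The marginal filtering procedure favoured in the results is, at its core, exact Bayesian filtering over a discrete state space: predict with the transition model, weight by the observation likelihood, renormalise. The toy three-state model below is an illustration only and is unrelated to the paper's activity-of-daily-living domain model.

    ```python
    # Minimal sketch (toy model): exact marginal filtering over a discrete state space.
    # A particle filter would approximate the same posterior with weighted samples.
    import numpy as np

    states = ["prepare", "cook", "eat"]            # toy activity states
    T = np.array([[0.8, 0.2, 0.0],                 # transition probabilities
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
    L = np.array([[0.9, 0.1],                      # p(observation | state),
                  [0.4, 0.6],                      # two discretised sensor values
                  [0.2, 0.8]])

    belief = np.array([1.0, 0.0, 0.0])             # start in "prepare"
    for z in [0, 0, 1, 1, 1]:                      # toy sensor stream
        belief = T.T @ belief                      # predict
        belief *= L[:, z]                          # update
        belief /= belief.sum()                     # renormalise
        print({s: round(p, 3) for s, p in zip(states, belief)})
    ```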

  15. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
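
    The MC baseline amounts to sampling the uncertain inputs, propagating each sample through the dynamics, and summarising the resulting impact times. The sketch below uses a deliberately crude exponential-atmosphere decay model and invented uncertainty levels; it shows the mechanics of the baseline, not the six degrees-of-freedom model of the study.

    ```python
    # Minimal sketch (toy model): Monte Carlo propagation of input uncertainty into
    # an entry-time distribution.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    beta = rng.normal(120.0, 15.0, n)                        # ballistic coefficient, kg/m^2
    rho_scale = rng.lognormal(mean=0.0, sigma=0.2, size=n)   # density-model scale factor

    def entry_time_hours(beta, rho_scale, h0=180e3, dt=60.0):
        """Hours until the toy orbit decays from h0 to the 100 km entry interface."""
        r, v, H = 6.371e6 + h0, 7800.0, 30e3
        h, t = h0, 0.0
        while h > 100e3:
            rho = rho_scale * 5e-10 * np.exp((h0 - h) / H)   # toy exponential atmosphere
            h -= rho * v * r / beta * dt                     # crude altitude decay rate
            t += dt
        return t / 3600.0

    times = np.array([entry_time_hours(b, s) for b, s in zip(beta, rho_scale)])
    lo, hi = np.percentile(times, [5, 95])
    print(f"median entry time: {np.median(times):.1f} h, 5-95% interval: {lo:.1f}-{hi:.1f} h")
    ```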

  16. Computationally mapping sequence space to understand evolutionary protein engineering.

    PubMed

    Armstrong, Kathryn A; Tidor, Bruce

    2008-01-01

    Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures.
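
    The interaction between error-prone PCR mutation rate and the number of rounds can be mimicked with a very small simulation. The fitness landscape below (fraction of positions matching a target sequence) is an invented stand-in for the structure-based feasibility maps used in the paper.

    ```python
    # Minimal sketch (toy landscape): error-prone-PCR-style evolution, comparing a low
    # mutation rate over many rounds with a high rate over few rounds.
    import numpy as np

    rng = np.random.default_rng(3)
    TARGET = rng.integers(0, 4, 50)        # toy 4-letter "feasible" sequence

    def fitness(seq):
        return np.mean(seq == TARGET)      # fraction of matching positions

    def evolve(mutation_rate, rounds, pop_size=200):
        pop = np.tile(rng.integers(0, 4, TARGET.size), (pop_size, 1))   # one start sequence
        for _ in range(rounds):
            mask = rng.random(pop.shape) < mutation_rate                # mutagenesis step
            pop = np.where(mask, rng.integers(0, 4, pop.shape), pop)
            scores = np.array([fitness(s) for s in pop])
            keep = np.argsort(scores)[-pop_size // 5:]                  # select top 20%
            pop = pop[rng.choice(keep, pop_size)]                       # amplify survivors
        return max(fitness(s) for s in pop)

    for rate, rounds in [(0.01, 40), (0.05, 8), (0.20, 2)]:             # equal total load
        best = np.mean([evolve(rate, rounds) for _ in range(5)])
        print(f"rate = {rate:.2f}, rounds = {rounds:2d}: best fitness ~ {best:.2f}")
    ```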

  17. Analyzing costs of space debris mitigation methods

    NASA Astrophysics Data System (ADS)

    Wiedemann, C.; Krag, H.; Bendisch, J.; Sdunnus, H.

    The steadily increasing number of space objects poses a considerable hazard to all kinds of spacecraft. To reduce the risks to future space missions, different debris mitigation measures and spacecraft protection techniques have been investigated during the last years. However, the economic efficiency has not yet been considered in this context. This economic background is not always clear to satellite operators and the space industry. Current studies have the objective of evaluating the mission costs due to space debris in a business-as-usual (no mitigation) scenario compared to the mission costs considering debris mitigation. The aim is an estimation of the time until the investment in debris mitigation will lead to an effective reduction of mission costs. This paper presents the results of investigations on the key problems of cost estimation for spacecraft and the influence of debris mitigation and shielding on cost. The shielding of a satellite can be an effective method to protect the spacecraft against debris impact. Mitigation strategies like the reduction of orbital lifetime and de- or re-orbit of non-operational satellites are methods to control the space debris environment. These methods result in an increase of costs. In a first step, the overall costs of different types of unmanned satellites are analyzed. The key problem is that it is not possible to provide a simple cost model that can be applied to all types of satellites. Unmanned spacecraft differ very much in mission, complexity of design, payload and operational lifetime. It is important to classify relevant cost parameters and investigate their influence on the respective mission. The theory of empirical cost estimation and existing cost models are discussed. A selected cost model is simplified and generalized for an application to all operational satellites. In a next step, the influence of space debris on cost is treated, if the implementation of mitigation strategies is considered.

  18. k(+)-buffer: An Efficient, Memory-Friendly and Dynamic k-buffer Framework.

    PubMed

    Vasilakis, Andreas-Alexandros; Papaioannou, Georgios; Fudos, Ioannis

    2015-06-01

    Depth-sorted fragment determination is fundamental for a host of image-based techniques which simulate complex rendering effects. It is also a challenging task in terms of the time and space required when rasterizing scenes with high depth complexity. When low graphics memory requirements are of utmost importance, k-buffer can objectively be considered the most preferred framework, which advantageously ensures the correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm at the expense of increased memory or degraded performance, appropriate tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce k(+)-buffer, a fast framework that accurately simulates the behavior of k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k-foremost fragments per pixel by exploiting pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth complexity frequencies, (b) minimize the allocated size of k-buffer according to different application goals and hardware limitations via a straightforward depth histogram analysis and (c) manage local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation is provided demonstrating the advantages of our work over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
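
    The max-heap variant reduces, per pixel, to keeping only the k front-most fragments while fragments arrive in arbitrary order. The CPU-side sketch below shows that bounded-heap idea in isolation; it is not the GPU implementation and ignores pixel synchronization and culling.

    ```python
    # Minimal sketch: keep the k smallest fragment depths with a bounded max-heap.
    # Python's heapq is a min-heap, so stored depths are negated.
    import heapq
    import random

    def k_foremost(depths, k):
        """Return the k closest fragment depths in front-to-back order."""
        heap = []                              # holds -depth of the kept fragments
        for d in depths:
            if len(heap) < k:
                heapq.heappush(heap, -d)
            elif d < -heap[0]:                 # closer than the farthest kept fragment
                heapq.heapreplace(heap, -d)    # evict the current farthest
        return sorted(-x for x in heap)

    random.seed(0)
    fragments = [random.random() for _ in range(100)]   # toy per-pixel depths
    print(k_foremost(fragments, k=4))
    ```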

  19. High-efficiency 3 W/40 K single-stage pulse tube cryocooler for space application

    NASA Astrophysics Data System (ADS)

    Zhang, Ankuo; Wu, Yinong; Liu, Shaoshuai; Liu, Biqiang; Yang, Baoyu

    2018-03-01

    Temperature is an extremely important parameter for space-borne infrared detectors. To develop a quantum-well infrared photodetector (QWIP), a high-efficiency Stirling-type pulse tube cryocooler (PTC) has been designed, manufactured and experimentally investigated for providing a large cooling power at 40 K cold temperature. Simulated and experimental studies were carried out to analyse the effects of low temperature on different energy flows and losses, and the performance of the PTC was improved by optimizing components and parameters such as regenerator and operating frequency. A no-load lowest temperature of 26.2 K could be reached at a frequency of 51 Hz, and the PTC could efficiently offer cooling power of 3 W at 40 K cold temperature when the input power was 225 W. The efficiency relative to the Carnot efficiency was approximately 8.4%.
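
    The quoted relative efficiency can be checked from the reported numbers, assuming a heat-rejection temperature of roughly 293 K (the abstract does not state it):

    ```python
    # Quick check of the ~8.4% figure under an assumed heat-rejection temperature.
    T_cold, T_hot = 40.0, 293.0        # K (T_hot is an assumption)
    Q_cold, W_in = 3.0, 225.0          # cooling power and electrical input, W

    cop = Q_cold / W_in                        # actual coefficient of performance
    cop_carnot = T_cold / (T_hot - T_cold)     # ideal Carnot COP
    print(f"relative efficiency = {100 * cop / cop_carnot:.1f} % of Carnot")
    ```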

  20. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.