NASA Technical Reports Server (NTRS)
Roskam, J.
1983-01-01
The transmission loss characteristics of panels, measured using the acoustic intensity technique, are presented. The theoretical formulation, installation of hardware, modifications to the test facility, and development of computer programs and test procedures are described. A listing of all the programs is also provided. The initial test results indicate that the acoustic intensity technique is easily adapted to measure the transmission loss characteristics of panels. Use of this method will give average transmission loss values. The fixtures developed to position the microphones along the grid points are very useful in plotting the intensity maps of vibrating panels.
NASA Technical Reports Server (NTRS)
Dorband, John E.
1987-01-01
Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.
NASA Astrophysics Data System (ADS)
Tong, Minh Q.; Hasan, M. Monirul; Gregory, Patrick D.; Shah, Jasmine; Park, B. Hyle; Hirota, Koji; Liu, Junze; Choi, Andy; Low, Karen; Nam, Jin
2017-02-01
We demonstrate a computationally efficient optical coherence elastography (OCE) method based on fringe washout. By introducing ultrasound in alternating depth profiles, we can obtain information on the mechanical properties of a sample within the acquisition of a single image. This is achieved by simply comparing the intensity in adjacent depth profiles in order to quantify the degree of fringe washout. Phantom agar samples with various densities were measured and quantified by our OCE technique, and a correlation to Young's modulus measurements by atomic force microscopy (AFM) was observed. Knee cartilage samples from monoiodoacetate-induced arthritis (MIA) rat models were used to replicate cartilage damage, and our proposed OCE technique was applied along with intensity and birefringence analyses and AFM measurements. The results indicate promising correlations between our OCE technique and polarization-sensitive OCT, AFM Young's modulus measurements, and histology. Our OCE method is applicable to any existing OCT system and was demonstrated to be computationally efficient.
New coding technique for computer generated holograms.
NASA Technical Reports Server (NTRS)
Haskell, R. E.; Culver, B. C.
1972-01-01
A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.
Digital image processing for information extraction.
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1973-01-01
The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.
Computational approaches to computational aero-acoustics
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The various techniques by which the goal of computational aeroacoustics (the calculation of a fluctuating fluid flow and prediction of the noise it generates) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part, are presented. The choice of approach is shown to be problem dependent.
Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Shafer, Mary Morello
Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…
NASA Technical Reports Server (NTRS)
Rockey, D. E.
1979-01-01
A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
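The coupled temperature and efficiency calculation described above is essentially a fixed-point iteration. The following minimal Python sketch illustrates that idea with made-up efficiency and heat-rejection models; all coefficients, function names, and the concentrated intensity are illustrative assumptions, not values from the report.

```python
# Minimal sketch (not the report's actual model): fixed-point iteration between
# cell temperature and efficiency for a concentrator-illuminated solar cell.
# All coefficients below are illustrative assumptions.

def cell_efficiency(temp_c, eta_ref=0.14, temp_ref_c=28.0, beta=0.0045):
    """Linear efficiency-temperature model: efficiency drops as the cell heats up."""
    return eta_ref * (1.0 - beta * (temp_c - temp_ref_c))

def cell_temperature(intensity_w_m2, efficiency, t_sink_c=20.0, h_w_m2k=25.0):
    """Energy balance: absorbed power not converted to electricity heats the cell."""
    absorbed = intensity_w_m2 * (1.0 - efficiency)
    return t_sink_c + absorbed / h_w_m2k

def solve_operating_point(intensity_w_m2, tol=1e-6, max_iter=100):
    temp = 50.0  # initial guess, deg C
    for _ in range(max_iter):
        eta = cell_efficiency(temp)
        new_temp = cell_temperature(intensity_w_m2, eta)
        if abs(new_temp - temp) < tol:
            break
        temp = new_temp
    power = intensity_w_m2 * cell_efficiency(temp)  # electrical output, W per m^2 of cell
    return temp, cell_efficiency(temp), power

if __name__ == "__main__":
    t, eta, p = solve_operating_point(3000.0)  # roughly 3 suns from the concentrator
    print(f"T = {t:.1f} C, efficiency = {eta:.3f}, output = {p:.0f} W/m^2")
```

In an array-level calculation of the kind described, this loop would be run per cell with the ray-traced intensity at that cell, and the resulting cell outputs combined with the single-diode circuit model.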
Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.
Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K
1995-07-01
The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
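Of the three techniques compared above, MIP and compositing volume rendering are easy to state concretely. The sketch below, under the simplifying assumption that viewing rays are aligned with one axis of the CT volume, contrasts them: MIP keeps a single voxel per ray, while the compositing loop weights every voxel by an opacity. The opacity values and array shapes are illustrative, not taken from the study.

```python
# Minimal sketch, assuming the viewing rays are aligned with the first axis of a
# CT volume stored as a NumPy array (depth, rows, cols). Not the clinical
# implementations compared in the article.
import numpy as np

def mip(volume):
    """Maximum-intensity projection: keep the brightest voxel along each ray."""
    return volume.max(axis=0)

def composite_volume_render(volume, opacity):
    """Simple front-to-back compositing: every voxel contributes, weighted by a
    per-voxel opacity (alpha), unlike MIP which keeps exactly one voxel per ray."""
    image = np.zeros(volume.shape[1:], dtype=float)
    remaining = np.ones_like(image)            # transmittance still available
    for slab, alpha in zip(volume, opacity):   # march front to back along the ray
        image += remaining * alpha * slab
        remaining *= (1.0 - alpha)
    return image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.random((64, 128, 128))
    alphas = np.full(vol.shape, 0.05)
    print(mip(vol).shape, composite_volume_render(vol, alphas).shape)
```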
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second- and higher-order statistics, which capture the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second order statistics on this range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
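A hedged illustration of the binning idea: the sketch below uses a standard dynamic program that places the bin boundaries so as to minimize the total within-bin squared error of the gray-level histogram. The cost function and the O(K N^2) formulation are generic assumptions and may differ from the authors' algorithm; the demo bins a synthetic 256-level histogram into 8 bins to keep the run time short.

```python
# Hedged sketch of nonlinear histogram binning by dynamic programming: choose
# bin boundaries that minimize the total within-bin squared error of the
# gray-level histogram. Generic 1-D optimal-partition DP, not necessarily the
# authors' exact cost function.
import numpy as np

def optimal_bins(counts, levels, n_bins):
    n = len(levels)
    c = np.asarray(counts, dtype=float)
    g = np.asarray(levels, dtype=float)
    # prefix sums allow O(1) within-bin SSE queries
    P0 = np.concatenate(([0.0], np.cumsum(c)))
    P1 = np.concatenate(([0.0], np.cumsum(c * g)))
    P2 = np.concatenate(([0.0], np.cumsum(c * g * g)))

    def sse(i, j):  # weighted squared error of levels i..j (inclusive)
        w = P0[j + 1] - P0[i]
        if w == 0:
            return 0.0
        s1 = P1[j + 1] - P1[i]
        s2 = P2[j + 1] - P2[i]
        return s2 - s1 * s1 / w

    INF = float("inf")
    dp = np.full((n_bins + 1, n), INF)
    cut = np.zeros((n_bins + 1, n), dtype=int)
    for j in range(n):
        dp[1, j] = sse(0, j)
    for k in range(2, n_bins + 1):
        for j in range(k - 1, n):
            for i in range(k - 1, j + 1):        # bin k covers levels i..j
                cand = dp[k - 1, i - 1] + sse(i, j)
                if cand < dp[k, j]:
                    dp[k, j], cut[k, j] = cand, i
    bounds, j = [], n - 1                        # backtrack the bin boundaries
    for k in range(n_bins, 1, -1):
        i = cut[k, j]
        bounds.append(levels[i])
        j = i - 1
    return sorted(bounds)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hist, edges = np.histogram(rng.normal(200, 60, 50000), bins=256, range=(-1024, 1024))
    centers = 0.5 * (edges[:-1] + edges[1:])
    print(optimal_bins(hist, centers, n_bins=8))
```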
The application of computational chemistry to lignin
Thomas Elder; Laura Berstis; Nele Sophie Zwirchmayr; Gregg T. Beckham; Michael F. Crowley
2017-01-01
Computational chemical methods have become an important technique in the examination of the structure and reactivity of lignin. The calculations can be based either on classical or quantum mechanics, with concomitant differences in computational intensity and size restrictions. The current paper will concentrate on results developed from the latter type of calculations...
The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials
NASA Technical Reports Server (NTRS)
Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.
2005-01-01
The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. This objective has been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that will compute stress intensity factors for anisotropic materials, encoded in both the C and Python programming languages, and made available a version of the FRANC3D program that incorporates this subroutine. Single-crystal superalloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials, and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.
Finite element techniques applied to cracks interacting with selected singularities
NASA Technical Reports Server (NTRS)
Conway, J. C.
1975-01-01
The finite-element method for computing the extensional stress-intensity factor for cracks approaching selected singularities of varied geometry is described. Stress-intensity factors are generated using both displacement and J-integral techniques, and numerical results are compared to those obtained experimentally in a photoelastic investigation. The selected singularities considered are a colinear crack, a circular penetration, and a notched circular penetration. Results indicate that singularities greatly influence the crack-tip stress-intensity factor as the crack approaches the singularity. In addition, the degree of influence can be regulated by varying the overall geometry of the singularity. Local changes in singularity geometry have little effect on the stress-intensity factor for the cases investigated.
Leibovici, Vera; Magora, Florella; Cohen, Sarale; Ingber, Arieh
2009-01-01
BACKGROUND: Virtual reality immersion (VRI), an advanced computer-generated technique, decreased subjective reports of pain in experimental and procedural medical therapies. Furthermore, VRI significantly reduced pain-related brain activity as measured by functional magnetic resonance imaging. Resemblance between anatomical and neuroendocrine pathways of pain and pruritus may prove VRI to be a suitable adjunct for basic and clinical studies of the complex aspects of pruritus. OBJECTIVES: To compare effects of VRI with audiovisual distraction (AVD) techniques for attenuation of pruritus in patients with atopic dermatitis and psoriasis vulgaris. METHODS: Twenty-four patients suffering from chronic pruritus – 16 due to atopic dermatitis and eight due to psoriasis vulgaris – were randomly assigned to play an interactive computer game using a special visor or a computer screen. Pruritus intensity was self-rated before, during and 10 min after exposure using a visual analogue scale ranging from 0 to 10. The interviewer rated observed scratching on a three-point scale during each distraction program. RESULTS: Student’s t tests were significant for reduction of pruritus intensity before and during VRI and AVD (P=0.0002 and P=0.01, respectively) and were significant only between ratings before and after VRI (P=0.017). Scratching was mostly absent or mild during both programs. CONCLUSIONS: VRI and AVD techniques demonstrated the ability to diminish itching sensations temporarily. Further studies on the immediate and late effects of interactive computer distraction techniques to interrupt itching episodes will open potential paths for future pruritus research. PMID:19714267
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
NASA Technical Reports Server (NTRS)
Flanagan, P. M.; Atherton, W. J.
1985-01-01
A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit and system software. A commercial desktop computer, a spectrum analyzer, and a two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot and acoustic intensity data is collected, processed, and stored.
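For reference, the two-microphone technique mentioned above is commonly implemented by taking the imaginary part of the cross-spectrum between the two closely spaced microphones. The sketch below shows that textbook formulation; the sampling rate, microphone spacing, calibration, and sign convention are assumptions rather than RAIMS specifics.

```python
# Hedged sketch of the two-microphone acoustic intensity technique: intensity is
# estimated from the imaginary part of the cross-spectrum of two closely spaced
# microphones. Sign convention and calibration details are simplified assumptions.
import numpy as np
from scipy.signal import csd

def acoustic_intensity_spectrum(p1, p2, fs, spacing, rho=1.21):
    """p1, p2: pressure time series [Pa]; spacing: microphone separation [m].
    Returns frequencies and the intensity component along the probe axis."""
    f, G12 = csd(p1, p2, fs=fs, nperseg=4096)
    f, G12 = f[1:], G12[1:]          # drop DC to avoid division by zero
    return f, -np.imag(G12) / (rho * 2.0 * np.pi * f * spacing)

if __name__ == "__main__":
    fs, dr, c = 51200, 0.012, 343.0
    t = np.arange(fs) / fs
    # plane wave travelling from mic 1 to mic 2 -> small time delay dr/c
    p1 = np.sin(2 * np.pi * 1000 * t)
    p2 = np.sin(2 * np.pi * 1000 * (t - dr / c))
    f, I = acoustic_intensity_spectrum(p1, p2, fs, dr)
    print(f"peak intensity near {f[np.argmax(np.abs(I))]:.0f} Hz")
```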
NASA Technical Reports Server (NTRS)
Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.; Gilbert, Percy
1989-01-01
A computer-controlled thermal-wave microscope was developed to investigate III-V compound semiconductor devices and materials. It is a nondestructive technique providing information on subsurface thermal features of solid samples. Furthermore, because this is a subsurface technique, three-dimensional imaging is also possible. The microscope uses the intensity-modulated electron beam of a modified scanning electron microscope to generate thermal waves in the sample. Acoustic waves generated by the thermal waves are received by a transducer and processed in a computer to form images displayed on the video display of the microscope or recorded on magnetic disk.
1989-08-01
thermal pulse loadings. The work couples a Green’s function integration technique for transient thermal stresses with the well-known influence ... function approach for calculating stress intensity factors. A total of seven of the most commonly used crack models were investigated in this study. A computer
Lung Cancer: Posttreatment Imaging: Radiation Therapy and Imaging Findings.
Benveniste, Marcelo F; Welsh, James; Viswanathan, Chitra; Shroff, Girish S; Betancourt Cuellar, Sonia L; Carter, Brett W; Marom, Edith M
2018-05-01
In this review, we discuss the different radiation delivery techniques available to treat non-small cell lung cancer, typical radiologic manifestations of conventional radiotherapy, and different patterns of lung injury and temporal evolution of the newer radiotherapy techniques. More sophisticated techniques include intensity-modulated radiotherapy, stereotactic body radiotherapy, proton therapy, and respiration-correlated computed tomography or 4-dimensional computed tomography for radiotherapy planning. Knowledge of the radiation treatment plan and technique, the completion date of radiotherapy, and the temporal evolution of radiation-induced lung injury is important to identify expected manifestations of radiation-induced lung injury and differentiate them from tumor recurrence or infection.
Information granules in image histogram analysis.
Wieclawek, Wojciech
2018-04-01
A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this concept in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series.
Impedance computations and beam-based measurements: A problem of discrepancy
NASA Astrophysics Data System (ADS)
Smaluk, Victor
2018-04-01
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.
Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner
2017-11-01
Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
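The ABC step itself can be illustrated independently of the closure problem. The toy sketch below is a plain ABC rejection sampler: draw candidate parameters from a prior, simulate a summary statistic, and keep candidates that land close to the observed statistic. It is meant only to show the mechanism the abstract refers to; the model, prior, and tolerance are made up and unrelated to the turbulence application.

```python
# Minimal ABC rejection sketch, illustrating approximate Bayesian computation in
# general terms (not the autonomic-closure formulation): sample candidate
# parameters, simulate, keep candidates whose summary statistic lies close to
# the observed one.
import numpy as np

def abc_rejection(observed_stat, simulate, prior_sample, n_draws=20000, tol=0.02):
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed_stat) < tol:
            accepted.append(theta)
    return np.array(accepted)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_theta = 0.7
    data = rng.normal(true_theta, 0.1, size=200)
    observed = data.mean()                                  # summary statistic
    post = abc_rejection(
        observed_stat=observed,
        simulate=lambda th: rng.normal(th, 0.1, size=200).mean(),
        prior_sample=lambda: rng.uniform(0.0, 2.0),
    )
    print(f"accepted {post.size} draws, posterior mean ~ {post.mean():.3f}")
```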
Kernel analysis in TeV gamma-ray selection
NASA Astrophysics Data System (ADS)
Moriarty, P.; Samuelson, F. W.
2000-06-01
We discuss the use of kernel analysis as a technique for selecting gamma-ray candidates in Atmospheric Cherenkov astronomy. The method is applied to observations of the Crab Nebula and Markarian 501 recorded with the Whipple 10 m Atmospheric Cherenkov imaging system, and the results are compared with the standard Supercuts analysis. Since kernel analysis is computationally intensive, we examine approaches to reducing the computational load. Extension of the technique to estimate the energy of the gamma-ray primary is considered.
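As a rough illustration of kernel-based selection, the sketch below builds kernel density estimates for gamma-ray-like and background-like training events in a two-parameter space and keeps events whose density ratio favors the gamma-ray class. The feature values, sample sizes, and cut are invented for the demo and do not reproduce the Whipple Supercuts or the paper's exact estimator.

```python
# Hedged sketch of kernel-based gamma/background selection: kernel density
# estimates of image-parameter distributions for each class, then a cut on the
# gamma-to-background density ratio. Feature names and thresholds are
# illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

def train_kernel_selector(gamma_features, background_features):
    return gaussian_kde(gamma_features.T), gaussian_kde(background_features.T)

def select_candidates(events, kde_gamma, kde_bkg, ratio_cut=1.0):
    ratio = kde_gamma(events.T) / np.maximum(kde_bkg(events.T), 1e-300)
    return ratio > ratio_cut

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # two toy Hillas-like parameters (e.g. width, length) per event
    gammas = rng.normal([0.10, 0.25], [0.02, 0.05], size=(2000, 2))
    hadrons = rng.normal([0.18, 0.40], [0.05, 0.10], size=(2000, 2))
    kg, kb = train_kernel_selector(gammas, hadrons)
    test = np.vstack([rng.normal([0.10, 0.25], [0.02, 0.05], size=(500, 2)),
                      rng.normal([0.18, 0.40], [0.05, 0.10], size=(500, 2))])
    kept = select_candidates(test, kg, kb)
    print(f"kept {kept[:500].mean():.0%} of gammas, {kept[500:].mean():.0%} of hadrons")
```

Evaluating a kernel density at every event is what makes the method computationally intensive, which is why the abstract discusses ways to reduce that load.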
NASA Astrophysics Data System (ADS)
Li, J.; Zhang, T.; Huang, Q.; Liu, Q.
2014-12-01
Today's climate datasets feature large volume, a high degree of spatiotemporal complexity, and fast evolution over time. As visualizing large-volume distributed climate datasets is computationally intensive, traditional desktop-based visualization applications fail to handle the computational load. Recently, scientists have developed remote visualization techniques to address the computational issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver visualization results to clients through the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform was built based on Paraview, which is one of the most popular open source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we have employed cloud computing techniques to support the deployment of the platform. In this platform, all climate datasets are regular grid data which are stored in NetCDF format. Three types of data access methods are supported in the platform: accessing remote datasets provided by OpenDAP servers, accessing datasets hosted on the web visualization server, and accessing local datasets. Regardless of the data access method, all visualization tasks are completed on the server side to reduce the workload of clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
Applications of Phase-Based Motion Processing
NASA Technical Reports Server (NTRS)
Branch, Nicholas A.; Stewart, Eric C.
2018-01-01
Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
Measuring and Estimating Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2013-01-01
Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the pixel intensity of image pixels. The IR technique involves recording of the IR video image data and analysis of the data using the normalized pixel intensity and temperature contrast analysis method for characterization of void-like flaws for depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements during data acquisition.
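One plausible reading of the normalized contrast computation is sketched below: subtract a pre-flash (cold) image, then compare the intensity rise at a suspected flaw pixel with that of a sound-material reference region, frame by frame. The exact definition in the report may differ; the region choices, cooling model, and flaw signature in the demo are assumptions.

```python
# Minimal sketch (assumed formulation, not necessarily the report's exact
# definition): normalized pixel-intensity contrast of a suspected flaw pixel
# relative to a sound-material reference region, computed frame by frame from
# the post-flash IR image sequence.
import numpy as np

def normalized_contrast(frames, flaw_rc, ref_slice, pre_flash):
    """frames: (time, rows, cols) pixel intensities; pre_flash: cold image.
    Subtracting the pre-flash image removes the ambient/reflection offset."""
    rise = frames - pre_flash                     # intensity rise above the cold level
    flaw = rise[:, flaw_rc[0], flaw_rc[1]]
    ref = rise[:, ref_slice[0], ref_slice[1]].mean(axis=(1, 2))
    return (flaw - ref) / np.maximum(ref, 1e-9)   # dimensionless contrast evolution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(1, 201)[:, None, None] * 0.02
    sound = 1000.0 / np.sqrt(t)                   # ~1/sqrt(t) surface cooling (assumed)
    frames = np.tile(sound, (1, 64, 64)) + rng.normal(0, 2, (200, 64, 64))
    frames[:, 30:34, 30:34] *= 1.10               # a shallow void cools more slowly
    c = normalized_contrast(frames, (32, 32), (np.s_[5:15], np.s_[5:15]),
                            pre_flash=np.zeros((64, 64)))
    print(f"peak contrast {c.max():.2f} at frame {c.argmax()}")
```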
Evaluation of normalization methods for cDNA microarray data by k-NN classification
Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J
2005-01-01
Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803
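The evaluation criterion used above translates directly into a few lines of code. The sketch below computes the LOOCV misclassification rate of a k-NN classifier on an already-normalized expression matrix; the study itself was done in R, so this scikit-learn version with synthetic data is only an illustration of the criterion, not a reproduction of the analysis.

```python
# Hedged sketch of the evaluation criterion: leave-one-out cross-validation
# error of a k-nearest-neighbour classifier on a normalized data matrix.
# The data below are made up.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def loocv_knn_error(X, y, k=3):
    """X: samples x genes (already normalized); y: class labels."""
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y,
                             cv=LeaveOneOut())
    return 1.0 - scores.mean()          # LOOCV misclassification rate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (20, 500)),
                   rng.normal(0.5, 1.0, (20, 500))])
    y = np.array([0] * 20 + [1] * 20)
    print(f"LOOCV error = {loocv_knn_error(X, y):.2f}")
```

In the study, this error rate is recomputed for each candidate normalization strategy, and strategies yielding consistently lower error are judged more effective at removing dye biases.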
Evaluation of normalization methods for cDNA microarray data by k-NN classification.
Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J
2005-07-26
Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics.
High-resolution computer-aided moire
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1991-12-01
This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
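The Fourier-transform extraction step can be illustrated in one dimension. In the sketch below, a fringe signal with a known carrier is transformed, a band around the positive carrier peak is kept, and the displacement is read from the unwrapped phase of the inverse transform. The paper's method is two-dimensional and works on moire grid images, so the pitch, carrier frequency, and band width here are assumptions.

```python
# Hedged 1-D illustration of Fourier-transform fringe analysis: isolate the
# carrier peak in the Fourier domain, inverse-transform, and read displacement
# from the recovered phase. Parameter choices are assumptions.
import numpy as np

def displacement_from_fringes(signal, carrier_freq, pitch):
    n = len(signal)
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n)
    # keep a band around the +carrier peak only (suppress DC and the -carrier)
    mask = np.abs(freqs - carrier_freq) < 0.5 * carrier_freq
    analytic = np.fft.ifft(spectrum * mask)
    total_phase = np.unwrap(np.angle(analytic))
    phi = total_phase - 2 * np.pi * carrier_freq * np.arange(n)
    phi -= phi.mean()                     # remove the arbitrary constant offset
    return phi * pitch / (2 * np.pi)      # displacement, same units as the pitch

if __name__ == "__main__":
    n, pitch = 2048, 20.0                 # grid pitch in micrometres (assumed)
    f0 = 102.0 / n                        # carrier: 102 fringes across the field
    x = np.arange(n)
    u_true = 2.0 * np.sin(2 * np.pi * x / n)    # imposed displacement, micrometres
    fringes = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x + 2 * np.pi * u_true / pitch)
    u = displacement_from_fringes(fringes, f0, pitch)
    print(f"rms recovery error = {np.sqrt(np.mean((u - u_true)**2)):.3f} um")
```

Strains would then follow by differentiating the recovered displacement field, which is how crack-tip stress fields are obtained in the application described.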
Impedance computations and beam-based measurements: A problem of discrepancy
Smaluk, Victor
2018-04-21
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.
OCT angiography by absolute intensity difference applied to normal and diseased human retinas
Ruminski, Daniel; Sikorski, Bartosz L.; Bukowska, Danuta; Szkulmowski, Maciej; Krawiec, Krzysztof; Malukiewicz, Grazyna; Bieganowski, Lech; Wojtkowski, Maciej
2015-01-01
We compare four optical coherence tomography techniques for noninvasive visualization of the microcapillary network in the human retina and murine cortex. We perform phantom studies to investigate the contrast-to-noise ratio for angiographic images obtained with each of the algorithms. We show that the computationally simplest absolute intensity difference angiographic OCT algorithm, which is based on only two cross-sectional intensity images, may be successfully used in clinical studies of healthy eyes and eyes with diabetic maculopathy and branch retinal vein occlusion. PMID:26309740
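The absolute-intensity-difference algorithm referred to above is simple enough to sketch directly: two co-registered intensity B-scans are subtracted pixel by pixel, and flow shows up as decorrelated speckle with a large difference. The normalization by local mean intensity and the synthetic vessel in the demo are assumptions, not details taken from the paper.

```python
# Minimal sketch of absolute-intensity-difference OCT angiography: repeated (or
# adjacent) intensity B-scans are compared pixel by pixel; decorrelation caused
# by flow shows up as a large absolute difference. Normalization is an assumption.
import numpy as np

def intensity_difference_angiogram(bscan_a, bscan_b, eps=1e-6):
    """bscan_a, bscan_b: two co-registered intensity B-scans (depth x lateral)."""
    diff = np.abs(bscan_a.astype(float) - bscan_b.astype(float))
    return diff / (0.5 * (bscan_a + bscan_b) + eps)   # normalize by local intensity

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    static = rng.gamma(4.0, 25.0, size=(512, 400))           # static tissue speckle
    scan1, scan2 = static.copy(), static.copy()
    flow = (slice(200, 215), slice(150, 260))                 # a vessel cross-section
    scan1[flow] = rng.gamma(4.0, 25.0, size=(15, 110))        # decorrelated speckle
    scan2[flow] = rng.gamma(4.0, 25.0, size=(15, 110))
    angio = intensity_difference_angiogram(scan1, scan2)
    print(f"mean signal: vessel {angio[flow].mean():.2f}, background {angio[0:50].mean():.3f}")
```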
Fan noise prediction assessment
NASA Technical Reports Server (NTRS)
Bent, Paul H.
1995-01-01
This report is an evaluation of two techniques for predicting the fan noise radiation from engine nacelles. The first is a relatively computationally intensive finite-element technique. The code is named ARC, an abbreviation of Acoustic Radiation Code, and was developed by Eversman. This is actually a suite of software that first generates a grid around the nacelle, then solves for the potential flowfield, and finally solves the acoustic radiation problem. The second approach is an analytical technique requiring minimal computational effort. This is termed the cutoff ratio technique and was developed by Rice. Details of the duct geometry, such as the hub-to-tip ratio and Mach number of the flow in the duct, and the modal content of the duct noise are required for proper prediction.
Digital signal conditioning for flight test instrumentation
NASA Technical Reports Server (NTRS)
Bever, Glenn A.
1991-01-01
An introduction to digital measurement processes on aircraft is provided. Flight test instrumentation systems are rapidly evolving from analog-intensive to digital intensive systems, including the use of onboard digital computers. The topics include measurements that are digital in origin, as well as sampling, encoding, transmitting, and storing data. Particular emphasis is placed on modern avionic data bus architectures and what to be aware of when extracting data from them. Examples of data extraction techniques are given. Tradeoffs between digital logic families, trends in digital development, and design testing techniques are discussed. An introduction to digital filtering is also covered.
NASA Technical Reports Server (NTRS)
Dahlback, Arne; Stamnes, Knut
1991-01-01
Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles, a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution, based on a perturbation technique combined with the discrete ordinate method, is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane parallel approximation overestimates the mean intensity.
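For readers unfamiliar with the pseudospherical idea, the following hedged math sketch (notation assumed, not copied from the paper) shows the usual way it is written: the plane-parallel transfer equation is kept, but the direct-beam attenuation in the single-scattering source term is evaluated along the slant path through spherical shells.

```latex
% Hedged sketch of the pseudo-spherical approximation (assumed notation).
% The plane-parallel equation of transfer is retained,
\[
  \mu \frac{dI(\tau,\mu)}{d\tau} = I(\tau,\mu) - S(\tau,\mu),
\]
% but the single-scattering source term attenuates the direct solar beam along
% the true slant path through spherical shells rather than with the
% plane-parallel factor exp(-tau/mu_0):
\[
  S(\tau,\mu) = \frac{\omega(\tau)}{4\pi}\, p(\mu,-\mu_0)\, F_0\,
  e^{-\tau_{\mathrm{slant}}(z,\theta_0)}
  + \frac{\omega(\tau)}{2}\int_{-1}^{1} p(\mu,\mu')\, I(\tau,\mu')\, d\mu',
  \qquad
  \tau_{\mathrm{slant}}(z,\theta_0) = \int_{\mathrm{sun\ path}} k\, ds .
\]
% For solar zenith angles well below 90 deg, tau_slant ~ tau(z)/cos(theta_0) and
% the usual plane-parallel attenuation is recovered; near and beyond 90 deg the
% spherical-shell path keeps the direct-beam term finite, which is why only the
% direct beam needs spherical geometry in this approximation.
```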
NASA Astrophysics Data System (ADS)
Lamberti, Fabrizio; Sanna, Andrea; Paravati, Gianluca; Belluccini, Luca
2014-02-01
Tracking pedestrian targets in forward-looking infrared video sequences is a crucial component of a growing number of applications. At the same time, it is particularly challenging, since image resolution and signal-to-noise ratio are generally very low, while the nonrigidity of the human body produces highly variable target shapes. Moreover, motion can be quite chaotic with frequent target-to-target and target-to-scene occlusions. Hence, the trend is to design ever more sophisticated techniques, able to ensure rather accurate tracking results at the cost of a generally higher complexity. However, many of such techniques might not be suitable for real-time tracking in limited-resource environments. This work presents a technique that extends an extremely computationally efficient tracking method based on target intensity variation and template matching originally designed for targets with a marked and stable hot spot by adapting it to deal with much more complex thermal signatures and by removing the native dependency on configuration choices. Experimental tests demonstrated that, by working on multiple hot spots, the designed technique is able to achieve the robustness of other common approaches by limiting drifts and preserving the low-computational footprint of the reference method.
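The template-matching core of such trackers can be sketched compactly. Below, the previous-frame target template is slid over a small search window and the position with the minimum sum of squared differences is taken as the new location; the multiple-hot-spot handling and drift suppression of the actual technique are not reproduced, and the frame sizes, signature amplitude, and search radius are invented.

```python
# Hedged sketch of intensity-based template matching: slide the previous-frame
# template over a search window and keep the position with the smallest sum of
# squared differences. Not the full multi-hot-spot tracker described above.
import numpy as np

def match_template_ssd(frame, template, center, search_radius):
    th, tw = template.shape
    cy, cx = center
    best, best_pos = np.inf, center
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = cy + dy, cx + dx
            patch = frame[y:y + th, x:x + tw]
            if patch.shape != template.shape:
                continue                      # skip positions falling off the frame
            ssd = np.sum((patch - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    frame0 = rng.normal(30, 5, (240, 320))
    frame0[100:116, 150:162] += 60.0                 # warm pedestrian signature
    template = frame0[100:116, 150:162].copy()
    frame1 = rng.normal(30, 5, (240, 320))
    frame1[103:119, 154:166] += 60.0                 # target moved by (+3, +4)
    print(match_template_ssd(frame1, template, center=(100, 150), search_radius=8))
```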
NASA Technical Reports Server (NTRS)
Roberti, Dino; Ludwig, Reinhold; Looft, Fred J.
1988-01-01
A 3-D computer model of a piston radiator with lenses for focusing and defocusing is presented. To achieve high-resolution imaging, the frequency of the transmitted and received ultrasound must be as high as 10 MHz. Current ultrasonic transducers produce an extremely narrow beam at these high frequencies and thus are not appropriate for imaging schemes such as synthetic-aperture focus techniques (SAFT). Consequently, a numerical analysis program has been developed to determine field intensity patterns that are radiated from ultrasonic transducers with lenses. Lens shapes are described and the field intensities are numerically predicted and compared with experimental results.
Modern Computational Techniques for the HMMER Sequence Analysis
2013-01-01
This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show the detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics society. The characteristics of the sequence analysis, such as data and compute-intensive natures, make it very attractive to optimize and parallelize by using both traditional software approach and innovated hardware acceleration technologies. PMID:25937944
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
Reconstruction of SAXS Profiles from Protein Structures
Putnam, Daniel K.; Lowe, Edward W.
2013-01-01
Small angle X-ray scattering (SAXS) is used for low resolution structural characterization of proteins often in combination with other experimental techniques. After briefly reviewing the theory of SAXS we discuss computational methods based on 1) the Debye equation and 2) Spherical Harmonics to compute intensity profiles from a particular macromolecular structure. Further, we review how these formulas are parameterized for solvent density and hydration shell adjustment. Finally we introduce our solution to compute SAXS profiles utilizing GPU acceleration. PMID:24688746
Custom blending of lamp phosphors
NASA Technical Reports Server (NTRS)
Klemm, R. E.
1978-01-01
The spectral output of fluorescent lamps can be precisely adjusted by using computer-assisted analysis to custom blend lamp phosphors. With this technique, the spectrum of the main bank of lamps is measured and stored in computer memory along with the emission characteristics of commonly available phosphors. The computer then calculates the ratio of green and blue intensities for each phosphor according to the manufacturer's specifications and plots them as coordinates on a graph. The same ratios are calculated for the measured spectrum. Once the proper mix is determined, it is applied as a coating to the fluorescent tubing.
Simultaneous mapping of the unsteady flow fields by Particle Displacement Velocimetry (PDV)
NASA Technical Reports Server (NTRS)
Huang, Thomas T.; Fry, David J.; Liu, Han-Lieh; Katz, Joseph; Fu, Thomas C.
1992-01-01
Current experimental and computational techniques must be improved in order to advance the capability to predict the longitudinal vortical flows shed by underwater vehicles. The generation, development, and breakdown mechanisms of the shed vortices at high Reynolds numbers are not fully understood. The ability to measure hull separated vortices associated with vehicle maneuvering does not exist at present. The existing point-by-point measurement techniques can only capture approximately the large 'mean' eddies but fail to capture the dynamics of small vortices during the initial stage of generation. A new technique, which offers a previously unavailable capability to measure the unsteady cross-flow distribution in the plane of the laser light sheet, is called Particle Displacement Velocimetry (PDV). PDV consists of illuminating a thin section of the flowfield with a pulsed laser. The water is seeded with microscopic, neutrally buoyant particles containing imbedded fluorescing dye which responds with intense spontaneous fluorescence within the illuminated section. The seeded particles in the vortical flow structure shed by the underwater vehicle are illuminated by the pulsed laser and the corresponding particle traces are recorded in a single photographic frame. Two distinct approaches were utilized for determining the velocity distribution from the particle traces. The first method is based on matching the traces of the same particle and measuring the distance between them. The direction of the flow can be identified by keeping one of the pulses longer than the other. The second method is based on selecting a small window within the image and finding the mean shift of all the particles within that region. The computation of the auto-correlation of the intensity distribution within the selected sample window is used to determine the mean displacement of particles. The direction of the flow is identified by varying the intensity of the laser light between pulses. Considerable computational resources are required to compute the auto-correlation of the intensity distribution. Parallel processing will be employed to speed up the data reduction. A few examples of measured unsteady vortical flow structures shed by the underwater vehicles will be presented.
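The correlation-based second method lends itself to a compact sketch. For a double-exposed interrogation window, the autocorrelation of the intensity field has a central self-correlation peak and two symmetric side peaks offset by plus or minus the mean particle displacement; the code below finds that side peak with FFTs. Window size, particle count, and the exclusion radius around the central peak are assumptions, and the sign ambiguity noted in the abstract remains.

```python
# Hedged sketch of the autocorrelation approach for a double-exposed PIV window:
# the side peak of the autocorrelation sits at +/- the mean particle displacement.
import numpy as np

def mean_displacement_from_autocorr(window, exclusion=3):
    f = np.fft.fft2(window - window.mean())
    acorr = np.fft.fftshift(np.fft.ifft2(np.abs(f) ** 2).real)
    cy, cx = np.array(acorr.shape) // 2
    # mask out the central self-correlation peak, then take the strongest side peak
    acorr[cy - exclusion:cy + exclusion + 1, cx - exclusion:cx + exclusion + 1] = -np.inf
    py, px = np.unravel_index(np.argmax(acorr), acorr.shape)
    return py - cy, px - cx          # sign ambiguity is inherent to autocorrelation

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    win = np.zeros((64, 64))
    ys, xs = rng.integers(8, 48, 40), rng.integers(8, 48, 40)   # first exposure
    dy, dx = 6, 3                                                # true displacement
    for y, x in zip(ys, xs):
        win[y, x] += 1.0
        win[y + dy, x + dx] += 1.0                               # second exposure
    print(mean_displacement_from_autocorr(win))                  # (+/-6, +/-3)
```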
An image morphing technique based on optimal mass preserving mapping.
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2007-06-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.
An Image Morphing Technique Based on Optimal Mass Preserving Mapping
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2013-01-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
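The compressed-sensing ingredient can be shown in miniature, detached from the generating-function machinery: a long vector whose probability mass sits on a few entries can be recovered from far fewer random linear measurements than its length by L1-regularized regression. The sketch below is only that generic demonstration; the dimensions, measurement matrix, and Lasso penalty are arbitrary choices, not the authors' construction.

```python
# Toy illustration of the compressed-sensing idea: recover a sparse probability
# vector from m << n random linear measurements via L1-regularized regression.
# Generic sketch, not the paper's generating-function framework.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, k, m = 500, 8, 120                                    # length, sparsity, measurements
support = rng.choice(n, size=k, replace=False)
weights = rng.uniform(0.5, 1.0, size=k)
p = np.zeros(n)
p[support] = weights / weights.sum()                     # sparse probability mass
A = rng.normal(size=(m, n)) / np.sqrt(m)                 # random measurement matrix
y = A @ p                                                # m << n linear measurements
fit = Lasso(alpha=1e-5, fit_intercept=False, positive=True, max_iter=100000).fit(A, y)
recovered = np.flatnonzero(fit.coef_ > 1e-2)
print("support recovered:", set(recovered) == set(support))
print("max abs error:", float(np.abs(fit.coef_ - p).max()))
```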
Xu, Jason; Minin, Vladimir N.
2016-01-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377
Noguchi, Teruo; Tanaka, Atsushi; Kawasaki, Tomohiro; Goto, Yoichi; Morita, Yoshiaki; Asaumi, Yasuhide; Nakao, Kazuhiro; Fujiwara, Reiko; Nishimura, Kunihiro; Miyamoto, Yoshihiro; Ishihara, Masaharu; Ogawa, Hisao; Koga, Nobuhiko; Narula, Jagat; Yasuda, Satoshi
2015-07-21
Coronary high-intensity plaques detected by noncontrast T1-weighted imaging may represent plaque instability. High-intensity plaques can be quantitatively assessed by a plaque-to-myocardium signal-intensity ratio (PMR). This pilot, hypothesis-generating study sought to investigate whether intensive statin therapy would lower PMR. Prospective serial noncontrast T1-weighted magnetic resonance imaging and computed tomography angiography were performed in 48 patients with coronary artery disease at baseline and after 12 months of intensive pitavastatin treatment with a target low-density lipoprotein cholesterol level <80 mg/dl. The control group consisted of coronary artery disease patients not treated with statins that were matched by propensity scoring (n = 48). The primary endpoint was the 12-month change in PMR. Changes in computed tomography angiography parameters and high-sensitivity C-reactive protein levels were analyzed. In the statin group, 12 months of statin therapy significantly improved low-density lipoprotein cholesterol levels (125 to 70 mg/dl; p < 0.001), PMR (1.38 to 1.11, an 18.9% reduction; p < 0.001), low-attenuation plaque volume, and the percentage of total atheroma volume on computed tomography. In the control group, the PMR increased significantly (from 1.22 to 1.49, a 19.2% increase; p < 0.001). Changes in PMR were correlated with changes in low-density lipoprotein cholesterol (r = 0.533; p < 0.001), high-sensitivity C-reactive protein (r = 0.347; p < 0.001), percentage of atheroma volume (r = 0.477; p < 0.001), and percentage of low-attenuation plaque volume (r = 0.416; p < 0.001). Statin treatment significantly reduced the PMR of high-intensity plaques. Noncontrast T1-weighted magnetic resonance imaging could become a useful technique for repeated quantitative assessment of plaque composition. (Attempts at Plaque Vulnerability Quantification with Magnetic Resonance Imaging Using Noncontrast T1-weighted Technique [AQUAMARINE]; UMIN000003567).
Reducing software mass through behavior control. [of planetary roving robots
NASA Technical Reports Server (NTRS)
Miller, David P.
1992-01-01
Attention is given to the tradeoff between communication and computation as regards a planetary rover (both these subsystems are very power-intensive, and both can be the major driver of the rover's power subsystem, and therefore of the minimum mass and size of the rover). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, collectively called behavior control, take an entirely different tack, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques, while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.
Modeling 3-D objects with planar surfaces for prediction of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Koch, M. B.; Beck, F. B.; Cockrell, C. R.
1992-01-01
Electromagnetic scattering analysis of objects at resonance is difficult because low-frequency techniques are slow and computationally intensive, and high-frequency techniques may not be reliable. A new technique for predicting the electromagnetic backscatter from electrically conducting objects at resonance is studied. This technique is based on modeling three-dimensional objects as a combination of flat plates, where some of the plates block the scattering from others. A cube is analyzed as a simple example. The preliminary results compare well with the Geometrical Theory of Diffraction and with measured data.
NASA Astrophysics Data System (ADS)
Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander
2017-03-01
An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers can allow for the concentration of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended and there can be effects such as mode hopping and thermal drift that can cause time-varying intensity fluctuations. The causes of these effects can be from the surrounding environment, where either an unstable current source is used or the temperature of the surrounding environment is not temporally stable. This intensity fluctuation can cause bias and error in typical organic molecule concentration estimation techniques. In a low-resource setting where cost must be limited and where environmental factors, like unregulated power supplies and temperature, cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations from electrical and thermal instabilities without the use of additional hardware. This method will allow for consistent intensities across all polarization measurements for accurate estimates of organic molecule concentrations.
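One simple form of Bayesian state estimation for this task is a scalar Kalman filter that treats the true laser intensity as a slowly drifting state and each reading as a noisy measurement. The sketch below shows that baseline; the process and measurement noise levels, the drift shape, and the idea of dividing subsequent polarization readings by the filtered intensity are assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of Bayesian state estimation for laser-intensity drift: a scalar
# Kalman filter tracking a slowly varying true intensity. Noise levels below are
# assumed, not taken from the paper.
import numpy as np

def kalman_intensity(measurements, process_var=1e-6, meas_var=1e-3):
    est, var = measurements[0], 1.0           # initial state and uncertainty
    out = []
    for z in measurements:
        var += process_var                    # predict: intensity drifts slowly
        gain = var / (var + meas_var)         # update with the new reading
        est += gain * (z - est)
        var *= (1.0 - gain)
        out.append(est)
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n = 2000
    true = 1.0 + 0.02 * np.sin(np.linspace(0, 3, n))      # slow thermal drift
    raw = true + rng.normal(0, 0.03, n)                    # mode-hop-like jitter
    smooth = kalman_intensity(raw)
    print(f"raw std {raw.std():.4f} -> residual std {(smooth - true).std():.4f}")
```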
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Summary Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
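As a point of comparison with the robust incremental scheme described above, the sketch below fits the usual decreasing parametric model (an exponential, by assumption) to the per-section mean intensities with a robust loss and rescales each section. It illustrates the conventional section-wise correction the paper improves upon, not the authors' pixel-wise robust estimator.

```python
import numpy as np
from scipy.optimize import least_squares

def correct_attenuation(stack):
    """Fit I(z) = I0 * exp(-a z) robustly to the per-section mean intensities
    and rescale each section; the soft_l1 loss down-weights outlying sections."""
    z = np.arange(stack.shape[0], dtype=float)
    means = stack.reshape(stack.shape[0], -1).mean(axis=1)

    def residuals(p):
        i0, a = p
        return i0 * np.exp(-a * z) - means

    fit = least_squares(residuals, x0=[means[0], 0.01], loss="soft_l1")
    i0, a = fit.x
    factors = i0 / (i0 * np.exp(-a * z))      # per-section correction factors
    return stack * factors[:, None, None]

# usage: a synthetic 3-D stack (sections, rows, cols) with exponential fading
rng = np.random.default_rng(0)
stack = rng.uniform(50, 100, (40, 64, 64)) * np.exp(-0.03 * np.arange(40))[:, None, None]
restored = correct_attenuation(stack)
```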
A vectorized Lanczos eigensolver for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1990-01-01
The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
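For orientation, a plain dense Lanczos iteration is sketched below; the repeated matrix-vector products and vector updates in the loop are the computationally intensive steps that the paper vectorizes on supercomputers. The test matrix and iteration count are illustrative, and no reorthogonalization is included.

```python
import numpy as np

def lanczos(A, m, rng=np.random.default_rng(0)):
    """Plain Lanczos tridiagonalization (no reorthogonalization); the
    dominant cost is the matrix-vector product inside the loop."""
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    alphas, betas = [], []
    q_prev = np.zeros(n)
    beta = 0.0
    for _ in range(m):
        w = A @ q - beta * q_prev          # dominant cost: mat-vec product
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)           # Ritz values approximate eigenvalues

A = np.random.rand(200, 200)
A = A + A.T                                # symmetric test matrix
print(lanczos(A, 30)[-3:])                 # largest Ritz values
```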
Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars
2014-12-06
Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups in either PPTs or pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.
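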
The Human Brain Project and neuromorphic computing
Calimera, Andrea; Macii, Enrico; Poncino, Massimo
Summary Understanding how the brain manages billions of processing units connected via kilometers of fibers and trillions of synapses, while consuming a few tens of Watts could provide the key to a completely new category of hardware (neuromorphic computing systems). In order to achieve this, a paradigm shift for computing as a whole is needed, which will see it moving away from current “bit precise” computing models and towards new techniques that exploit the stochastic behavior of simple, reliable, very fast, low-power computing devices embedded in intensely recursive architectures. In this paper we summarize how these objectives will be pursued in the Human Brain Project. PMID:24139655
A fast multi-resolution approach to tomographic PIV
NASA Astrophysics Data System (ADS)
Discetti, Stefano; Astarita, Tommaso
2012-03-01
Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional anemometric non-intrusive measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed into the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handle the problem of limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a relevant reduction in the memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line of sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performances in terms of accuracy.
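A toy, dense-matrix version of the basic MART update is sketched below to make the multiplicative correction explicit; real Tomo-PIV reconstructions use sparse weighting matrices, camera projection models, and the multi-resolution first guess proposed in the paper, all of which are omitted here.

```python
import numpy as np

def mart(W, I, n_vox, n_iter=5, mu=1.0):
    """Multiplicative algebraic reconstruction technique (MART): W is the
    (n_rays x n_vox) weight matrix, I the recorded pixel intensities."""
    E = np.ones(n_vox)                              # uniform first guess
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ E                         # projection of current estimate
            if proj > 0:
                E *= (I[i] / proj) ** (mu * W[i])   # multiplicative update
    return E

# usage: reconstruct a random 1-D "intensity field" from exact projections
rng = np.random.default_rng(1)
E_true = rng.uniform(0, 1, 50)
W = rng.uniform(0, 1, (200, 50))
I = W @ E_true
E_rec = mart(W, I, n_vox=50)
print(np.corrcoef(E_true, E_rec)[0, 1])
```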
Sharma, Avnish Kumar; Patidar, Rajesh Kumar; Daiya, Deepak; Joshi, Anandverdhan; Naik, Prasad Anant; Gupta, Parshotam Dass
2013-04-20
In this paper, a new method for alignment of the pinhole of a spatial filter (SF) has been proposed and demonstrated experimentally. The effect of the misalignment of the pinhole on the laser beam profiles has been calculated for circular and elliptical Gaussian laser beams. Theoretical computation has been carried out to illustrate the effect of an intensity mask, placed before the focusing lens of the SF, on the spatial beam profile after the pinhole of the SF. It is shown, both theoretically and experimentally, that a simple intensity mask, consisting of a black dot, can be used to visually align the pinhole with a high accuracy of 5% of the pinhole diameter. The accuracy may be further improved using a computer-based image processing algorithm. Finally, the proposed technique has been demonstrated to align a vacuum SF of a compact 40 J Nd:phosphate glass laser system.
NASA Technical Reports Server (NTRS)
Curlis, J. D.; Frost, V. S.; Dellwig, L. F.
1986-01-01
Computer-enhancement techniques applied to the SIR-A data from the Lisbon Valley area in the northern portion of the Paradox basin increased the value of the imagery in the development of geologically useful maps. The enhancement techniques include filtering to remove image speckle from the SIR-A data and combining these data with Landsat multispectral scanner data. A method well-suited for the combination of the data sets utilized a three-dimensional domain defined by intensity-hue-saturation (IHS) coordinates. Such a system allows the Landsat data to modulate image intensity, while the SIR-A data control image hue and saturation. Whereas the addition of Landsat data to the SIR-A image by means of a pixel-by-pixel ratio accentuated textural variations within the image, the addition of color to the combined images enabled isolation of areas in which gray-tone contrast was minimal. This isolation resulted in a more precise definition of stratigraphic units.
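The fusion idea can be illustrated with a short sketch: the Landsat band drives the intensity channel while the SIR-A radar data drive hue and saturation. HSV is used below as a convenient stand-in for the IHS coordinates, and the normalization and saturation scaling are arbitrary choices, not those of the original study.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def ihs_combine(landsat_band, sir_a):
    """Combine co-registered images in an intensity-hue-saturation style space:
    the Landsat band modulates intensity, the SIR-A data control hue/saturation.
    Inputs are assumed to be 2-D arrays registered to a common grid."""
    def norm(a):
        return (a - a.min()) / (a.max() - a.min() + 1e-12)

    h = norm(sir_a)                 # hue from radar backscatter
    s = 0.3 + 0.7 * norm(sir_a)     # saturation also modulated by radar
    v = norm(landsat_band)          # intensity from Landsat
    return hsv_to_rgb(np.dstack([h, s, v]))

# usage with synthetic data standing in for the registered scenes
landsat = np.random.rand(128, 128)
sir_a = np.random.rand(128, 128)
rgb = ihs_combine(landsat, sir_a)
```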
Space-filling designs for computer experiments: A review
Joseph, V. Roshan
2016-01-29
Improving the quality of a product/process using a computer simulator is a much less expensive option than the real physical testing. However, simulation using computationally intensive computer models can be time consuming and therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given for a recently developed space-filling design called maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
NASA Technical Reports Server (NTRS)
Claassen, J. P.; Fung, A. K.
1977-01-01
The radar equation for incoherent scenes is derived and scattering coefficients are introduced in a systematic way to account for the complete interaction between the incident wave and the random scene. Intensity (power) and correlation techniques similar to that for coherent targets are proposed to measure all the scattering parameters. The sensitivity of the intensity technique to various practical realizations of the antenna polarization requirements is evaluated by means of computer simulated measurements, conducted with a scattering characteristic similar to that of the sea. It was shown that for scenes satisfying reciprocity one must admit three new cross-correlation scattering coefficients in addition to the commonly measured autocorrelation coefficients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De La Pierre, Marco; Maschio, Lorenzo; Orlando, Roberto
Powder and single crystal Raman spectra of the two most common phases of calcium carbonate are calculated with ab initio techniques (using a “hybrid” functional and a Gaussian-type basis set) and measured both at 80 K and room temperature. Frequencies of the Raman modes are in very good agreement between calculations and experiments: the mean absolute deviation at 80 K is 4 and 8 cm⁻¹ for calcite and aragonite, respectively. As regards intensities, the agreement is in general good, although the computed values overestimate the measured ones in many cases. The combined analysis makes it possible to identify almost all the fundamental experimental Raman peaks of the two compounds, with the exception of either modes with zero computed intensity or modes overlapping with more intense peaks. Additional peaks have been identified in both calcite and aragonite, which have been assigned to ¹⁸O satellite modes or overtones. The agreement between the computed and measured spectra is quite satisfactory; in particular, the simulation makes it possible to clearly distinguish between calcite and aragonite in the case of powder spectra, and among different polarization directions of each compound in the case of single crystal spectra.
NASA Astrophysics Data System (ADS)
Polyansky, Oleg L.; Zobov, Nikolai F.; Mizus, Irina I.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan
2018-05-01
Monitoring ozone concentrations in the Earth's atmosphere using spectroscopic methods is a major activity undertaken both from the ground and from space. However, there are long-running issues of consistency between measurements made at infrared (IR) and ultraviolet (UV) wavelengths. In addition, key O3 IR bands at 10 μm, 5 μm and 3 μm also yield results which differ by a few percent when used for retrievals. These problems stem from the underlying laboratory measurements of the line intensities. Here we use quantum chemical techniques, first principles electronic structure and variational nuclear-motion calculations, to address this problem. A new high-accuracy ab initio dipole moment surface (DMS) is computed. Several spectroscopically-determined potential energy surfaces (PESs) are constructed by fitting to empirical energy levels in the region below 7000 cm-1, starting from an ab initio PES. Nuclear motion calculations using these new surfaces allow the unambiguous determination of the intensities of the 10 μm band transitions, and the computation of the intensities of the 10 μm and 5 μm bands within their experimental error. A decrease in intensities within the 3 μm band is predicted, which appears consistent with atmospheric retrievals. The PES and DMS form a suitable starting point both for the computation of comprehensive ozone line lists and for future calculations of electronic transition intensities.
NASA Technical Reports Server (NTRS)
Brown, G. S.; Curry, W. J.
1977-01-01
The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal to noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data and on the waveform attitude-induced altitude bias is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive-mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.
A Cyber-ITS framework for massive traffic data analysis using cyber infrastructure.
Xia, Yingjie; Hu, Jia; Fontaine, Michael D
2013-01-01
Traffic data is commonly collected from widely deployed sensors in urban areas. This brings up a new research topic, data-driven intelligent transportation systems (ITSs), which means to integrate heterogeneous traffic data from different kinds of sensors and apply it for ITS applications. This research, taking into consideration the significant increase in the amount of traffic data and the complexity of data analysis, focuses mainly on the challenge of solving data-intensive and computation-intensive problems. As a solution to the problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), by nature parallel-computing hardware and software systems, in the context of ITS. The techniques of the framework include data representation, domain decomposition, resource allocation, and parallel processing. All these techniques are based on data-driven and application-oriented models and are organized as a component-and-workflow-based model in order to achieve technical interoperability and data reusability. A case study of the Cyber-ITS framework is presented later based on a traffic state estimation application that uses the fusion of massive Sydney Coordinated Adaptive Traffic System (SCATS) data and GPS data. The results prove that the Cyber-ITS-based implementation can achieve a high accuracy rate of traffic state estimation and provide a significant computational speedup for the data fusion by parallel computing.
Betowski, Don; Bevington, Charles; Allison, Thomas C
2016-01-19
Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry is useful where no measured IR spectrum is available. This study assesses the reliability of RE values calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
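A Pinnock-type estimate amounts to weighting each computed band intensity by the instantaneous radiative forcing per unit absorption at that wavenumber and summing. The sketch below shows that bookkeeping only: the forcing curve, band data, and scale factors are placeholders, not the published Pinnock et al. values or the study's actual models.

```python
import numpy as np

def radiative_efficiency(freqs_cm1, intensities_km_mol, pinnock_nu, pinnock_f,
                         freq_scale=0.97, int_scale=1.0):
    """Pinnock-type estimate: each computed vibrational band contributes its
    (scaled) IR intensity times a tabulated forcing-per-unit-absorption value
    interpolated at the band center.  All numerical inputs here are placeholders."""
    nu = freq_scale * np.asarray(freqs_cm1)
    a = int_scale * np.asarray(intensities_km_mol)
    f = np.interp(nu, pinnock_nu, pinnock_f)     # forcing per unit absorption
    return float(np.sum(a * f))                  # arbitrary units in this sketch

# usage with made-up band data and a made-up forcing curve
nu_grid = np.linspace(0, 2500, 26)
f_grid = np.exp(-((nu_grid - 1000.0) / 500.0) ** 2)   # placeholder shape only
bands = [1100.0, 1250.0, 700.0]
ints = [250.0, 400.0, 120.0]
print(radiative_efficiency(bands, ints, nu_grid, f_grid))
```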
Spectral mapping of soil organic matter
NASA Technical Reports Server (NTRS)
Kristof, S. J.; Baumgardner, M. F.; Johannsen, C. J.
1974-01-01
Multispectral remote sensing data were examined for use in the mapping of soil organic matter content. Computer-implemented pattern recognition techniques were used to analyze data collected in May 1969 and May 1970 by an airborne multispectral scanner over a 40-km flightline. Two fields within the flightline were selected for intensive study. Approximately 400 surface soil samples from these fields were obtained for organic matter analysis. The analytical data were used as training sets for computer-implemented analysis of the spectral data. It was found that within the geographical limitations included in this study, multispectral data and automatic data processing techniques could be used very effectively to delineate and map surface soils areas containing different levels of soil organic matter.
Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G
2018-05-25
Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required. This article is protected by copyright. All rights reserved.
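A generic OpenCV-based sketch of the inpainting idea is given below: cell pixels are masked and filled from the surrounding background to approximate the incident intensity, from which an optical density can be formed. The mask construction, inpainting radius, and Navier-Stokes variant are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
import cv2

def incident_intensity_by_inpainting(frame, rbc_mask, radius=5):
    """Estimate the incident light intensity under red blood cells by digitally
    inpainting the cell-occupied pixels from the surrounding background.
    `frame` is a single 8-bit transmission image, `rbc_mask` a binary mask of
    cell pixels."""
    mask = (rbc_mask > 0).astype(np.uint8)
    i0 = cv2.inpaint(frame, mask, radius, cv2.INPAINT_NS)
    # optical density inside the cells, used downstream for saturation estimates
    eps = 1e-6
    od = -np.log((frame.astype(float) + eps) / (i0.astype(float) + eps))
    return i0, od

# usage with a synthetic frame: a dark "cell" on a bright background
frame = np.full((100, 100), 200, np.uint8)
cv2.circle(frame, (50, 50), 10, 120, -1)
mask = np.zeros_like(frame)
cv2.circle(mask, (50, 50), 12, 255, -1)
i0, od = incident_intensity_by_inpainting(frame, mask)
```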
Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation
Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan
2014-01-01
By reorganizing the execution order and optimizing the data structures, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA GPUs. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we proposed a series of optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the H.264 encoder workload is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program with a speedup ratio of 20 and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR ranges from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is strongly related to memory bandwidth, which gives an insight for new architecture design. PMID:24757432
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time consuming. In order to speed-up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of its two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation up to 28.8 times compared to ZNCC calculated directly, depending on the size of interrogation window and region of interest. The results of three synthetic test cases, such as a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended to be used for initial velocity field calculation, with further correction using more accurate techniques.
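The core idea, correlating window projections instead of the full two-dimensional intensity fields, can be sketched as follows; the brute-force search, the way the row and column scores are combined, and the test pattern are simplifications for illustration, not the paper's parallel implementation.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length 1-D signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_by_projections(window, search):
    """Locate `window` inside `search` using ZNCC of row/column projections
    rather than of the full 2-D intensity fields; a coarse estimator to be
    refined later by a full 2-D or window-deformation pass."""
    wy, wx = window.shape
    px, py = window.sum(axis=0), window.sum(axis=1)     # window projections
    best, best_score = (0, 0), -np.inf
    for dy in range(search.shape[0] - wy + 1):
        for dx in range(search.shape[1] - wx + 1):
            sub = search[dy:dy + wy, dx:dx + wx]
            score = zncc(px, sub.sum(axis=0)) + zncc(py, sub.sum(axis=1))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# usage: recover a known location of a particle-image-like pattern
rng = np.random.default_rng(2)
img = rng.random((64, 64))
win = img[20:36, 24:40]
print(match_by_projections(win, img))   # expect (20, 24)
```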
Extension of filament propagation in water with Bessel-Gaussian beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaya, G.; Sayrac, M.; Boran, Y.
We experimentally studied intense femtosecond pulse filamentation and propagation in water for Bessel-Gaussian beams with different numbers of radial modal lobes. The transverse modes of the incident Bessel-Gaussian beam were created from a Gaussian beam of a Ti:sapphire laser system by using computer generated hologram techniques. We found that filament propagation length increased with increasing number of lobes under the conditions of the same peak intensity, pulse duration, and the size of the central peak of the incident beam, suggesting that the radial modal lobes may serve as an energy reservoir for the filaments formed by the central intensity peak.
Computation of transmitted and received B1 fields in magnetic resonance imaging.
Milles, Julien; Zhu, Yue Min; Chen, Nan-Kuei; Panych, Lawrence P; Gimenez, Gérard; Guttmann, Charles R G
2006-05-01
Computation of B1 fields is a key issue for determination and correction of intensity nonuniformity in magnetic resonance images. This paper presents a new method for computing transmitted and received B1 fields. Our method combines a modified MRI acquisition protocol and an estimation technique based on the Levenberg-Marquardt algorithm and spatial filtering. It enables accurate estimation of transmitted and received B1 fields for both homogeneous and heterogeneous objects. The method is validated using numerical simulations and experimental data from phantom and human scans. The experimental results are in agreement with theoretical expectations.
GPU and APU computations of Finite Time Lyapunov Exponent fields
NASA Astrophysics Data System (ADS)
Conti, Christian; Rossinelli, Diego; Koumoutsakos, Petros
2012-03-01
We present GPU and APU accelerated computations of Finite-Time Lyapunov Exponent (FTLE) fields. The calculation of FTLEs is a computationally intensive process, as in order to obtain the sharp ridges associated with the Lagrangian Coherent Structures an extensive resampling of the flow field is required. The computational performance of this resampling is limited by the memory bandwidth of the underlying computer architecture. The present technique harnesses data-parallel execution of many-core architectures and relies on fast and accurate evaluations of moment conserving functions for the mesh to particle interpolations. We demonstrate how the computation of FTLEs can be efficiently performed on a GPU and on an APU through OpenCL and we report over one order of magnitude improvements over multi-threaded executions in FTLE computations of bluff body flows.
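The FTLE computation itself is compact: finite-difference the flow map, form the Cauchy-Green tensor, and take its largest eigenvalue. A serial NumPy sketch is given below for a simple analytic flow map; it shows the operation the paper accelerates, not the GPU/APU implementation or the particle-mesh interpolation.

```python
import numpy as np

def ftle_field(flow_map_x, flow_map_y, dx, dy, T):
    """Compute the FTLE field from the final particle positions (flow map) on a
    regular grid: finite-difference the flow map, form the Cauchy-Green tensor,
    and take the logarithm of the square root of its largest eigenvalue."""
    dphixdx, dphixdy = np.gradient(flow_map_x, dx, dy)
    dphiydx, dphiydy = np.gradient(flow_map_y, dx, dy)
    ftle = np.zeros_like(flow_map_x)
    for i in range(ftle.shape[0]):
        for j in range(ftle.shape[1]):
            F = np.array([[dphixdx[i, j], dphixdy[i, j]],
                          [dphiydx[i, j], dphiydy[i, j]]])
            C = F.T @ F                            # Cauchy-Green tensor
            lam = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(np.sqrt(lam)) / abs(T)
    return ftle

# usage with an analytic shear-like flow map
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100), indexing="ij")
T = 1.0
phi_x, phi_y = x + 0.5 * T * y, y                  # particles advected by u = (0.5*y, 0)
print(ftle_field(phi_x, phi_y, x[1, 0] - x[0, 0], y[0, 1] - y[0, 0], T).mean())
```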
Statistical normalization techniques for magnetic resonance imaging.
Shinohara, Russell T; Sweeney, Elizabeth M; Goldsmith, Jeff; Shiee, Navid; Mateen, Farrah J; Calabresi, Peter A; Jarso, Samson; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M
2014-01-01
While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.
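One simple, biologically motivated normalization in this spirit is to standardize each image by the statistics of a reference tissue, as sketched below; the choice of reference mask and the z-scoring are illustrative assumptions and do not reproduce the specific methods compared in the paper.

```python
import numpy as np

def tissue_normalize(image, tissue_mask):
    """Normalize an MR image so a reference tissue (e.g. normal-appearing white
    matter) has zero mean and unit standard deviation, giving intensities a
    consistent interpretation across scans."""
    ref = image[tissue_mask > 0]
    return (image - ref.mean()) / (ref.std() + 1e-12)

# usage with a synthetic volume and mask
rng = np.random.default_rng(3)
vol = rng.normal(400, 60, (64, 64, 32))
mask = np.zeros_like(vol, dtype=np.uint8)
mask[20:40, 20:40, 10:20] = 1
vol_norm = tissue_normalize(vol, mask)
```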
Digital computer processing of peach orchard multispectral aerial photography
NASA Technical Reports Server (NTRS)
Atkinson, R. J.
1976-01-01
Several methods of analysis using digital computers applicable to digitized multispectral aerial photography, are described, with particular application to peach orchard test sites. This effort was stimulated by the recent premature death of peach trees in the Southeastern United States. The techniques discussed are: (1) correction of intensity variations by digital filtering, (2) automatic detection and enumeration of trees in five size categories, (3) determination of unhealthy foliage by infrared reflectances, and (4) four band multispectral classification into healthy and declining categories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Jean-Baptiste; Falk, Alexander T.; Auberdiac, Pierre
Introduction: For patients with cervical cancer, intensity-modulated radiation therapy (IMRT) improves target coverage and allows dose escalation while reducing the radiation dose to organs at risk (OARs). In this study, we compared dosimetric parameters among 3-dimensional conformal radiotherapy (3D-CRT), “step-and-shoot” IMRT, and volumetric intensity-modulated arc radiotherapy (VMAT) in a series of patients with cervical cancer receiving definitive radiotherapy. Computed tomography (CT) scans of 10 patients with histologically proven cervical cancer treated with definitive radiation therapy (RT) from December 2008 to March 2010 at our department were selected for this study. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated following the guidelines of the Gyn IMRT consortium and included the cervix, uterus, parametrial tissues, and the pelvic nodes including the presacral nodes. The median age was 57 years (range: 30 to 85 years). All 10 patients had squamous cell carcinoma of Federation of Gynecology and Obstetrics (FIGO) stage IB-IIIB. All patients were treated by VMAT. OAR doses were significantly reduced for plans with the intensity-modulated techniques compared with 3D-CRT, except for the dose to the vagina. Between the 2 intensity-modulated techniques, a significant difference was observed for the mean dose to the small intestine, to the benefit of VMAT (p < 0.001). Otherwise, there was no improvement in OAR sparing for VMAT, although there was a tendency toward a slightly decreased average dose to the rectum (−0.65 Gy), which was not significant (p = 0.07). The intensity-modulated techniques have many advantages over 3D-CRT in terms of quality indices, and particularly OAR sparing. Following the ongoing technologic developments in modern radiotherapy, it is essential to evaluate the intensity-modulated techniques in prospective studies of a larger scale.
NASA Astrophysics Data System (ADS)
Shaari, M. S.; Akramin, M. R. M.; Ariffin, A. K.; Abdullah, S.; Kikuchi, M.
2018-02-01
This paper presents the fatigue crack growth (FCG) behavior of semi-elliptical surface cracks in an API X65 gas pipeline using the S-version FEM. A global-local overlay technique was used to predict the fatigue behavior; it involves two separate meshes, one for the global geometry and one for the local crack. The pre/post-processing program FAST was used to model the global geometry (coarser mesh), including the material properties and boundary conditions. The local crack (finer mesh) was then defined at the exact crack location with appropriate mesh control. The local mesh was overlaid onto the global mesh before the numerical computation was performed. The stress intensity factors were computed using the virtual crack closure-integral method (VCCM). The most important result is the fatigue crack growth behavior, described by the crack depth (a), the crack length (c) and the stress intensity factors (SIF). The correlation between fatigue crack growth and the SIF shows good growth in the crack depth (a), whereas the crack length (c) exhibits a stunted growth behavior. The S-version FEM benefits the user because the overlay technique shortens the computation process.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
NASA Astrophysics Data System (ADS)
Civale, John; Ter Haar, Gail; Rivens, Ian; Bamber, Jeff
2005-09-01
Currently, the intensity to be used in our clinical HIFU treatments is calculated from the acoustic path lengths in different tissues measured on diagnostic ultrasound images of the patient in the treatment position, and published values of ultrasound attenuation coefficients. This yields an approximate value for the acoustic power at the transducer required to give a stipulated focal intensity in situ. Estimation methods for the actual acoustic attenuation have been investigated in large parts of the tissue path overlying the target volume from the backscattered ultrasound signal for each patient (backscatter attenuation estimation: BAE). Several methods have been investigated. The backscattered echo information acquired from an Acuson scanner has been used to compute the diffraction-corrected attenuation coefficient at each frequency using two methods: a substitution method and an inverse diffraction filtering process. A homogeneous sponge phantom was used to validate the techniques. The use of BAE to determine the correct HIFU exposure parameters for lesioning has been tested in ex vivo liver. HIFU lesions created with a 1.7-MHz therapy transducer have been studied using a semiautomated image processing technique. The reproducibility of lesion size for given in situ intensities determined using BAE and empirical techniques has been compared.
Determination of stress intensity factors for interface cracks under mixed-mode loading
NASA Technical Reports Server (NTRS)
Naik, Rajiv A.; Crews, John H., Jr.
1992-01-01
A simple technique was developed using conventional finite element analysis to determine stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. This technique involves the calculation of crack tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating the K's for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 for mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading. The agreement between the exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.
Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks
NASA Technical Reports Server (NTRS)
Maskey, Manil; Cecil, Dan; Ramachandran, Rahul; Miller, Jeffrey J.
2018-01-01
Estimating tropical cyclone intensity from satellite imagery alone is a challenging problem. The Dvorak technique has been applied successfully for more than 30 years and, with some modifications and improvements, is still used worldwide for tropical cyclone intensity estimation. A number of semi-automated techniques have been derived using the original Dvorak technique. However, these techniques suffer from subjective bias, as is evident from the most recent estimations on October 10, 2017 at 1500 UTC for Tropical Storm Ophelia: The Dvorak intensity estimates ranged from T2.3/33 kt (Tropical Cyclone Number 2.3/33 knots) from UW-CIMSS (University of Wisconsin-Madison - Cooperative Institute for Meteorological Satellite Studies) to T3.0/45 kt from TAFB (the National Hurricane Center's Tropical Analysis and Forecast Branch) to T4.0/65 kt from SAB (NOAA/NESDIS Satellite Analysis Branch). In this particular case, two human experts at TAFB and SAB differed by 20 knots in their Dvorak analyses, and the automated version at the University of Wisconsin was 12 knots lower than either of them. The National Hurricane Center (NHC) estimates about 10-20 percent uncertainty in its post analysis when only satellite based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to tropical cyclone intensity. This study aims to utilize deep learning, the current state of the art in pattern recognition and image recognition, to address the need for an automated and objective tropical cyclone intensity estimation. Deep learning is a multi-layer neural network consisting of several layers of simple computational units. It learns discriminative features without relying on a human expert to identify which features are important. Our study mainly focuses on convolutional neural network (CNN), a deep learning algorithm, to develop an objective tropical cyclone intensity estimation. CNN is a supervised learning algorithm requiring a large amount of training data. Since archives of intensity data and tropical-cyclone-centric satellite images are openly available, the training data are easily created by combining the two. Results, case studies, prototypes, and advantages of this approach will be discussed.
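A minimal CNN regressor of the kind described, mapping a storm-centered IR image to an intensity in knots, might look like the sketch below; the architecture, input size, and dummy data are illustrative assumptions and are not the network or training set used in the study.

```python
import torch
import torch.nn as nn

class IntensityCNN(nn.Module):
    """Minimal convolutional regressor mapping a single-channel IR image of a
    storm to an intensity value in knots.  Layer sizes are illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.features(x))

# usage: one training step on a dummy batch of 128x128 IR images
model = IntensityCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 128, 128)            # stand-in for IR imagery
wind_kt = torch.rand(8, 1) * 120 + 20           # stand-in for best-track intensities
loss = nn.functional.mse_loss(model(images), wind_kt)
loss.backward()
opt.step()
```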
2010-03-01
The goal of this project is to develop a novel clinically useful delivered-dose verification protocol for modern prostate VMAT using an Electronic Portal Imaging Device (EPID) and onboard cone-beam computed tomography (CBCT). A number of important milestones have been accomplished, including (i) a calibrated CBCT HU vs. electron density curve; (ii) ...
NASA Astrophysics Data System (ADS)
Park, Yong Min; Kim, Byeong Hee; Seo, Young Ho
2016-06-01
This paper presents a selective aluminum anodization technique for the fabrication of microstructures covered by nanoscale dome structures. It is possible to fabricate bulging microstructures, utilizing the different growth rates of anodic aluminum oxide in non-uniform electric fields, because the growth rate of anodic aluminum oxide depends on the intensity of electric field, or current density. After anodizing under a non-uniform electric field, bulging microstructures covered by nanostructures were fabricated by removing the residual aluminum layer. The non-uniform electric field induced by insulative micropatterns was estimated by computational simulations and verified experimentally. Utilizing computational simulations, the intensity profile of the electric field was calculated according to the ratio of height and width of the insulative micropatterns. To compare computational simulation results and experimental results, insulative micropatterns were fabricated using SU-8 photoresist. The results verified that the shape of the bottom topology of anodic alumina was strongly dependent on the intensity profile of the applied electric field, or current density. The one-step fabrication of nanostructure-covered microstructures can be applied to various fields, such as nano-biochip and nano-optics, owing to its simplicity and cost effectiveness.
Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Sewell, Christopher; Heitmann, Katrin
2015-01-01
Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
[Parallel virtual reality visualization of extreme large medical datasets].
Tang, Min
2010-04-01
On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the intranets and common-configuration computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated, and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.
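The Maximum Intensity Projection step parallelizes naturally because partial projections from sub-volumes combine by an element-wise maximum; the sketch below shows this decomposition in NumPy, with the cluster communication and rendering layers of the paper omitted.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Maximum Intensity Projection: for each ray along `axis`, keep the
    brightest voxel."""
    return volume.max(axis=axis)

def merge_partial_projections(partials):
    # partial MIPs computed on different nodes combine by an element-wise maximum
    return np.maximum.reduce(partials)

# usage: split a synthetic volume along the projection axis into two "node"
# chunks, project each, then merge the partial results
vol = np.random.rand(128, 256, 256)
chunks = np.array_split(vol, 2, axis=0)
mip = merge_partial_projections([maximum_intensity_projection(c) for c in chunks])
```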
A Biomechanical Modeling Guided CBCT Estimation Technique
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-01-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks. PMID:27831866
Fast Learning for Immersive Engagement in Energy Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M
Fast computation, which is critical for immersive engagement with and learning from energy simulations, would be furthered by developing a general method for creating rapidly computed, simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
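A reduced-form surrogate of this kind can be illustrated as follows: sample scenario parameters, run the expensive simulation to build a training set, fit a fast statistical model, and check its error on held-out runs. The toy simulation function and the random-forest model below are stand-ins, not NREL's simulations or the project's chosen learners.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in for an expensive energy simulation: maps a few scenario
# parameters to a scalar output.  The real simulations are far costlier; this
# only illustrates fitting a fast "reduced-form" surrogate to their results.
def expensive_simulation(x):
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(4)
X_train = rng.uniform(-1, 1, (500, 2))          # sampled scenario designs
y_train = expensive_simulation(X_train)         # full-simulation results

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# the surrogate answers new scenario queries almost instantly; hold-out error
# quantifies how far its predictions may stray from the full simulation
X_test = rng.uniform(-1, 1, (100, 2))
err = np.abs(surrogate.predict(X_test) - expensive_simulation(X_test))
print(f"mean absolute surrogate error: {err.mean():.3f}")
```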
Applications of Deep Learning and Reinforcement Learning to Biological Data.
Mahmud, Mufti; Kaiser, Mohammed Shamim; Hussain, Amir; Vassanelli, Stefano
2018-06-01
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand
2018-07-01
A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix with a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into the corresponding 2D pattern. Then, each pattern is encoded into the phase-only masks by making use of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the process of computational ghost imaging to reconstruct the original information with high quality. When a significantly small number of phase-only masks are used to record the measured intensities in a single-pixel bucket detector, the information can be authenticated without clear visualization by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which will provide an effective alternative for enriching the related research on the computational ghost imaging technique.
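The measurement-and-reconstruction part of such a scheme can be sketched directly: Hadamard-derived patterns illuminate the object, a single-pixel detector records one bucket value per pattern, and the image is recovered by correlating patterns with intensities. The phase-retrieval step that encodes each pattern into a phase-only mask is omitted, and the object and pattern count are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def ghost_imaging(obj, n_patterns=None):
    """Computational ghost imaging with Hadamard-derived illumination patterns:
    each pattern yields one bucket intensity, and the object is recovered by
    correlating patterns with intensities."""
    n = obj.size
    H = hadamard(n)                         # rows are +/-1 patterns
    patterns = (H + 1) / 2.0                # binary illumination patterns
    if n_patterns is not None:
        patterns = patterns[:n_patterns]
    bucket = patterns @ obj.ravel()         # single-pixel detector readings
    # correlation reconstruction <I P> - <I><P>
    rec = (bucket[:, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
    return rec.reshape(obj.shape)

# usage: a 16x16 test object measured with an order-256 Hadamard pattern set
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0
rec = ghost_imaging(obj)
print(np.corrcoef(obj.ravel(), rec.ravel())[0, 1])
```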
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing
NASA Astrophysics Data System (ADS)
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C.; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from ISO/IEC moving pictures experts group (MPEG) has succeeded in enabling the interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of CDVS encoder unfortunately hinders its widely deployment in industry for large-scale visual search. In this paper, we revisit the merits of low complexity design of CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of GPU. We elegantly shift the computation-intensive and parallel-friendly modules to the state-of-the-arts GPU platforms, in which the thread block allocation and the memory access are jointly optimized to eliminate performance loss. In addition, those operations with heavy data dependence are allocated to CPU to resolve the extra but non-necessary computation burden for GPU. Furthermore, we have demonstrated the proposed fast CDVS encoder can work well with those convolution neural network approaches which has harmoniously leveraged the advantages of GPU platforms, and yielded significant performance improvements. Comprehensive experimental results over benchmarks are evaluated, which has shown that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
Three-dimensional digital mapping of the optic nerve head cupping in glaucoma
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Ramirez, Manuel; Morales, Jose
1992-08-01
Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks, including feature extraction, registration by cepstral analysis, and correction for intensity variations, are performed. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching techniques required to obtain the translational differences at every step use cepstral analysis and a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the diagnostic criteria for the diagnosis of glaucoma.
Point Target Detection in IR Image Sequences using Spatio-Temporal Hypotheses Testing
1999-02-01
incorporate temporal as well as spatial infor- mation, they are often referred to as \\ track before detect " algorithms. The standard approach was to pose the...6, 3]. A drawback of these track - before - detect techniques is that they are very computationally intensive since the entire 3-D space must be ltered
Salient regions detection using convolutional neural networks and color volume
NASA Astrophysics Data System (ADS)
Liu, Guang-Hai; Hou, Yingkun
2018-03-01
Convolutional neural network is an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate depth cues and color volume to saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on MSRA1000 and ECSSD datasets.
NASA Astrophysics Data System (ADS)
Glushkov, A. V.; Gurskaya, M. Yu; Ignatenko, A. V.; Smirnov, A. V.; Serga, I. N.; Svinarenko, A. A.; Ternovsky, E. V.
2017-10-01
The consistent relativistic energy approach to finite Fermi systems (atoms and nuclei) in a strong realistic laser field is presented and applied to computing the multiphoton resonance parameters in some atoms and nuclei. The approach is based on the Gell-Mann and Low S-matrix formalism, the multiphoton resonance line moments technique, and the advanced Ivanov-Ivanova algorithm for calculating the Green's function of the Dirac equation. Data for the multiphoton resonance width and shift of the Cs atom and the 57Fe nucleus as functions of the laser intensity are listed.
Vibrational Spectroscopy and Astrobiology
NASA Technical Reports Server (NTRS)
Chaban, Galina M.; Kwak, D. (Technical Monitor)
2001-01-01
The role of vibrational spectroscopy in solving problems related to astrobiology will be discussed. Vibrational (infrared) spectroscopy is a very sensitive tool for identifying molecules. The theoretical approach used in this work is based on direct computation of anharmonic vibrational frequencies and intensities from electronic structure codes. One application of this computational technique is the possible identification of biological building blocks (amino acids, small peptides, DNA bases) in the interstellar medium (ISM). Identifying small biological molecules in the ISM is very important from the point of view of the origin of life. Hybrid (quantum mechanics/molecular mechanics) theoretical techniques will be discussed that may make it possible to obtain accurate vibrational spectra of biomolecular building blocks and to create a database of spectroscopic signatures that can assist observations of these molecules in space. Another application of the direct computational spectroscopy technique is to help design and analyze experimental observations of the ice surface of Europa, one of Jupiter's moons, which possibly contains hydrated salts. The presence of hydrated salts on the surface can be an indication of a subsurface ocean and the possible existence of life forms inhabiting such an ocean.
Novelli, M D; Barreto, E; Matos, D; Saad, S S; Borra, R C
1997-01-01
The authors present the experimental results of the computerized quantification of tissue structures involved in the reparative process of colonic anastomoses performed with manual suture and with a biofragmentable ring. The quantified variables in this study were: oedema fluid, myofiber tissue, blood vessels, and cellular nuclei. Image processing software developed at the Laboratório de Informática Dedicado à Odontologia (LIDO) was used to quantify the pathognomonic alterations of the inflammatory process in colonic anastomoses performed in 14 dogs. The results were compared with those obtained through traditional diagnosis by two pathologists as a counterproof measure. The criteria for these diagnoses were defined in levels (absent, light, moderate, and intense), which were compared with the analysis performed by the computer. There was a statistically significant difference between the two techniques: the biofragmentable ring technique exhibited low oedema fluid, organized myofiber tissue, and a higher number of elongated cellular nuclei in relation to the manual suture technique. The analysis of histometric variables through computational image processing was considered efficient and powerful for quantifying the main inflammatory and reparative tissue changes.
Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data
NASA Astrophysics Data System (ADS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.
2017-02-01
We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. and Switzer et al., covering about 41 deg² at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales which FASTICA, as a conservative subtraction technique of non-Gaussian signals, cannot mitigate. However, we determine similar GBT-WiggleZ cross-correlation measurements to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
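The foreground subtraction step can be sketched with an off-the-shelf FastICA implementation; this is only a schematic of the idea (the data layout and the number of components are assumptions, not the pipeline used in the paper):

```python
import numpy as np
from sklearn.decomposition import FastICA

def clean_maps(cube, n_fg=4):
    """cube: (n_freq, n_pix) brightness-temperature maps.
    Model the data as a mixture of n_fg independent, strongly
    non-Gaussian foreground components and return the residual,
    which retains the (near-Gaussian) 21 cm signal plus noise."""
    X = cube.T                                   # treat pixels as samples
    ica = FastICA(n_components=n_fg, random_state=0, max_iter=1000)
    sources = ica.fit_transform(X)               # (n_pix, n_fg)
    fg_model = ica.inverse_transform(sources)    # foreground reconstruction
    return (X - fg_model).T                      # cleaned (n_freq, n_pix)
```

Increasing `n_fg` removes more foreground structure but, as the abstract notes for any conservative subtraction, it cannot remove Gaussian instrumental noise.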
A strategy for selecting data mining techniques in metabolomics.
Banimustafa, Ahmed Hmaidan; Hardy, Nigel W
2012-01-01
There is a general agreement that the development of metabolomics depends not only on advances in chemical analysis techniques but also on advances in computing and data analysis methods. Metabolomics data usually requires intensive pre-processing, analysis, and mining procedures. Selecting and applying such procedures requires attention to issues including justification, traceability, and reproducibility. We describe a strategy for selecting data mining techniques which takes into consideration the goals of data mining techniques on the one hand, and the goals of metabolomics investigations and the nature of the data on the other. The strategy aims to ensure the validity and soundness of results and promote the achievement of the investigation goals.
Advanced Software V&V for Civil Aviation and Autonomy
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.
2017-01-01
With advances in high-performance computing platforms (e.g., advanced graphics processing units or multi-core processors), computationally intensive software techniques such as those used in artificial intelligence or formal methods have provided us with an opportunity to further increase safety in the aviation industry. Some of these techniques have facilitated building in safety at design time, as in aircraft engines or software verification and validation, and others can introduce safety benefits during operations as long as we adapt our processes. In this talk, I will present how NASA is taking advantage of these new software techniques to build in safety at design time through advanced software verification and validation, which can be applied earlier and earlier in the design life cycle and thus also help reduce the cost of aviation assurance. I will then show how run-time techniques (such as runtime assurance or data analytics) offer us a chance to catch even more complex problems, even in the face of changing and unpredictable environments. These new techniques will be extremely useful as our aviation systems become more complex and more autonomous.
Analysis of computer images in the presence of metals
NASA Astrophysics Data System (ADS)
Buzmakov, Alexey; Ingacheva, Anastasia; Prun, Victor; Nikolaev, Dmitry; Chukalina, Marina; Ferrero, Claudio; Asadchikov, Victor
2018-04-01
Artifacts caused by intensely absorbing inclusions are encountered in computed tomography with polychromatic scanning and may obscure or simulate pathologies in medical applications. To improve the quality of reconstruction in the presence of high-Z inclusions, we previously proposed and tested with synthetic data an iterative technique with a soft penalty mimicking linear inequalities on the photon-starved rays. This note reports a test at the tomographic laboratory set-up of the Institute of Crystallography FSRC "Crystallography and Photonics" RAS, in which tomographic scans were successfully made of a temporary tooth without an inclusion and with a Pb inclusion.
Mars Rover imaging systems and directional filtering
NASA Technical Reports Server (NTRS)
Wang, Paul P.
1989-01-01
Computer literature searches were carried out at Duke University and NASA Langley Research Center. The purpose is to enhance personal knowledge of the technical problems of pattern recognition and image understanding which must be solved for the Mars Rover and Sample Return Mission. An intensive study of a large collection of relevant literature resulted in a compilation of all important documents in one place. Furthermore, the documents are being classified into: Mars Rover; computer vision (theory); imaging systems; pattern recognition methodologies; and other smart techniques (AI, neural networks, fuzzy logic, etc.).
A CT and MRI scan to MCNP input conversion program.
Van Riper, Kenneth A
2005-01-01
We describe a new program to read a sequence of tomographic scans and prepare the geometry and material sections of an MCNP input file. Image processing techniques include contrast controls and mapping of grey scales to colour. The user interface provides several tools with which the user can associate a range of image intensities to an MCNP material. Materials are loaded from a library. A separate material assignment can be made to a pixel intensity or range of intensities when that intensity dominates the image boundaries; this material is assigned to all pixels with that intensity contiguous with the boundary. Material fractions are computed in a user-specified voxel grid overlaying the scans. New materials are defined by mixing the library materials using the fractions. The geometry can be written as an MCNP lattice or as individual cells. A combination algorithm can be used to join neighbouring cells with the same material.
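A minimal sketch of the two core operations described above, assuming CT slices as NumPy arrays; the material IDs and intensity thresholds below are purely illustrative and are not taken from the program itself:

```python
import numpy as np

def assign_materials(slice_hu, ranges):
    """slice_hu: 2-D array of CT numbers.  ranges: list of
    (lo, hi, material_id) tuples; returns a material-ID image."""
    mat = np.zeros(slice_hu.shape, dtype=int)
    for lo, hi, mid in ranges:
        mat[(slice_hu >= lo) & (slice_hu < hi)] = mid
    return mat

def voxel_fractions(mat, block=8):
    """Material fractions inside block x block voxels overlaying the scan."""
    ny, nx = mat.shape
    fracs = {}
    for j in range(0, ny - block + 1, block):
        for i in range(0, nx - block + 1, block):
            cell = mat[j:j + block, i:i + block]
            ids, counts = np.unique(cell, return_counts=True)
            fracs[(j // block, i // block)] = dict(
                zip(ids.tolist(), (counts / cell.size).tolist()))
    return fracs

# Example (illustrative thresholds only):
# ranges = [(-1000, -300, 1),   # air
#           (-300, 200, 2),     # soft tissue
#           (200, 3000, 3)]     # bone
```

The per-voxel fractions would then feed the mixing of library materials before the geometry is written out as a lattice or as individual cells.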
Haji-Saeed, B; Sengupta, S K; Testorf, M; Goodhue, W; Khoury, J; Woods, C L; Kierstead, J
2006-05-10
We propose and demonstrate a new photorefractive real-time holographic deconvolution technique for adaptive one-way image transmission through aberrating media by means of four-wave mixing. In contrast with earlier methods, which typically required various codings of the exact phase or two-way image transmission for correcting phase distortion, our technique relies on one-way image transmission through the use of exact phase information. Our technique can simultaneously correct both amplitude and phase distortions. We include several forms of image degradation, various test cases, and experimental results. We characterize the performance as a function of the input beam ratios for four metrics: signal-to-noise ratio, normalized root-mean-square error, edge restoration, and peak-to-total energy ratio. In our characterization we use false-color graphic images to display the best beam-intensity ratio two-dimensional region(s) for each of these metrics. Test cases are simulated at the optimal values of the beam-intensity ratios. We demonstrate our results through both experiment and computer simulation.
NASA Technical Reports Server (NTRS)
Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha
2016-01-01
A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for the absolute photometry between instruments or as input parameters for auroral electron transport models.
Nonlinear Model Predictive Control for Cooperative Control and Estimation
NASA Astrophysics Data System (ADS)
Ru, Pengkai
Recent advances in computational power have made it possible to do expensive online computations for control systems. It is becoming more realistic to perform computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. Being one of the most important optimization based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched and its stability can be guaranteed in the majority of its application scenarios. However, one issue that still remains with linear MPC is that it completely ignores the system's inherent nonlinearities thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC, would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. We tried to strike a balance in this dissertation by employing a state representation technique, namely, the state dependent coefficient (SDC) representation. This new technique would render an improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable. In fact, the computational power required is bounded only by a constant factor of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
Efficient calculation of luminance variation of a luminaire that uses LED light sources
NASA Astrophysics Data System (ADS)
Goldstein, Peter
2007-09-01
Many luminaires have an array of LEDs that illuminate a lenslet-array diffuser in order to create the appearance of a single, extended source with a smooth luminance distribution. Designing such a system is challenging because luminance calculations for a lenslet array generally involve tracing millions of rays per LED, which is computationally intensive and time-consuming. This paper presents a technique for calculating an on-axis luminance distribution by tracing only one ray per LED per lenslet. A multiple-LED system is simulated with this method, and with Monte Carlo ray-tracing software for comparison. Accuracy improves, and computation time decreases by at least five orders of magnitude with this technique, which has applications in LED-based signage, displays, and general illumination.
Hydrologic Response to Climate Change: Missing Precipitation Data Matters for Computed Timing Trends
NASA Astrophysics Data System (ADS)
Daniels, B.
2016-12-01
This work demonstrates the derivation of climate timing statistics and their application to determine the resulting hydroclimate impacts. Long-term daily precipitation observations from 50 California stations were used to compute climate trends of precipitation event Intensity, event Duration, and Pause between events. Each precipitation event trend was then applied as input to a PRMS hydrology model, which showed hydrologic changes to recharge, baseflow, streamflow, etc. An important concern was precipitation uncertainty induced by missing observation values, causing errors in the quantification of precipitation trends. Many standard statistical techniques such as ARIMA and simple endogenous or even exogenous imputation were applied but failed to resolve these uncertainties. What helped resolve them was the use of multiple imputation techniques, which involved fitting Weibull probability distributions to the multiply imputed values for the three precipitation trends. Permutation resampling techniques using Monte Carlo processing were then applied to the multiple imputation values to derive significance p-values for each trend. Significance at the 95% level was found for Intensity at 11 of the 50 stations, for Duration at 16 of the 50, and for Pause at 19, of which 12 were 99% significant. The significance-weighted trends for California are Intensity -4.61% per decade, Duration +3.49% per decade, and Pause +3.58% per decade. Two California basins with PRMS hydrologic models were studied: Feather River in the northern Sierra Nevada mountains and the central coast Soquel-Aptos. Each local trend was changed without changing the other trends or the total precipitation. Feather River Basin's critical supply to Lake Oroville and the State Water Project benefited from a total streamflow increase of 1.5%. The Soquel-Aptos Basin water supply was impacted by a total groundwater recharge decrease of -7.5% and a streamflow decrease of -3.2%.
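The statistical machinery described, fitting Weibull distributions to multiply imputed trend values and deriving Monte Carlo permutation p-values, can be sketched as follows (a simplified stand-in with a linear slope as the trend statistic, not the study's actual procedure):

```python
import numpy as np
from scipy import stats

def weibull_summary(imputed_trends):
    """Fit a Weibull distribution to the trend estimates obtained from
    multiple imputations (one value per imputed data set)."""
    shape, loc, scale = stats.weibull_min.fit(imputed_trends)
    return shape, loc, scale

def permutation_pvalue(years, values, n_perm=10000, seed=0):
    """Monte Carlo permutation test for a linear trend: shuffle the
    time labels and compare the observed slope with the null slopes."""
    rng = np.random.default_rng(seed)
    obs = np.polyfit(years, values, 1)[0]
    null = np.array([np.polyfit(rng.permutation(years), values, 1)[0]
                     for _ in range(n_perm)])
    return float(np.mean(np.abs(null) >= abs(obs)))
```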
Advanced ballistic range technology
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1993-01-01
Experimental interferograms, schlieren, and shadowgraphs are used for quantitative and qualitative flow-field studies. These images are created by passing light through a flow field, and the recorded intensity patterns are functions of the phase shift and angular deflection of the light. As part of the grant NCC2-583, techniques and software have been developed for obtaining phase shifts from finite-fringe interferograms and for constructing optical images from Computational Fluid Dynamics (CFD) solutions. During the period from 1 Nov. 1992 - 30 Jun. 1993, research efforts have been concentrated in improving these techniques.
Yang, Jiaheng; He, Xiaodong; Guo, Ruijun; Xu, Peng; Wang, Kunpeng; Sheng, Cheng; Liu, Min; Wang, Jin; Derevianko, Andrei; Zhan, Mingsheng
2016-09-16
We demonstrate that the coherence of a single mobile atomic qubit can be well preserved during a transfer process among different optical dipole traps (ODTs). This is a prerequisite step in realizing a large-scale neutral atom quantum information processing platform. A qubit encoded in the hyperfine manifold of an ⁸⁷Rb atom is dynamically extracted from the static quantum register by an auxiliary moving ODT and reinserted into the static ODT. Previous experiments were limited by decoherences induced by the differential light shifts of qubit states. Here, we apply a magic-intensity trapping technique which mitigates the detrimental effects of light shifts and substantially enhances the coherence time to 225±21 ms. The experimentally demonstrated magic trapping technique relies on the previously neglected hyperpolarizability contribution to the light shifts, which makes the light shift dependence on the trapping laser intensity parabolic. Because of the parabolic dependence, at a certain "magic" intensity, the first order sensitivity to trapping light-intensity variations over the ODT volume is eliminated. We experimentally demonstrate the utility of this approach and measure hyperpolarizability for the first time. Our results pave the way for constructing scalable quantum-computing architectures with single atoms trapped in an array of magic ODTs.
An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft
NASA Technical Reports Server (NTRS)
Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.
2010-01-01
The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. The vortex tracking requires the analysis of very large three-dimensional and time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field, and they may interact with the ground surface. Another level of complication can arise if vortices from multiple aircraft are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents application of the technique for single and multiple aircraft.
Rogers, Ian S.; Cury, Ricardo C.; Blankstein, Ron; Shapiro, Michael D.; Nieman, Koen; Hoffmann, Udo; Brady, Thomas J.; Abbara, Suhny
2010-01-01
Background: Despite rapid advances in cardiac computed tomography (CT), a strategy for optimal visualization of perfusion abnormalities on CT has yet to be validated. Objective: To evaluate the performance of several post-processing techniques of source data sets to detect and characterize perfusion defects in acute myocardial infarctions with cardiac CT. Methods: Twenty-one subjects (18 men; 60 ± 13 years) that were successfully treated with percutaneous coronary intervention for ST-segment myocardial infarction underwent 64-slice cardiac CT and 1.5 Tesla cardiac MRI scans following revascularization. Delayed enhancement MRI images were analyzed to identify the location of infarcted myocardium. Contiguous short axis images of the left ventricular myocardium were created from the CT source images using 0.75 mm multiplanar reconstruction (MPR), 5 mm MPR, 5 mm maximal intensity projection (MIP), and 5 mm minimum intensity projection (MinIP) techniques. Segments already confirmed to contain infarction by MRI were then evaluated qualitatively and quantitatively with CT. Results: Overall, 143 myocardial segments were analyzed. On qualitative analysis, the MinIP and thick MPR techniques had greater visibility and definition than the thin MPR and MIP techniques (p < 0.001). On quantitative analysis, the absolute difference in Hounsfield Unit (HU) attenuation between normal and infarcted segments was significantly greater for the MinIP (65.4 HU) and thin MPR (61.2 HU) techniques. However, the relative difference in HU attenuation was significantly greatest for the MinIP technique alone (95%, p < 0.001). Contrast to noise was greatest for the MinIP (4.2) and thick MPR (4.1) techniques (p < 0.001). Conclusion: The results of our current investigation found that MinIP and thick MPR detected infarcted myocardium with greater visibility and definition than MIP and thin MPR. PMID:20579617
Diffraction of Harmonic Flexural Waves in a Cracked Elastic Plate Carrying Electrical Current
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Hasanyan, Davresh; Librescu, Liviu; Qin, Zhanming
2005-01-01
The scattering effect of harmonic flexural waves at a through crack in an elastic plate carrying electrical current is investigated. In this context, the Kirchhoff bending plate theory is extended to include magnetoelastic interactions. An incident wave giving rise to bending moments symmetric about the longitudinal z-axis of the crack is applied. A Fourier transform technique reduces the problem to dual integral equations, which are then cast into a system of two singular integral equations. Efficient numerical computation is implemented to obtain the bending moment intensity factor for an arbitrary frequency of the incident wave and arbitrary electrical current intensity. The asymptotic behaviour of the bending moment intensity factor is analysed and parametric studies are conducted.
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material, and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and increase convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
Noncontact methods for optical testing of convex aspheric mirrors for future large telescopes
NASA Astrophysics Data System (ADS)
Goncharov, Alexander V.; Druzhin, Vladislav V.; Batshev, Vladislav I.
2009-06-01
Non-contact methods for testing large rotationally symmetric convex aspheric mirrors are proposed. These methods are based on non-null testing with side illumination schemes, in which a narrow collimated beam is reflected from the meridional aspheric profile of a mirror. The figure error of the mirror is deduced from the intensity pattern of the reflected beam obtained on a screen, which is positioned in the tangential plane (containing the optical axis) and perpendicular to the incoming beam. Testing of the entire surface is carried out by rotating the mirror about its optical axis and registering the characteristics of the intensity pattern on the screen. The intensity pattern can be formed using three different techniques: a modified Hartmann test, interference, and a boundary curve test. All these techniques are well known but have not been used in the proposed side illumination scheme. Analytical expressions characterizing the shape and location of the intensity pattern on the screen or a CCD have been developed for all types of conic surfaces. The main advantage of these testing methods compared with existing methods (Hindle sphere, null lens, computer-generated hologram) is that the reference system does not require large optical components.
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications are presented of a Lanczos algorithm on high-performance computers. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
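For reference, the core Lanczos recurrence looks like the sketch below; the repeated matrix-vector product is the step that the optimizations above (sparse storage, loop unrolling, vectorization) target. This is a generic textbook version, not the optimized shift-invert code described in the report:

```python
import numpy as np

def lanczos(matvec, n, k, seed=0):
    """Basic Lanczos tridiagonalization of a symmetric operator given
    by `matvec`; Ritz values are the eigenvalues of the returned T."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = matvec(Q[:, j])                 # dominant cost: mat-vec product
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                 # invariant subspace found
            break
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return T, Q[:, :k]
```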
Volumetric visualization algorithm development for an FPGA-based custom computing machine
NASA Astrophysics Data System (ADS)
Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim
1998-05-01
Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
NASA Astrophysics Data System (ADS)
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.
2018-01-01
We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
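A preconditioned block iterative eigensolver of the general kind described can be exercised with SciPy's LOBPCG routine; the sketch below is only illustrative (a random diagonally dominant matrix stands in for the nuclear configuration interaction Hamiltonian, and no physics-informed preconditioner or starting guess is supplied):

```python
import numpy as np
from scipy.sparse import random as sprandom, identity
from scipy.sparse.linalg import lobpcg

# Illustrative sparse symmetric matrix (stand-in for a CI Hamiltonian).
n, k = 2000, 5
A = sprandom(n, n, density=1e-3, random_state=0)
A = (A + A.T) + 10.0 * identity(n)

# Good starting guesses and an effective preconditioner (passed via M=...)
# are what drive the rapid convergence reported for block methods; here we
# just use random starting vectors.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((n, k))
vals, vecs = lobpcg(A, X0, largest=False, maxiter=200, tol=1e-6)
```

Because the block method applies the operator to several vectors at once, the matrix elements are reused across the block, which is the source of the higher arithmetic intensity mentioned above.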
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kung, Shiris Wai Sum; Wu, Vincent Wing Cheung; Kam, Michael Koon Ming, E-mail: kamkm@yahoo.co
2011-01-01
Purpose: Locally recurrent nasopharyngeal carcinoma (NPC) patients can be salvaged by reirradiation with a substantial degree of radiation-related complications. Stereotactic radiotherapy (SRT) is widely used in this regard because of its rapid dose falloff and high geometric precision. The aim of this study was to examine whether the newly developed intensity-modulated stereotactic radiotherapy (IMSRT) has any dosimetric advantages over three other stereotactic techniques, including circular arc (CARC), static conformal beam (SmMLC), and dynamic conformal arc (mARC), in treating locally recurrent NPC. Methods and Materials: Computed tomography images of 32 patients with locally recurrent NPC, previously treated with SRT, were retrieved from the stereotactic planning system for contouring and computing treatment plans. Treatment planning of each patient was performed for the four treatment techniques: CARC, SmMLC, mARC, and IMSRT. The conformity index (CI) and homogeneity index (HI) of the planning target volume (PTV) and doses to the organs at risk (OARs) and normal tissue were compared. Results: All four techniques delivered adequate doses to the PTV. IMSRT, SmMLC, and mARC delivered reasonably conformal and homogenous dose to the PTV (CI <1.47, HI <0.53), but not for CARC (p < 0.05). IMSRT presented with the smallest CI (1.37) and HI (0.40). Among the four techniques, IMSRT spared the greatest number of OARs, namely brainstem, temporal lobes, optic chiasm, and optic nerve, and had the smallest normal tissue volume in the low-dose region. Conclusion: Based on the dosimetric comparison, IMSRT was optimal for locally recurrent NPC by delivering a conformal and homogenous dose to the PTV while sparing OARs.
An independent software system for the analysis of dynamic MR images.
Torheim, G; Lombardi, M; Rinck, P A
1997-01-01
A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis, and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
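The curve parameters listed above can be computed from a region-of-interest time-intensity curve along the following lines (a simplified sketch; the baseline definition and slope conventions vary between systems and are assumptions here):

```python
import numpy as np

def curve_parameters(t, intensity, baseline_frames=3):
    """Derive common dynamic-MRI parameters from a time-intensity curve."""
    base = intensity[:baseline_frames].mean()
    enhancement = intensity - base
    peak_idx = int(np.argmax(enhancement))
    slopes = np.gradient(enhancement, t)
    auc = float(np.sum(0.5 * (enhancement[1:] + enhancement[:-1]) * np.diff(t)))
    return {
        "area_under_curve": auc,
        "peak_intensity": float(intensity[peak_idx]),
        "time_to_peak": float(t[peak_idx]),
        "relative_enhancement": float(enhancement[peak_idx] / (base + 1e-9)),
        "wash_in_slope": float(slopes[:peak_idx + 1].max()),
        "wash_out_slope": float(slopes[peak_idx:].min()),
    }
```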
Enabling NVM for Data-Intensive Scientific Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carns, Philip; Jenkins, John; Seo, Sangmin
Specialized, transient data services are playing an increasingly prominent role in data-intensive scientific computing. These services offer flexible, on-demand pairing of applications with storage hardware using semantics that are optimized for the problem domain. Concurrent with this trend, upcoming scientific computing and big data systems will be deployed with emerging NVM technology to achieve the highest possible price/productivity ratio. Clearly, therefore, we must develop techniques to facilitate the confluence of specialized data services and NVM technology. In this work we explore how to enable the composition of NVM resources within transient distributed services while still retaining their essential performance characteristics.more » Our approach involves eschewing the conventional distributed file system model and instead projecting NVM devices as remote microservices that leverage user-level threads, RPC services, RMA-enabled network transports, and persistent memory libraries in order to maximize performance. We describe a prototype system that incorporates these concepts, evaluate its performance for key workloads on an exemplar system, and discuss how the system can be leveraged as a component of future data-intensive architectures.« less
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are drawn using techniques like hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
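The evolutionary search component can be sketched as below, with the similarity measure left as a user-supplied callable (e.g. mutual information or normalized cross-correlation between the SAR and optical patches); the population size, mutation schedule, and the absence of the paper's local-search hybridization are simplifications of this sketch:

```python
import numpy as np

def evolve_offset(similarity, bounds, pop=30, gens=50, sigma=8.0, seed=0):
    """Evolutionary search for the (dx, dy) shift maximizing an
    intensity-based similarity measure; `similarity(dx, dy)` is any
    callable scoring a candidate translation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        fit = np.array([similarity(dx, dy) for dx, dy in P])
        parents = P[np.argsort(fit)[-pop // 2:]]           # keep the best half
        children = parents + rng.normal(0.0, sigma, parents.shape)
        children = np.clip(children, lo, hi)
        P = np.vstack([parents, children])
        sigma *= 0.95                                       # cool the mutation step
    fit = np.array([similarity(dx, dy) for dx, dy in P])
    return P[int(np.argmax(fit))]
```

A hybrid version would periodically apply a local (e.g. hill-climbing) refinement to the best individuals, which is the extension the paper uses to cut the number of objective function calls.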
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing.
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC Moving Picture Experts Group has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of a CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the graphics processing unit (GPU). We shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which the thread block allocation as well as the memory access mechanism are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU to avoid imposing extra, unnecessary computational burden on the GPU. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, enabling the advantages of GPU platforms to be leveraged harmoniously and yielding significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
Erasing the Milky Way: New Cleaning Technique Applied to GBT Intensity Mapping Data
NASA Technical Reports Server (NTRS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masi, K.W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.;
2016-01-01
We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point source contamination using an independent component analysis technique (fastica), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and cross-correlation with the galaxy survey data. We show that fastica is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales which fastica, as a conservative subtraction technique of non-Gaussian signals, cannot mitigate. However, we determine similar GBT-WiggleZ cross-correlation measurements to those obtained by the Singular Value Decomposition (SVD) method, and confirm that foreground subtraction with fastica is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and fastica are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping datasets.
The correlation study of parallel feature extractor and noise reduction approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria
2015-05-15
This paper presents a literature review covering techniques for developing a parallel feature extractor and its correlation with noise reduction approaches for low-light-intensity images. Low-light-intensity images are normally darker and of low contrast. Without proper handling techniques, such images regularly lead to misperception of objects and textures and an inability to segment them. The resulting visual illusions often lead to disorientation, user fatigue, and poor detection and classification performance by humans and computer algorithms. Noise reduction (NR) is therefore an essential step preceding other image processing steps such as edge detection, image segmentation, image compression, etc. A parallel feature extractor (PFE), meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract the features. Working on low-light-intensity images makes the PFE face challenges and depend closely on the quality of its pre-processing steps. Some papers have suggested many well-established NR and PFE strategies; however, only a few resources have suggested or mentioned the correlation between them. This paper reviews the best approaches of NR and PFE with a detailed explanation of the suggested correlation. This finding may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches, and algorithms, we present the correlation study between NR and PFE that can be useful for the development and enhancement of other existing PFEs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyde, Derek E.; Habets, Damiaan F.; Fox, Allan J.
2007-07-15
Digital subtraction angiography is being supplanted by three-dimensional imaging techniques in many clinical applications, leading to extensive use of maximum intensity projection (MIP) images to depict volumetric vascular data. The MIP algorithm produces intensity profiles that are different than conventional angiograms, and can also increase the vessel-to-tissue contrast-to-noise ratio. We evaluated the effect of the MIP algorithm in a clinical application where quantitative vessel measurement is important: internal carotid artery stenosis grading. Three-dimensional computed rotational angiography (CRA) was performed on 26 consecutive symptomatic patients to verify an internal carotid artery stenosis originally found using duplex ultrasound. These volumes of data were visualized using two different postprocessing projection techniques: MIP and digitally reconstructed radiographic (DRR) projection. A DRR is a radiographic image simulating a conventional digitally subtracted angiogram, but it is derived computationally from the same CRA dataset as the MIP. By visualizing a single volume with two different projection techniques, the postprocessing effect of the MIP algorithm is isolated. Vessel measurements were made, according to the NASCET guidelines, and percentage stenosis grades were calculated. The paired t-test was used to determine if the measurement difference between the two techniques was statistically significant. The CRA technique provided an isotropic voxel spacing of 0.38 mm. The MIPs and DRRs had a mean signal-difference-to-noise-ratio of 30:1 and 26:1, respectively. Vessel measurements from MIPs were, on average, 0.17 mm larger than those from DRRs (P<0.0001). The NASCET-type stenosis grades tended to be underestimated on average by 2.4% with the MIP algorithm, although this was not statistically significant (P=0.09). The mean interobserver variability (standard deviation) of both the MIP and DRR images was 0.35 mm. It was concluded that the MIP algorithm slightly increased the apparent dimensions of the arteries, when applied to these intra-arterial CRA images. This subpixel increase was smaller than both the voxel size and interobserver variability, and was therefore not clinically relevant.
Rapid prototyping--when virtual meets reality.
Beguma, Zubeda; Chhedat, Pratik
2014-01-01
Rapid prototyping (RP) describes the customized production of solid models using 3D computer data. Over the past decade, advances in RP have continued to evolve, resulting in the development of new techniques that have been applied to the fabrication of various prostheses. RP fabrication technologies include stereolithography (SLA), fused deposition modeling (FDM), computer numerical controlled (CNC) milling, and, more recently, selective laser sintering (SLS). The applications of RP techniques for dentistry include wax pattern fabrication for dental prostheses, dental (facial) prostheses mold (shell) fabrication, and removable dental prostheses framework fabrication. In the past, a physical plastic shape of the removable partial denture (RPD) framework was produced using an RP machine, and then used as a sacrificial pattern. Yet with the advent of the selective laser melting (SLM) technique, RPD metal frameworks can be directly fabricated, thereby omitting the casting stage. This new approach can also generate the wax pattern for facial prostheses directly, thereby reducing labor-intensive laboratory procedures. Many people stand to benefit from these new RP techniques for producing various forms of dental prostheses, which in the near future could transform traditional prosthodontic practices.
NASA Technical Reports Server (NTRS)
Hwu, Shian U.; Kelley, James S.; Panneton, Robert B.; Arndt, G. Dickey
1995-01-01
In order to estimate the RF radiation hazards to astronauts and electronics equipment due to various Space Station transmitters, the electric fields around the various Space Station antennas are computed using rigorous Computational Electromagnetics (CEM) techniques. The Method of Moments (MoM) was applied to the UHF and S-band low gain antennas. The Aperture Integration (AI) method and the Geometrical Theory of Diffraction (GTD) method were used to compute the electric field intensities for the S- and Ku-band high gain antennas. As a result of this study, the regions in which the electric fields exceed the specified exposure levels for the Extravehicular Mobility Unit (EMU) electronics equipment and Extravehicular Activity (EVA) astronaut are identified for various Space Station transmitters.
Tofuku, Katsuhiro; Koga, Hiroaki; Komiya, Setsuro
2015-04-01
We aimed to evaluate the value of single-photon emission computed tomography (SPECT)/computed tomography (CT) for the diagnosis of sacroiliac joint (SIJ) dysfunction. SPECT/CT was performed in 32 patients with severe SIJ dysfunction, who did not respond to 1-year conservative treatment and had a score of >4 points on a 10-cm visual analog scale. We investigated the relationship between the presence of severe SIJ dysfunction and tracer accumulation, as confirmed by SPECT/CT. In cases of bilateral SIJ dysfunction, we also compared the intensity of tracer accumulation on each side. Moreover, we examined the relationship between the intensity of tracer accumulation and the different treatments the patients subsequently received. All 32 patients with severe SIJ dysfunction had tracer accumulation with a standardized uptake value (SUV) of >2.2 (mean SUV 4.7). In the 19 patients with lateralized symptom intensity, mean SUVs of the dominant side were significantly higher than those of the nondominant side. In 10 patients with no lateralization, the difference in the SUVs between sides was <0.6. Patients exhibiting higher levels of tracer accumulation required more advanced treatment. Patients with higher levels of tracer accumulation had greater symptom severity and also required more advanced treatment. Thus, we believe that SPECT/CT may be a suitable supplementary diagnostic modality for SIJ dysfunction as well as a useful technique for predicting the prognosis of this condition.
A video coding scheme based on joint spatiotemporal and adaptive prediction.
Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken
2009-05-01
We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses a Karhunen-Loeve Transform (KLT)/joint spatiotemporal prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and is less computationally intensive. Because of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed.
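The image-dependent color transform amounts to projecting the RGB pixels of a frame onto the eigenvectors of their own covariance matrix, as in this sketch (a generic KLT, not the authors' coder):

```python
import numpy as np

def klt_color_transform(frame):
    """Rotate RGB pixels onto the eigenvectors of their own covariance
    (the Karhunen-Loeve basis), decorrelating the color planes before
    coding; returns the coefficients plus the basis and mean needed to
    invert the transform at the decoder."""
    pixels = frame.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # principal component first
    basis = eigvecs[:, order]
    coeffs = (pixels - mean) @ basis
    return coeffs.reshape(frame.shape), basis, mean
```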
Early years of Computational Statistical Mechanics
NASA Astrophysics Data System (ADS)
Mareschal, Michel
2018-05-01
Evidence that a model of hard spheres exhibits a first-order solid-fluid phase transition was provided in the late fifties by two new numerical techniques known as Monte Carlo and Molecular Dynamics. This result can be considered as the starting point of computational statistical mechanics: at the time, it was a confirmation of a counter-intuitive (and controversial) theoretical prediction by J. Kirkwood. It necessitated an intensive collaboration between the Los Alamos team, with Bill Wood developing the Monte Carlo approach, and the Livermore group, where Berni Alder was inventing Molecular Dynamics. This article tells how it happened.
A micro-hydrology computation ordering algorithm
NASA Astrophysics Data System (ADS)
Croley, Thomas E.
1980-11-01
Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in the resolution of the watershed. Once these are determined, the watershed is broken into sub-areas which each have essentially spatially uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error-prone, sometimes storage intensive, and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering, and coding scheme. This scheme and the algorithm are detailed in logic flow charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
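The automatic ordering idea can be illustrated with a standard topological sort over the node drainage graph; the sketch below (not the paper's algorithm, which uses its own node numbering and coding scheme) schedules each sub-area only after all of its upstream contributors:

```python
from collections import deque

def order_computations(downstream):
    """Kahn-style topological ordering of sub-area (node) computations.
    downstream: dict mapping node -> node it drains to (or None)."""
    indeg = {n: 0 for n in downstream}
    for n, d in downstream.items():
        if d is not None:
            indeg[d] += 1
    ready = deque(n for n, k in indeg.items() if k == 0)   # headwater nodes
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        d = downstream[n]
        if d is not None:
            indeg[d] -= 1
            if indeg[d] == 0:          # all upstream outputs are available
                ready.append(d)
    return order

# Example (hypothetical node names):
# order_computations({"A": "C", "B": "C", "C": "outlet", "outlet": None})
# -> ["A", "B", "C", "outlet"]
```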
Integral-moment analysis of the BATSE gamma-ray burst intensity distribution
NASA Technical Reports Server (NTRS)
Horack, John M.; Emslie, A. Gordon
1994-01-01
We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V(sub max). If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides for the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
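A simplified numerical analogue of the moment calculation, computing the mean, variance, and skewness of an observed peak-flux sample with bootstrap uncertainties, is sketched below; the paper works with integral moments of the convolved luminosity-number density distribution, so this is only a conceptual stand-in:

```python
import numpy as np

def intensity_moments(peak_fluxes):
    """Low-order sample moments of a burst intensity distribution."""
    x = np.asarray(peak_fluxes, float)
    mean = x.mean()
    var = x.var(ddof=1)
    skew = np.mean(((x - mean) / np.sqrt(var)) ** 3)
    return mean, var, skew

def bootstrap_errors(peak_fluxes, n_boot=5000, seed=0):
    """Simple bootstrap uncertainties on the three moments."""
    rng = np.random.default_rng(seed)
    x = np.asarray(peak_fluxes, float)
    boots = np.array([intensity_moments(rng.choice(x, x.size, replace=True))
                      for _ in range(n_boot)])
    return boots.std(axis=0)
```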
High-Accuracy Ultrasound Contrast Agent Detection Method for Diagnostic Ultrasound Imaging Systems.
Ito, Koichi; Noro, Kazumasa; Yanagisawa, Yukari; Sakamoto, Maya; Mori, Shiro; Shiga, Kiyoto; Kodama, Tetsuya; Aoki, Takafumi
2015-12-01
An accurate method for detecting contrast agents using diagnostic ultrasound imaging systems is proposed. Contrast agents, such as microbubbles, passing through a blood vessel during ultrasound imaging are detected as blinking signals in the temporal axis, because their intensity value is constantly in motion. Ultrasound contrast agents are detected by evaluating the intensity variation of a pixel in the temporal axis. Conventional methods are based on simple subtraction of ultrasound images to detect ultrasound contrast agents. Even if the subject moves only slightly, a conventional detection method will introduce significant error. In contrast, the proposed technique employs spatiotemporal analysis of the pixel intensity variation over several frames. Experiments visualizing blood vessels in the mouse tail illustrated that the proposed method performs efficiently compared with conventional approaches. We also report that the new technique is useful for observing temporal changes in microvessel density in subiliac lymph nodes containing tumors. The results are compared with those of contrast-enhanced computed tomography. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
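A toy version of the spatiotemporal intensity-variation idea, flagging pixels whose intensity fluctuates strongly over a short temporal window, could look like the following (the window length and threshold are arbitrary assumptions, not the published method):

```python
import numpy as np

def bubble_map(frames, win=5, threshold=3.0):
    """Flag pixels whose intensity fluctuates strongly over a short
    temporal window relative to the background, a simple stand-in for
    the spatiotemporal blinking analysis described above.
    frames: array of shape (n_frames, H, W)."""
    n = frames.shape[0] - win + 1
    score = np.zeros(frames.shape[1:])
    for t in range(n):
        # keep the largest short-window temporal standard deviation per pixel
        score = np.maximum(score, frames[t:t + win].std(axis=0))
    background = np.median(score)
    return score > threshold * (background + 1e-9)
```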
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.
1994-01-01
Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom-VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware implementable parallel algorithms, low-power and high-speed VLSI designs and building block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high throughput map-data classification using feedforward neural networks, terrain based tactical movement planning using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two to four order of magnitude speed-up over conventional techniques and enables multiple assignments (many to many), not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four order of magnitude speed-up over optimized software approaches has been demonstrated.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
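The approximation methods are only named in the abstract; as a hedged sketch of the general idea (assumed notation, not the authors' code), a weighted-capacity-factor estimate multiplies the plant's hourly capacity factor by a normalized measure of system risk, such as loss-of-load probability over the highest-risk hours.

```python
import numpy as np

def weighted_capacity_factor(pv_output_mw, nameplate_mw, risk_weight):
    """Approximate capacity value as a risk-weighted capacity factor.

    pv_output_mw : hourly PV generation (MW), one value per hour of the year
    nameplate_mw : plant nameplate capacity (MW)
    risk_weight  : hourly system-risk weights (e.g. normalized LOLP); hours with
                   negligible reliability risk should carry zero weight
    """
    cf = np.asarray(pv_output_mw, float) / nameplate_mw   # hourly capacity factor
    w = np.asarray(risk_weight, float)
    w = w / w.sum()                                        # normalize the weights
    return float(np.sum(w * cf))                           # fraction of nameplate
```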
Advanced decision aiding techniques applicable to space
NASA Technical Reports Server (NTRS)
Kruchten, Robert J.
1987-01-01
RADC has had an intensive program to show the feasibility of applying advanced technology to Air Force decision aiding situations. Some aspects of the program, such as Satellite Autonomy, are directly applicable to space systems. For example, RADC has shown the feasibility of decision aids that combine the advantages of laser disks and computer generated graphics; decision aids that interface object-oriented programs with expert systems; decision aids that solve path optimization problems; etc. Some of the key techniques that could be used in space applications are reviewed. Current applications are reviewed along with their advantages and disadvantages, and examples are given of possible space applications. The emphasis is to share RADC experience in decision aiding techniques.
Hierarchical tone mapping for high dynamic range image visualization
NASA Astrophysics Data System (ADS)
Qiu, Guoping; Duan, Jiang
2005-07-01
In this paper, we present a computationally efficient, practically easy-to-use tone mapping technique for the visualization of high dynamic range (HDR) images on low dynamic range (LDR) reproduction devices. The new method, termed the hierarchical nonlinear linear (HNL) tone-mapping operator, maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to their allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that our new operator can be used for the effective enhancement of ordinary images.
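Based on the two steps described above, a minimal sketch of an HNL-style operator might look as follows; the interval count, level budget, and the power-law allocation parameter are assumptions, and the paper's exact single-parameter formula is not reproduced here.

```python
import numpy as np

def hnl_tone_map(hdr, n_intervals=64, levels=256, alpha=0.75):
    """Hierarchical nonlinear/linear tone mapping (illustrative sketch).

    Step 1: split the HDR range into intervals and allocate display levels
            to each interval according to (a power of) its pixel density.
    Step 2: map each interval linearly onto its allocated levels.
    """
    flat = hdr.ravel().astype(float)
    edges = np.linspace(flat.min(), flat.max(), n_intervals + 1)
    hist, _ = np.histogram(flat, bins=edges)

    # nonlinear allocation: dense intervals get more display levels;
    # alpha < 1 keeps one dominant interval from taking almost everything
    weights = hist.astype(float) ** alpha
    alloc = weights / weights.sum() * (levels - 1)
    starts = np.concatenate(([0.0], np.cumsum(alloc)[:-1]))

    # linear mapping inside each interval
    idx = np.clip(np.digitize(flat, edges) - 1, 0, n_intervals - 1)
    lo, hi = edges[idx], edges[idx + 1]
    frac = np.where(hi > lo, (flat - lo) / (hi - lo), 0.0)
    ldr = starts[idx] + frac * alloc[idx]
    return np.clip(ldr.reshape(hdr.shape), 0, levels - 1).astype(np.uint8)
```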
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: a bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model with a two-stage algorithm that alternately minimizes with respect to the aperture shapes and the beam intensities. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling of the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with a non-monotone line search. We further improve the proposed algorithm by incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
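The abstract names a gradient projection method with a non-monotone line search for the intensity subproblem; the sketch below shows only the basic projected-gradient step under a box constraint on the intensities. The objective, step rule, and bounds are placeholders, not the authors' model, and a simple monotone backtracking rule stands in for the non-monotone search.

```python
import numpy as np

def project_box(x, lower=0.0, upper=None):
    """Project beam intensities onto the feasible box [lower, upper]."""
    x = np.maximum(x, lower)
    return x if upper is None else np.minimum(x, upper)

def projected_gradient(f, grad_f, x0, step=1.0, upper=10.0, iters=200):
    """Plain projected-gradient loop with backtracking (illustrative only).

    f, grad_f : callables returning the objective value and its gradient
    x0        : initial beam intensities
    """
    x = project_box(np.asarray(x0, float), upper=upper)
    for _ in range(iters):
        g = grad_f(x)
        t = step
        x_new = x
        while t > 1e-8:
            cand = project_box(x - t * g, upper=upper)
            if f(cand) <= f(x):      # monotone acceptance for simplicity;
                x_new = cand         # the paper uses a non-monotone rule
                break
            t *= 0.5
        x = x_new
    return x
```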
Procedural wound geometry and blood flow generation for medical training simulators
NASA Astrophysics Data System (ADS)
Aras, Rifat; Shen, Yuzhong; Li, Jiang
2012-02-01
Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. Similar to any other training scenarios, computer simulations can be effective means for wound treatment training purposes. For immersive and high fidelity virtual reality applications, realistic 3D models are key components. However, creation of such models is a labor intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need of labor intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a virtual surgery scene.
Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT
Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi
2014-01-01
Rationale and Objectives: Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as a risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods: Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results: We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under the ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions: Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
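The noise-suppression idea, exploiting phase-space redundancy through a low-rank assumption, can be caricatured with a truncated-SVD denoiser; this is a drastic simplification of the paper's matrix-completion formulation and is included only to illustrate the low-rank projection step.

```python
import numpy as np

def low_rank_denoise(M, rank):
    """Project a noisy matrix onto the set of rank-`rank` matrices via SVD.

    In WDD-style processing, M would be a (reshaped) noisy phase-space matrix
    whose clean counterpart is assumed to have low rank.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0                     # keep only the leading singular values
    return (U * s) @ Vt
```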
Design of wideband solar ultraviolet radiation intensity monitoring and control system
NASA Astrophysics Data System (ADS)
Ye, Linmao; Wu, Zhigang; Li, Yusheng; Yu, Guohe; Jin, Qi
2009-08-01
Based on single-chip microcomputer (SCM) and computer communication techniques, the system is composed of chips such as the ATML89C51 and ADL0809, integrated circuits, and UV radiation sensors, and is designed to monitor and control the UV index. The system automatically collects UV index data, analyzes and checks the historical database, and studies the pattern of UV radiation in the region.
Supercomputer applications in molecular modeling.
Gund, T M
1988-01-01
An overview of the functions performed by molecular modeling is given. Molecular modeling techniques benefiting from supercomputing are described, namely conformational search, deriving bioactive conformations, pharmacophoric pattern searching, receptor mapping, and electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding, is also examined. The current status of supercomputing and supercomputer resources is discussed.
Miyazaki, Yoshiaki; Tabata, Nobuyuki; Taroura, Tomomi; Shinozaki, Kenji; Kubo, Yuichiro; Tokunaga, Eriko; Taguchi, Kenichi
We propose a computer-aided diagnostic (CAD) system that uses time-intensity curves to distinguish between benign and malignant mammary tumors. Many malignant tumors show a washout pattern in their time-intensity curves. We therefore designed a program that automatically detects the position with the strongest washout effect by extracting only the washout area of the tumor with a subtraction technique and scanning the data with a 2×2-pixel region of interest (ROI). Operation of this independently developed program was verified using a phantom system that simulated tumors. In three cases of malignant tumors, the washout pattern detection rate in images with a manually set ROI was ≤6%, whereas the detection rate with our novel method was 100%. In one case of a benign tumor, the same method detected no washout effect and identified a persistent pattern. Thus, the distinction between benign and malignant tumors made with our method was completely consistent with the pathological diagnoses. Our novel method is therefore effective for differentiating between benign and malignant mammary tumors in dynamic magnetic resonance images.
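As a hedged illustration of the scanning step described above (the washout criterion and the use of the peak-to-last-frame drop are assumptions, not the authors' exact rule), the sketch below scans 2×2 ROIs of a dynamic series and returns the position whose time-intensity curve shows the strongest post-peak decline.

```python
import numpy as np

def strongest_washout_roi(series):
    """Find the 2x2 ROI with the strongest washout in a dynamic series.

    series : ndarray of shape (T, H, W), post-contrast intensity over time
    Returns ((row, col) of the ROI's top-left corner, washout magnitude).
    """
    T, H, W = series.shape
    best, best_pos = 0.0, (0, 0)
    for r in range(H - 1):
        for c in range(W - 1):
            tic = series[:, r:r + 2, c:c + 2].mean(axis=(1, 2))  # time-intensity curve
            peak = int(np.argmax(tic))
            if peak < T - 1:
                washout = tic[peak] - tic[-1]   # drop from peak to the last frame
                if washout > best:
                    best, best_pos = washout, (r, c)
    return best_pos, best
```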
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
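One of the accelerations named above, replacing linear searches with binary versions, is generic enough to show in a few lines; the grid and variable names below are illustrative and are not taken from the ITS source.

```python
import numpy as np

def interval_index_linear(grid, x):
    """Original pattern: linear scan for the interval of a sorted grid containing x."""
    for i in range(len(grid) - 1):
        if grid[i] <= x < grid[i + 1]:
            return i
    return len(grid) - 2

def interval_index_binary(grid, x):
    """Accelerated pattern: binary search on the same sorted grid."""
    return int(np.clip(np.searchsorted(grid, x, side='right') - 1, 0, len(grid) - 2))
```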
Plenoptic Ophthalmoscopy: A Novel Imaging Technique.
Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason
2016-11-01
This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.
Laminography using resonant neutron attenuation for detection of drugs and explosives
NASA Astrophysics Data System (ADS)
Loveman, R. A.; Feinstein, R. L.; Bendahan, J.; Gozani, T.; Shea, P.
1997-02-01
Resonant neutron attenuation has been shown to be usable for assaying the elements that constitute explosives, cocaine, and heroin. By careful analysis of attenuation measurements, the presence or absence of explosives can be determined. Simple two-dimensional radiographic techniques only give results for areal density and consequently are limited in their effectiveness. Classical tomographic techniques are both computationally very intensive and place strict requirements on the quality and amount of data acquired. These requirements and computations take time and are likely to be very difficult to perform in real time. Simulation studies described in this article have shown that laminographic image reconstruction can be used effectively with resonant neutron attenuation measurements to interrogate luggage for explosives or drugs. The system described in this article is capable of pseudo-three-dimensional image reconstruction of all of the elemental densities pertinent to explosive and drug detection.
Multilayered analog optical differentiating device: performance analysis on structural parameters.
Wu, Wenhui; Jiang, Wei; Yang, Jiang; Gong, Shaoxiang; Ma, Yungui
2017-12-15
Analog optical devices (AODs) able to perform mathematical computations have recently gained strong research interest for their potential applications as accelerating hardware in traditional electronic computers. The performance of these wavefront-processing devices is primarily decided by the accuracy of the angular spectral engineering. In this Letter, we show that the multilayer technique could be a promising method to flexibly design AODs according to the input wavefront conditions. As examples, various Si-SiO2-based multilayer films are designed that can precisely perform the second-order differentiation of input wavefronts with different Fourier spectrum widths. The minimum number and thickness uncertainty of sublayers required for the device performance are discussed. A technique of rescaling the Fourier spectrum intensity is proposed in order to further improve the practical feasibility. These results are thought to be instrumental for the development of AODs.
Computer-aided boundary delineation of agricultural lands
NASA Technical Reports Server (NTRS)
Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt
1989-01-01
The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.
Collaborative Research: Tomographic imaging of laser-plasma structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downer, Michael
The interaction of intense short laser pulses with ionized gases, or plasmas, underlies many applications such as acceleration of elementary particles, production of energy by laser fusion, generation of x-ray and far-infrared “terahertz” pulses for medical and materials probing, remote sensing of explosives and pollutants, and generation of guide stars. Such laser-plasma interactions create tiny electron density structures (analogous to the wake behind a boat) inside the plasma in the shape of waves, bubbles, and filaments that move at the speed of light and evolve as they propagate. Prior to recent work by the PI of this proposal, detailed knowledge of such structures came exclusively from intensive computer simulations. Now “snapshots” of these elusive, light-velocity structures can be taken in the laboratory using a dynamic variant of holography, the technique used to produce ID cards and DVDs, and a dynamic variant of tomography, the technique used in medicine to image internal bodily organs. These fast visualization techniques are important for understanding, improving, and scaling the above-mentioned applications of laser-plasma interactions. In this project, we accomplished three things: 1) We took holographic pictures of a laser-driven plasma wave in the act of accelerating electrons to high energy and used computer simulations to understand the pictures. 2) We used results from this experiment to optimize the performance of the accelerator and the brightness of the x-rays that it emits. These x-rays will be useful for medical and materials science applications. 3) We made technical improvements to the holographic technique that enable us to see finer details in the recorded pictures. Four refereed journal papers were published, and two students earned PhDs and moved on to scientific careers in US National Laboratories based on their work under this project.
NASA Astrophysics Data System (ADS)
Alvarenga de Moura Meneses, Anderson; Giusti, Alessandro; de Almeida, André Pereira; Parreira Nogueira, Liebert; Braz, Delson; Cely Barroso, Regina; deAlmeida, Carlos Eduardo
2011-12-01
Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with a high spatial resolution for the qualitative and quantitative analyses of biomedical samples. The research on applications of segmentation algorithms to SR-μCT is an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beamline at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to the intensity variation of phase contrast, an important and sometimes indispensable effect in certain biomedical applications, although it impairs the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with an average Dice similarity coefficient of 99.88% for bony tissue in Wistar rat rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (Chagas disease vector) with EMANNs, relative to manual segmentation. EMvGC and EMANNs cope with the task of performing segmentation in images with intensity variation due to phase contrast effects, showing superior performance in comparison to conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, which is also discussed in the present article.
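The Dice similarity coefficient used for validation above is straightforward to compute; the snippet below shows the standard formula on binary masks (not the authors' evaluation code).

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, ref).sum() / denom
```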
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; ...
2017-09-14
In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
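For readers unfamiliar with the contrast drawn above, the snippet compares a Lanczos-type solver with a preconditioned block iterative solver (LOBPCG) on a generic sparse symmetric matrix using SciPy; the test matrix, block size, and diagonal preconditioner are placeholders and have nothing to do with the actual nuclear CI implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh, lobpcg

n, k = 2000, 8
A = sp.random(n, n, density=1e-3, format='csr')
A = A + A.T + sp.diags(np.arange(1, n + 1, dtype=float))   # symmetric test matrix

# Lanczos-type solver (ARPACK): smallest algebraic eigenvalues
vals_lanczos, _ = eigsh(A, k=k, which='SA')

# Block iterative solver (LOBPCG) with a crude diagonal (Jacobi) preconditioner
X0 = np.random.default_rng(0).standard_normal((n, k))       # block of starting guesses
M = sp.diags(1.0 / A.diagonal())
vals_block, _ = lobpcg(A, X0, M=M, largest=False, maxiter=200, tol=1e-6)

print(np.sort(vals_lanczos))
print(np.sort(vals_block))
```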
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
Computational ghost imaging using deep learning
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi
2018-04-01
Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
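The correlation step at the heart of CGI can be written in a few lines; the sketch below reconstructs an image from known random patterns and bucket measurements (the pattern count, object, and normalization are illustrative). The deep-learning denoising stage described in the paper is not reproduced here.

```python
import numpy as np

def cgi_reconstruct(patterns, bucket):
    """Conventional CGI reconstruction by correlation.

    patterns : ndarray (N, H, W) of known random illumination patterns
    bucket   : ndarray (N,) of single-pixel (bucket) detector readings
    """
    b = bucket - bucket.mean()
    return np.tensordot(b, patterns - patterns.mean(axis=0), axes=1) / len(bucket)

# toy usage: simulate bucket readings from a known object, then reconstruct
rng = np.random.default_rng(1)
obj = np.zeros((32, 32)); obj[8:24, 12:20] = 1.0
patterns = rng.random((4000, 32, 32))
bucket = (patterns * obj).sum(axis=(1, 2))
img = cgi_reconstruct(patterns, bucket)
```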
Tivnan, Matthew; Gurjar, Rajan; Wolf, David E; Vishwanath, Karthik
2015-08-12
Diffuse Correlation Spectroscopy (DCS) is a well-established optical technique that has been used for non-invasive measurement of blood flow in tissues. Instrumentation for DCS includes a correlation device that computes the temporal intensity autocorrelation of a coherent laser source after it has undergone diffuse scattering through a turbid medium. Typically, the signal acquisition and its autocorrelation are performed by a correlation board. These boards have dedicated hardware to acquire and compute intensity autocorrelations of a rapidly varying input signal and usually are quite expensive. Here we show that a Raspberry Pi minicomputer can acquire and store a rapidly varying time-signal with high fidelity. We show that this signal collected by a Raspberry Pi device can be processed numerically to yield intensity autocorrelations well suited for DCS applications. DCS measurements made using the Raspberry Pi device were compared to those acquired using a commercial hardware autocorrelation board to investigate the stability, performance, and accuracy of the data acquired in controlled experiments. This paper represents a first step toward lowering the instrumentation cost of a DCS system and may help DCS become more widely used in biomedical applications.
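As a hedged illustration of the numerical post-processing mentioned above (not the authors' Raspberry Pi code), the snippet computes a normalized intensity autocorrelation g2(τ) from a sampled intensity trace.

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2."""
    I = np.asarray(intensity, float)
    mean_sq = I.mean() ** 2
    return np.array([np.mean(I[:len(I) - lag] * I[lag:]) / mean_sq
                     for lag in range(1, max_lag + 1)])
```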
Antenna analysis using neural networks
NASA Technical Reports Server (NTRS)
Smith, William T.
1992-01-01
Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
Efficient volumetric estimation from plenoptic data
NASA Astrophysics Data System (ADS)
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSFs). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
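The computational advantage claimed above comes from doing the deconvolution in the Fourier domain; the sketch shows a generic 3-D Wiener-type deconvolution of a focal stack with a known PSF. The regularization constant is an assumption, and this is not the authors' algorithm verbatim.

```python
import numpy as np

def wiener_deconvolve(stack, psf, eps=1e-2):
    """FFT-based Wiener deconvolution of a 3-D focal stack.

    stack : ndarray (Z, Y, X), refocused volume (e.g. from a plenoptic camera)
    psf   : ndarray of the same shape, system point spread function
    eps   : regularization constant controlling noise amplification
    """
    S = np.fft.fftn(stack)
    H = np.fft.fftn(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + eps)   # Wiener filter in the Fourier domain
    return np.real(np.fft.ifftn(S * W))
```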
NASA Astrophysics Data System (ADS)
De Siena, Luca; Rawlinson, Nicholas
2016-04-01
Non-standard seismic imaging (velocity, attenuation, and scattering tomography) of the North Sea basins using unexploited seismic intensities from previous passive and active surveys is key to better imaging and monitoring of fluids in the subsurface. These intensities provide unique solutions to the problem of locating and tracking gas/fluid movements in the crust and of depicting sub-basalt and sub-intrusive structures in volcanic reservoirs. The proposed techniques have been tested on volcanic islands (Deception Island) and have proved effective at monitoring fracture opening, imaging buried fluid-filled bodies, and tracking water/gas interfaces. These novel seismic attributes are modelled in space and time and connected with the lithology of the sampled medium, specifically density and permeability, with a novel computational code with strong commercial potential as a key output.
Calculating intensities using effective Hamiltonians in terms of Coriolis-adapted normal modes.
Karthikeyan, S; Krishnan, Mangala Sunder; Carrington, Tucker
2005-01-15
The calculation of rovibrational transition energies and intensities is often hampered by the fact that vibrational states are strongly coupled by Coriolis terms. Because it invalidates the use of perturbation theory for the purpose of decoupling these states, the coupling makes it difficult to analyze spectra and to extract information from them. One either ignores the problem and hopes that the effect of the coupling is minimal, or one is forced to diagonalize effective rovibrational matrices (rather than effective rotational matrices). In this paper we apply a procedure, based on a quantum mechanical canonical transformation, for deriving decoupled effective rotational Hamiltonians. In previous papers we have used this technique to compute energy levels. In this paper we show that it can also be applied to determine intensities. The ideas are applied to the ethylene molecule.
NASA Technical Reports Server (NTRS)
Wu, C. Y. R.; Ogawa, H. S.
1986-01-01
The sensitivity of the curve-of-growth (COG) technique utilized in rocket measurements to determine the line profiles of the solar He I resonance emissions is theoretically examined with attention to the possibility of determining the line core shape using this technique. The line at 584.334 A is chosen as an illustration. Various possible source functions of the solar line have been assumed in the computation of the integrated transmitted intensity. A recent observational data set obtained by the present researchers is used as the constraint of the computation. It is confirmed that the COG technique can indeed provide a good measurement of the solar line width. However, to obtain detailed knowledge of the solar profile at line center and in the core region, (1) it is necessary to be able to carry out relative solar flux measurements with a 1-percent or better precision, and (2) it must be possible to measure the He gas pressure in the absorption cell to lower than 0.1 mtorr. While these numbers apply specifically to the present geometry, the results are readily scaled to other COG measurements using other experimental parameters.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
An interactive computer code for simulation of a high-intensity turbulent combustor as a single-point inhomogeneous stirred reactor was developed from an existing batch-processing computer code, CDPSR. The interactive CDPSR code was used as a guide for interpretation and direction of DOE-sponsored companion experiments utilizing a Xenon tracer with optical laser diagnostic techniques to experimentally determine the appropriate mixing frequency, and for validation of CDPSR as a mixing-chemistry model for a laboratory jet-stirred reactor. The coalescence-dispersion model for finite-rate mixing was incorporated into an existing interactive code, AVCO-MARK I, to enable simulation of a combustor as a modular array of stirred-flow and plug-flow elements, each having a prescribed finite mixing frequency, or axial distribution of mixing frequency, as appropriate. The speed and reliability of the batch kinetics integrator code CREKID were further increased by rewriting it in vectorized form for execution on a vector or parallel processor, and by incorporating numerical techniques that enhance execution speed by permitting specification of a very low accuracy tolerance.
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
Monitoring Engine Vibrations And Spectrum Of Exhaust
NASA Technical Reports Server (NTRS)
Martinez, Carol L.; Randall, Michael R.; Reinert, John W.
1991-01-01
Real-time computation of the intensities of peaks in the visible-light emission spectrum of the exhaust, combined with real-time spectrum analysis of vibrations, forms a developmental monitoring technique providing up-to-the-second information on the condition of critical bearings in an engine. The technique was conceived to monitor the condition of bearings in the turbopump supplying oxygen to the Space Shuttle main engine, based on observations that both vibrations in bearings and the intensities of visible light emitted at specific wavelengths by the exhaust plume of the engine indicate wear and incipient failure of bearings. It is applicable to monitoring the "health" of other machinery via spectra of vibrations and electromagnetic emissions from exhausts. The concept is related to one described in "Monitoring Bearing Vibrations For Signs Of Damage" (MFS-29734).
Lattice surgery on the Raussendorf lattice
NASA Astrophysics Data System (ADS)
Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco
2018-07-01
Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that ease physical hardware implementation. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable to braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.
A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
Safety Metrics for Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Leveson, Nancy G; Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
Visual saliency in MPEG-4 AVC video stream
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.
2015-03-01
Visual saliency maps have already proved their efficiency in a large variety of image/video communication applications, ranging from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (such as color, intensity, orientation, and motion) computed from the pixel representation of the visual content. This paper summarizes and extends our previous work devoted to the definition of a saliency map extracted solely from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of static and dynamic maps. The static saliency map is in its turn a combination of intensity, color, and orientation feature maps. Despite the particular way in which all these elementary maps are computed, the fusion technique that combines them plays a critical role in the final result and is the object of the proposed study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 to combine static and dynamic features) are investigated. The performances of the obtained maps are evaluated on a public database organized at IRCCyN by computing two objective metrics: the Kullback-Leibler divergence and the area under the curve.
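The fusion step studied above can take many forms; the snippet below gives two of the simplest candidates (a weighted sum and a multiplicative combination) for fusing normalized static and dynamic maps. The 48 specific formulas evaluated in the paper are not reproduced, and the weights shown are placeholders.

```python
import numpy as np

def normalize(m):
    """Rescale a feature map to [0, 1]."""
    m = np.asarray(m, float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_static(intensity, color, orientation, w=(1/3, 1/3, 1/3)):
    """Static map as a weighted sum of normalized feature maps."""
    return (w[0] * normalize(intensity) + w[1] * normalize(color)
            + w[2] * normalize(orientation))

def fuse_static_dynamic(static, dynamic, alpha=0.5, multiplicative=False):
    """Combine static and dynamic maps additively or multiplicatively."""
    s, d = normalize(static), normalize(dynamic)
    return s * d if multiplicative else alpha * s + (1 - alpha) * d
```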
Lee, M-Y; Chang, C-C; Ku, Y C
2008-01-01
Fixed dental restoration by conventional methods greatly relies on the skill and experience of the dental technician. The quality and accuracy of the final product depends mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures, and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT) was designed in-house and proposed for the design of custom dental restoration. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT scanned digital information. A force feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.
NASA Astrophysics Data System (ADS)
Von Korff, J.; Demorest, P.; Heien, E.; Korpela, E.; Werthimer, D.; Cobb, J.; Lebofsky, M.; Anderson, D.; Bankay, B.; Siemion, A.
2013-04-01
We are performing a transient, microsecond timescale radio sky survey, called "Astropulse," using the Arecibo telescope. Astropulse searches for brief (0.4 μs to 204.8 μs ), wideband (relative to its 2.5 MHz bandwidth) radio pulses centered at 1420 MHz. Astropulse is a commensal (piggyback) survey, and scans the sky between declinations of -1.°33 and 38.°03. We obtained 1540 hr of data in each of seven beams of the ALFA receiver, with two polarizations per beam. The data are one-bit complex sampled at the Nyquist limit of 0.4 μs per sample. Examination of timescales on the order of microseconds is possible because we used coherent dedispersion, a technique that has frequently been used for targeted observations, but has never been associated with a radio sky survey. The more usual technique, incoherent dedispersion, cannot resolve signals below a minimum timescale which depends on the signal's dispersion measure (DM) and frequency. However, coherent dedispersion requires more intensive computation than incoherent dedispersion. The required processing power was provided by BOINC, the Berkeley Open Infrastructure for Network Computing. BOINC is a distributed computing system, allowing us to utilize hundreds of thousands of volunteers' computers to perform the necessary calculations for coherent dedispersion. Astrophysical events that might produce brief radio pulses include giant pulses from pulsars, rotating radio transients, exploding primordial black holes, or new sources yet to be imagined. Radio frequency interference and noise contaminate the data; these are mitigated by a number of techniques including multi-polarization correlation, DM repetition detection, and frequency profiling.
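Coherent dedispersion, the technique contrasted above with incoherent dedispersion, multiplies the Fourier transform of the complex baseband voltage by the inverse of the interstellar dispersion transfer function; a minimal sketch follows. The center frequency, bandwidth, and DM are placeholders, the dispersion constant is quoted to its usual approximate value, and the sign of the chirp depends on the sampling convention.

```python
import numpy as np

KDM = 4.148808e3  # MHz^2 s per (pc cm^-3), conventional dispersion constant

def coherent_dedisperse(voltage, dm, f0_mhz, bw_mhz):
    """Coherently dedisperse a complex baseband voltage series.

    voltage : 1-D complex array sampled at the Nyquist rate (bw_mhz MHz)
    dm      : dispersion measure in pc cm^-3
    f0_mhz  : center frequency of the band in MHz
    bw_mhz  : bandwidth in MHz
    """
    n = len(voltage)
    f = np.fft.fftfreq(n, d=1.0 / bw_mhz)             # baseband offsets in MHz
    phase = 2.0 * np.pi * KDM * dm * f**2 / (f0_mhz**2 * (f0_mhz + f))
    chirp = np.exp(1j * phase)                         # sign depends on convention
    return np.fft.ifft(np.fft.fft(voltage) * chirp)
```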
Accelerated computer generated holography using sparse bases in the STFT domain.
Blinder, David; Schelkens, Peter
2018-01-22
Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku
2017-02-01
Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based primarily on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structure in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect the cavity-like structure with different radii. In the second step, support vector machine learning is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used for evaluating our proposed method. The average extraction rate was about 79.1% with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for the lung and a bronchoscope guidance system.
Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen
2006-04-01
Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
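As a toy illustration of the memory-locality theme discussed above (not the authors' optimized renderer), the sketch below computes an axis-aligned MIP slab by slab so that only a small block of the volume needs to be resident in cache at a time; the function name and block size are illustrative assumptions.

```python
import numpy as np

def mip_blocked(volume, block=16):
    """Axis-0 maximum intensity projection of a 3-D volume, processed in slabs
    to keep the working set small and cache-friendly (illustrative only)."""
    out = volume[0].copy()
    for start in range(0, volume.shape[0], block):
        np.maximum(out, volume[start:start + block].max(axis=0), out=out)
    return out
```

Real viewing-direction MIPs require resampling along arbitrary rays, which is where the cache-aware traversal and GPU texture-fetch strategies described in the paper pay off.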
NASA Astrophysics Data System (ADS)
Hirota, Koji
We demonstrate a computationally-efficient method for optical coherence elastography (OCE) based on the fringe washout effect in a spectral-domain OCT (SD-OCT) system. By sending short pulses of mechanical perturbation with ultrasound or a shock wave during the image acquisition of alternating depth profiles, we can extract a cross-sectional mechanical assessment of tissue in real time. This is achieved through a simple comparison of the intensity in adjacent depth profiles acquired during the perturbed and unperturbed states in order to quantify the degree of induced fringe washout. Although the results indicate that our OCE technique based on the fringe washout effect is sensitive enough to detect mechanical property changes in biological samples, there is some loss of sensitivity compared to previous techniques, traded for computational efficiency and minimal modification of both hardware and software in the OCT system. A tissue phantom study was carried out with agar samples of various densities to characterize our OCE technique. Young's modulus measurements were obtained with atomic force microscopy (AFM) to correlate with our OCE assessment. Knee cartilage samples from monosodium iodoacetate (MIA) rat models were utilized to replicate cartilage damage in a human model. Our proposed OCE technique, along with intensity analysis and AFM measurements, was applied to the MIA models to assess the damage. The results from both the phantom study and the MIA model study demonstrated the strong capability of the OCE technique to assess changes in mechanical properties. The correlation between the OCE measurements and the Young's modulus values showed that the stiffer material exhibited a smaller magnitude of fringe washout. This result is attributed to axial-motion-induced fringe washout: the scatterers in stiffer samples are displaced less in response to the external perturbation and therefore induce less fringe washout.
The MIT/OSO 7 catalog of X-ray sources - Intensities, spectra, and long-term variability
NASA Technical Reports Server (NTRS)
Markert, T. H.; Laird, F. N.; Clark, G. W.; Hearn, D. R.; Sprott, G. F.; Li, F. K.; Bradt, H. V.; Lewin, W. H. G.; Schnopper, H. W.; Winkler, P. F.
1979-01-01
This paper is a summary of the observations of the cosmic X-ray sky performed by the MIT 1-40-keV X-ray detectors on OSO 7 between October 1971 and May 1973. Specifically, mean intensities or upper limits of all third Uhuru or OSO 7 cataloged sources (185 sources) in the 3-10-keV range are computed. For those sources for which a statistically significant (greater than 20) intensity was found in the 3-10-keV band (138 sources), further intensity determinations were made in the 1-15-keV, 1-6-keV, and 15-40-keV energy bands. Graphs and other simple techniques are provided to aid the user in converting the observed counting rates to convenient units and in determining spectral parameters. Long-term light curves (counting rates in one or more energy bands as a function of time) are plotted for 86 of the brighter sources.
Gaussian representation of high-intensity focused ultrasound beams.
Soneson, Joshua E; Myers, Matthew R
2007-11-01
A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
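A compact way to write the Gaussian ansatz described above (the notation is ours, for illustration, and not necessarily the paper's): each harmonic n is assigned a transverse Gaussian profile whose width scales as the inverse square root of the harmonic number,

$$
p_n(r,z) \;\approx\; A_n(z)\,\exp\!\left[-\frac{r^2}{w_n(z)^2}\right],
\qquad
w_n(z) \;=\; \frac{w_1(z)}{\sqrt{n}},
$$

so that only the on-axis amplitudes A_n(z) and a single width function need to be propagated along the beam axis, rather than a full transverse grid for every harmonic.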
Hack, Erwin; Gundu, Phanindra Narayan; Rastogi, Pramod
2005-05-10
An innovative technique for reducing speckle noise and improving the intensity profile of the speckle correlation fringes is presented. The method is based on reducing the range of the modulation intensity values of the speckle interference pattern. After the fringe pattern is corrected adaptively at each pixel, a simple morphological filtering of the fringes is sufficient to obtain smoothed fringes. The concept is presented both analytically and by simulation by using computer-generated speckle patterns. The experimental verification is performed by using an amplitude-only spatial light modulator (SLM) in a conventional electronic speckle pattern interferometry setup. The optical arrangement for tuning a commercially available LCD array for amplitude-only behavior is described. The method of feedback to the LCD SLM to modulate the intensity of the reference beam in order to reduce the modulation intensity values is explained, and the resulting fringe pattern and increase in the signal-to-noise ratio are discussed.
Spacecraft thermal balance testing using infrared sources
NASA Technical Reports Server (NTRS)
Tan, G. B. T.; Walker, J. B.
1982-01-01
A thermal balance test (controlled flux intensity) on a simple black dummy spacecraft using IR lamps was performed and evaluated, the latter being aimed specifically at thermal mathematical model (TMM) verification. For reference purposes the model was also subjected to a solar simulation test (SST). The results show that the temperature distributions measured during IR testing for two different model attitudes under steady state conditions are reproducible with a TMM. The TMM test data correlation is not as accurate for IRT as for SST. Using the standard deviation of the temperature difference distribution (analysis minus test) the SST data correlation is better by a factor of 1.8 to 2.5. The lower figure applies to the measured and the higher to the computer-generated IR flux intensity distribution. Techniques of lamp power control are presented. A continuing work program is described which is aimed at quantifying the differences between solar simulation and infrared techniques for a model representing the thermal radiating surfaces of a large communications spacecraft.
The MST radar technique: Requirements for operational weather forecasting
NASA Technical Reports Server (NTRS)
Larsen, M. F.
1983-01-01
There is a feeling that the accuracy of mesoscale forecasts for spatial scales of less than 1000 km and time scales of less than 12 hours can be improved significantly if resources are applied to the problem in an intensive effort over the next decade. Since the most dangerous and damaging types of weather occur at these scales, there are major advantages to be gained if such a program is successful. The interest in improving short-term forecasting is evident. The technology at the present time is sufficiently developed, both in terms of new observing systems and the computing power to handle the observations, to warrant an intensive effort to improve storm-scale forecasting. An assessment of the extent to which the so-called MST radar technique fulfills the requirements for an operational mesoscale observing network is reviewed, and the extent to which improvements in various types of forecasting could be expected if such a network is put into operation is delineated.
Precision cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Fendt, William Ashton, Jr.
2009-09-01
Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries left unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. This work develops a novel technique both to avoid the use of approximate computational codes and to allow the application of new, more precise analysis methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.
Rapid insights from remote sensing in the geosciences
NASA Astrophysics Data System (ADS)
Plaza, Antonio
2015-03-01
The growing availability of capacity computing for atomistic materials modeling has encouraged the use of high-accuracy computationally intensive interatomic potentials, such as SNAP. These potentials also happen to scale well on petascale computing platforms. SNAP has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The computational cost per atom is much greater than that of simpler potentials such as Lennard-Jones or EAM, while the communication cost remains modest. We discuss a variety of strategies for implementing SNAP in the LAMMPS molecular dynamics package. We present scaling results obtained running SNAP on three different classes of machine: a conventional Intel Xeon CPU cluster; the Titan GPU-based system; and the combined Sequoia and Vulcan BlueGene/Q. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Admin. under Contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Aditya, M. R.; Hernina, R.; Rokhmatuloh
2017-12-01
Rapid development in Jakarta, which generates more impervious surface, has reduced the amount of rainfall infiltration into the soil layer and increased run-off. In some events, continuous high-intensity rainfall can create sudden floods in Jakarta City. This article used rainfall data for Jakarta during 10 February 2015 to compute rainfall intensity and then interpolated it with the ordinary kriging technique. The spatial distribution of rainfall intensity was then overlaid with run-off coefficients based on the land use type of the study area. The peak run-off within each cell resulting from the hydrologic rational model was then summed over the whole study area to generate the total peak run-off. For this study area, the land use types consisted of 51.9% industrial, 37.57% parks, and 10.54% residential, with estimated total peak run-offs of 6.04 m3/sec, 0.39 m3/sec, and 0.31 m3/sec, respectively.
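For reference, the hydrologic rational model used above reduces to a one-line computation; the function name and unit choices below are illustrative assumptions.

```python
def peak_runoff_m3s(c, i_mm_per_hr, area_km2):
    """Rational method: Q = 0.278 * C * i * A, with i in mm/h, A in km^2, Q in m^3/s."""
    return 0.278 * c * i_mm_per_hr * area_km2
```

For example, a 1 km² cell with an assumed run-off coefficient of C ≈ 0.8 under 20 mm/h rainfall yields roughly 0.278 × 0.8 × 20 × 1 ≈ 4.4 m³/s; summing such per-cell values over the study area gives the total peak run-off reported above.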
Study of Submicron Particle Size Distributions by Laser Doppler Measurement of Brownian Motion.
1984-10-29
NEW DISCOVERIES OR INVENTIONS. APPENDIX: COMPUTER SIMULATION OF THE BROWNIAN MOTION SENSOR SIGNALS. ... scattering regime by analysis of the scattered light intensity and particle mass (size) obtained using the Brownian motion sensor. Task V - By application of the Brownian motion sensor in a flat-flame burner, the contractor shall assess the application of this technique for in-situ sizing of submicron particles.
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
QoS support for end users of I/O-intensive applications using shared storage systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Marion Kei; Zhang, Xuechen; Jiang, Song
2011-01-19
I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While the performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to a meaningful performance guarantee such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; White, Stuart C.
1992-05-01
This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study to demonstrate the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method for producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla where the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-rendered lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for the 3-D reconstruction as well as cautionary language that should accompany the 3-D images.
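The "maximum theoretical fidelity" threshold mentioned above is simply the midpoint between the mean intensities of cortical bone and soft tissue; a minimal sketch (function names are ours, for illustration) is:

```python
import numpy as np

def midpoint_threshold(mean_cortical_bone, mean_soft_tissue):
    """Midpoint image-intensity threshold between average cortical bone and soft tissue."""
    return 0.5 * (mean_cortical_bone + mean_soft_tissue)

def bone_mask(ct_volume, threshold):
    """Binary bone mask that a shaded-surface or marching-cubes renderer would consume."""
    return np.asarray(ct_volume) >= threshold
```

Shifting the threshold away from this midpoint shrinks or dilates the rendered bone surface, which is exactly the source of the measurement errors the case study documents.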
NASA Astrophysics Data System (ADS)
Stone, S.; Parker, M. S.; Howe, B.; Lazowska, E.
2015-12-01
Rapid advances in technology are transforming nearly every field from "data-poor" to "data-rich." The ability to extract knowledge from this abundance of data is the cornerstone of 21st century discovery. At the University of Washington eScience Institute, our mission is to engage researchers across disciplines in developing and applying advanced computational methods and tools to real world problems in data-intensive discovery. Our research team consists of individuals with diverse backgrounds in domain sciences such as astronomy, oceanography and geology, with complementary expertise in advanced statistical and computational techniques such as data management, visualization, and machine learning. Two key elements are necessary to foster careers in data science: individuals with cross-disciplinary training in both method and domain sciences, and career paths emphasizing alternative metrics for advancement. We see persistent and deep-rooted challenges for the career paths of people whose skills, activities and work patterns don't fit neatly into the traditional roles and success metrics of academia. To address these challenges the eScience Institute has developed training programs and established new career opportunities for data-intensive research in academia. Our graduate students and post-docs have mentors in both a methodology and an application field. They also participate in coursework and tutorials to advance technical skill and foster community. Professional Data Scientist positions were created to support research independence while encouraging the development and adoption of domain-specific tools and techniques. The eScience Institute also supports the appointment of faculty who are innovators in developing and applying data science methodologies to advance their field of discovery. Our ultimate goal is to create a supportive environment for data science in academia and to establish global recognition for data-intensive discovery across all fields.
The NASA Energy Conservation Program
NASA Technical Reports Server (NTRS)
Gaffney, G. P.
1977-01-01
Large energy-intensive research and test equipment at NASA installations is identified, and methods for reducing energy consumption outlined. However, some of the research facilities are involved in developing more efficient, fuel-conserving aircraft, and tradeoffs between immediate and long-term conservation may be necessary. Major programs for conservation include: computer-based systems to automatically monitor and control utility consumption; a steam-producing solid waste incinerator; and a computer-based cost analysis technique to engineer more efficient heating and cooling of buildings. Alternate energy sources in operation or under evaluation include: solar collectors; electric vehicles; and ultrasonically emulsified fuel to attain higher combustion efficiency. Management support, cooperative participation by employees, and effective reporting systems for conservation programs, are also discussed.
Applications in Data-Intensive Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.
2010-04-01
This book chapter, to be published in Advances in Computers, Volume 78, in 2010 describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of the PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Varsani, A.; Genestreti, K. J.; Baumjohann, W.; Liu, Y.-H.
2018-05-01
A remote sensing technique to infer the local reconnection electric field based on in situ multipoint spacecraft observation at the reconnection separatrix is proposed. In this technique, the increment of the reconnected magnetic flux is estimated by integrating the in-plane magnetic field during the sequential observation of the separatrix boundary by multipoint measurements. We tested this technique by applying it to virtual observations in a two-dimensional fully kinetic particle-in-cell simulation of magnetic reconnection without a guide field and confirmed that the estimated reconnection electric field indeed agrees well with the exact value computed at the X-line. We then applied this technique to an event observed by the Magnetospheric Multiscale mission when crossing an energetic plasma sheet boundary layer during an intense substorm. The estimated reconnection electric field for this event is nearly 1 order of magnitude higher than a typical value of magnetotail reconnection.
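In symbols (our notation, for illustration only), the technique estimates the reconnection electric field from the rate of change of reconnected flux per unit length, with the flux increment obtained by integrating the in-plane magnetic field along the path swept between successive separatrix crossings:

$$
E_{\mathrm{rec}} \;\approx\; \frac{\Delta \Phi}{\Delta t},
\qquad
\Delta \Phi \;=\; \int_{\ell_1}^{\ell_2} B_{\text{in-plane}}\, \mathrm{d}\ell ,
$$

where Δt is the time between the two separatrix observations by the multipoint measurements. The key point is that the estimate is made remotely, at the separatrix, rather than at the X-line itself.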
Detecting and disentangling nonlinear structure from solar flux time series
NASA Technical Reports Server (NTRS)
Ashrafi, S.; Roszman, L.
1992-01-01
Interest in solar activity has grown in the past two decades for many reasons. Most importantly for flight dynamics, solar activity changes the atmospheric density, which has important implications for spacecraft trajectory and lifetime prediction. Building upon the previously developed Rayleigh-Benard nonlinear dynamic solar model, which exhibits many dynamic behaviors observed in the Sun, this work introduces new chaotic solar forecasting techniques. Our attempt to use recently developed nonlinear chaotic techniques to model and forecast solar activity has uncovered highly entangled dynamics. Numerical techniques are presented for decoupling additive and multiplicative white noise from deterministic dynamics, and the falloff of the power spectra at high frequencies is examined as a possible means of distinguishing deterministic chaos from noise, whether spectrally white or colored. The power spectral techniques presented are less cumbersome than current methods for identifying deterministic chaos, which require more computationally intensive calculations, such as those involving Lyapunov exponents and attractor dimension.
Jo, Javier A.; Fang, Qiyin; Marcu, Laura
2007-01-01
We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
A Computational Approach for Probabilistic Analysis of Water Impact Simulations
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.
2009-01-01
NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many similar challenges to those worked in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
Rapid scanning system for fuel drawers
Caldwell, J.T.; Fehlau, P.E.; France, S.W.
A nondestructive method for uniquely distinguishing among and quantifying the mass of individual fuel plates in situ in fuel drawers utilized in nuclear reactors is described. The method is both rapid and passive, eliminating the personnel hazard of the commonly used irradiation techniques which require that the analysis be performed in proximity to an intense neutron source such as a reactor. In the present technique, only normally decaying nuclei are observed. This allows the analysis to be performed anywhere. This feature, combined with rapid scanning of a given fuel drawer (in approximately 30 s), and the computer data analysis allows the processing of large numbers of fuel drawers efficiently in the event of a loss alert.
Rapid scanning system for fuel drawers
Caldwell, John T.; Fehlau, Paul E.; France, Stephen W.
1981-01-01
A nondestructive method for uniquely distinguishing among and quantifying the mass of individual fuel plates in situ in fuel drawers utilized in nuclear reactors is described. The method is both rapid and passive, eliminating the personnel hazard of the commonly used irradiation techniques which require that the analysis be performed in proximity to an intense neutron source such as a reactor. In the present technique, only normally decaying nuclei are observed. This allows the analysis to be performed anywhere. This feature, combined with rapid scanning of a given fuel drawer (in approximately 30 s), and the computer data analysis allows the processing of large numbers of fuel drawers efficiently in the event of a loss alert.
Cyclone Simulation via Action Minimization
NASA Astrophysics Data System (ADS)
Plotkin, D. A.; Weare, J.; Abbot, D. S.
2016-12-01
A postulated impact of climate change is an increase in intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a nearly identical temperature to the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus results in more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for transitions between TC-free and TC states; and 3) an increase in horizontal resolution due to computational savings achieved by reducing time spent simulating TC-free states. This increase in resolution, coupled with a decrease in simulation time, allows for prediction of the change in TC frequency and intensity distributions resulting from climate change.
Ohto, Tatsuhiko; Usui, Kota; Hasegawa, Taisuke; Bonn, Mischa; Nagata, Yuki
2015-09-28
Interfacial water structures have been studied intensively by probing the O-H stretch mode of water molecules using sum-frequency generation (SFG) spectroscopy. This surface-specific technique is finding increasingly widespread use, and accordingly, computational approaches to calculate SFG spectra using molecular dynamics (MD) trajectories of interfacial water molecules have been developed and employed to correlate specific spectral signatures with distinct interfacial water structures. Such simulations typically require relatively long (several nanoseconds) MD trajectories to allow reliable calculation of the SFG response functions through the dipole moment-polarizability time correlation function. These long trajectories limit the use of computationally expensive MD techniques such as ab initio MD and centroid MD simulations. Here, we present an efficient algorithm for determining the SFG response from the surface-specific velocity-velocity correlation function (ssVVCF). This ssVVCF formalism allows us to calculate SFG spectra using an MD trajectory of only ∼100 ps, resulting in a substantial reduction of the computational costs by almost an order of magnitude. We demonstrate that the O-H stretch SFG spectra at the water-air interface calculated by using the ssVVCF formalism well reproduce those calculated by using the dipole moment-polarizability time correlation function. Furthermore, we applied this ssVVCF technique to compute the SFG spectra from ab initio MD trajectories with various density functionals. We report that the SFG responses computed from both ab initio MD simulations and MD simulations with an ab initio based force field model do not show a positive feature in their imaginary components at 3100 cm⁻¹.
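As a rough illustration of the correlation-function machinery involved, the sketch below computes a generic single-component velocity autocorrelation spectrum via the Wiener-Khinchin route; this is not the full surface-specific ssVVCF response function, and all names are illustrative assumptions.

```python
import numpy as np

def velocity_spectrum(v, dt):
    """Power spectrum of a velocity trace via its autocorrelation function.
    v: 1-D array of one velocity component sampled every dt (illustrative only)."""
    n = len(v)
    V = np.fft.rfft(v - v.mean(), n=2 * n)       # zero-padded FFT of the fluctuation
    acf = np.fft.irfft(np.abs(V) ** 2)[:n] / n   # velocity autocorrelation function
    freq = np.fft.rfftfreq(n, d=dt)
    return freq, np.abs(np.fft.rfft(acf))
```

The practical gain reported above comes from the fact that velocity correlations decay much faster than dipole-polarizability correlations, so a ∼100 ps trajectory already yields a converged spectrum.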
DOE Office of Scientific and Technical Information (OSTI.GOV)
Von Korff, J.; Heien, E.; Korpela, E.
We are performing a transient, microsecond timescale radio sky survey, called 'Astropulse', using the Arecibo telescope. Astropulse searches for brief (0.4 μs to 204.8 μs), wideband (relative to its 2.5 MHz bandwidth) radio pulses centered at 1420 MHz. Astropulse is a commensal (piggyback) survey, and scans the sky between declinations of -1.33° and 38.03°. We obtained 1540 hr of data in each of seven beams of the ALFA receiver, with two polarizations per beam. The data are one-bit complex sampled at the Nyquist limit of 0.4 μs per sample. Examination of timescales on the order of microseconds is possible because we used coherent dedispersion, a technique that has frequently been used for targeted observations, but has never been associated with a radio sky survey. The more usual technique, incoherent dedispersion, cannot resolve signals below a minimum timescale which depends on the signal's dispersion measure (DM) and frequency. However, coherent dedispersion requires more intensive computation than incoherent dedispersion. The required processing power was provided by BOINC, the Berkeley Open Infrastructure for Network Computing. BOINC is a distributed computing system, allowing us to utilize hundreds of thousands of volunteers' computers to perform the necessary calculations for coherent dedispersion. Astrophysical events that might produce brief radio pulses include giant pulses from pulsars, rotating radio transients, exploding primordial black holes, or new sources yet to be imagined. Radio frequency interference and noise contaminate the data; these are mitigated by a number of techniques including multi-polarization correlation, DM repetition detection, and frequency profiling.
Optical design of cipher block chaining (CBC) encryption mode by using digital holography
NASA Astrophysics Data System (ADS)
Gil, Sang Keun; Jeon, Seok Hee; Jung, Jong Rae; Kim, Nam
2016-03-01
We propose an optical design of cipher block chaining (CBC) encryption using a digital holographic technique, which offers higher security than the conventional electronic method because of its analog-type randomized cipher text with a 2-D array format. In this paper, an optical design of the CBC encryption mode is implemented by a 2-step quadrature phase-shifting digital holographic encryption technique using orthogonal polarization. A block of plain text is encrypted with the encryption key by applying 2-step phase-shifting digital holography, and it is changed into cipher text blocks which are digital holograms. These ciphered digital holograms with the encrypted information are Fourier transform holograms and are recorded on CCDs with intensities quantized to 256 gray levels. The decryption is computed from these encrypted digital holograms of the cipher text, the same encryption key, and the previous cipher text. Results of computer simulations are presented to verify that the proposed method shows the feasibility of a highly secure CBC encryption system.
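For orientation, the chaining structure the optical design mirrors is standard CBC mode, in which each plaintext block is combined with the previous ciphertext block before encryption. The sketch below uses a placeholder encrypt function and XOR combination; the proposed system instead realizes the combination and encryption steps optically with phase-shifting digital holography.

```python
def cbc_encrypt(blocks, iv, encrypt_block):
    """Generic CBC chaining: c[i] = E(p[i] XOR c[i-1]), with c[-1] = iv.
    encrypt_block is a placeholder for the block cipher (here realized optically)."""
    prev, cipher = iv, []
    for p in blocks:
        c = encrypt_block(bytes(a ^ b for a, b in zip(p, prev)))
        cipher.append(c)
        prev = c
    return cipher
```

The chaining is what makes each cipher-text hologram depend on all preceding blocks, so identical plaintext blocks do not produce identical recorded holograms.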
3D multiscale crack propagation using the XFEM applied to a gas turbine blade
NASA Astrophysics Data System (ADS)
Holl, Matthias; Rogge, Timo; Loehnert, Stefan; Wriggers, Peter; Rolfes, Raimund
2014-01-01
This work presents a new multiscale technique to investigate advancing cracks in three-dimensional space. This fully adaptive multiscale technique is designed to take into account cracks of different length scales efficiently, by enabling fine scale domains locally in regions of interest, i.e. where stress concentrations and high stress gradients occur. Due to crack propagation, these regions change during the simulation process. Cracks are modeled using the extended finite element method, such that an accurate and powerful numerical tool is achieved. Restricting ourselves to linear elastic fracture mechanics, the J-integral yields an accurate solution of the stress intensity factors, and with the criterion of maximum hoop stress, a precise direction of growth. If necessary, the crack surface computed on the finest scale is finally transferred to the corresponding scale. In a final step, the model is applied to a quadrature point of a gas turbine blade, to compute crack growth on the microscale of a real structure.
NASA Astrophysics Data System (ADS)
Bismuth, Vincent; Vancamberg, Laurence; Gorges, Sébastien
2009-02-01
During interventional radiology procedures, guide-wires are usually inserted into the patient's vascular tree for diagnostic or therapeutic purposes. These procedures are monitored with an X-ray interventional system providing images of the interventional devices navigating through the patient's body. The automatic detection of such tools by image processing means has gained maturity over the past years and enables applications ranging from image enhancement to multimodal image fusion. Sophisticated detection methods are emerging, which rely on a variety of device enhancement techniques. In this article we reviewed and classified these techniques into three families. We chose a state-of-the-art approach in each of them and built a rigorous framework to compare their detection capability and their computational complexity. Through simulations and the intensive use of ROC curves we demonstrated that the Hessian-based methods are the most robust to strong curvature of the devices and that the family of rotated-filter techniques is the most suited for detecting low-CNR and low-curvature devices. The steerable filter approach demonstrated less interesting detection capabilities and appears to be the most expensive one to compute. Finally we demonstrated the interest of automatic guide-wire detection on a clinical topic: the compensation of respiratory motion in multimodal image fusion.
Augmentation of Early Intensity Forecasting in Tropical Cyclones
2011-09-30
... modeled storms to the measured signatures. APPROACH: The deviation-angle variance technique was introduced in Pineros et al. (2008) as a procedure to ... the algorithm developed in the first year of the project. The new method used best-track storm fixes as the points to compute the DAV signal. In the North Atlantic basin, the RMSE for the tropical storm category is 11 kt, for hurricane categories 1-3 it is 12.5 kt, for category 4 it is 18 kt, and for category 5 ...
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm with an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
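A minimal sketch of how an L1-norm (least-absolute-deviations) adjustment of the type described above can be computed by iteratively reweighted least squares is shown below; this illustrates the general idea only, not the c5++ implementation, and the damping constant eps is an illustrative choice.

```python
import numpy as np

def l1_fit(A, y, iters=50, eps=1e-6):
    """Minimize ||y - A x||_1 via iteratively reweighted least squares (IRLS)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]               # ordinary L2 starting point
    for _ in range(iters):
        r = y - A @ x
        sw = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))     # sqrt of robust weights 1/|r|
        x = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
    return x
```

Because the weights shrink the influence of large residuals, observations with unresolved integer ambiguities act more like outliers than in an L2 adjustment, which is the behaviour the comparison above exploits.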
Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.
Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang
2017-03-01
Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as the half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.
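A straightforward software reference for the semivariance defined above might look as follows; for simplicity it restricts lags to the two axis directions, which is an illustrative simplification rather than the authors' FPGA design.

```python
import numpy as np

def semivariogram(img, max_lag):
    """Empirical semivariance gamma(h) = 0.5 * E[(z(x) - z(x+h))^2] for integer lags."""
    img = np.asarray(img, dtype=float)
    gamma = np.zeros(max_lag)
    for h in range(1, max_lag + 1):
        dx = img[:, h:] - img[:, :-h]      # horizontal pixel pairs at lag h
        dy = img[h:, :] - img[:-h, :]      # vertical pixel pairs at lag h
        sq = np.concatenate([dx.ravel() ** 2, dy.ravel() ** 2])
        gamma[h - 1] = 0.5 * sq.mean()
    return gamma
```

The nested pair comparisons behind each lag are what give the O(n²) base complexity quoted above, and they are also what the FPGA architectures parallelize by processing many pixel pairs concurrently.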
Semivariogram Analysis of Bone Images Implemented on FPGA Architectures
Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang
2016-01-01
Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as the half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor. PMID:28428829
Differential modal Zernike wavefront sensor employing a computer-generated hologram: a proposal.
Mishra, Sanjay K; Bhatt, Rahul; Mohan, Devendra; Gupta, Arun Kumar; Sharma, Anurag
2009-11-20
The process of Zernike mode detection with a Shack-Hartmann wavefront sensor is computationally intensive. A holographic modal wavefront sensor has therefore evolved to process the data optically by use of the concept of equal and opposite phase bias. Recently, a multiplexed computer-generated hologram (CGH) technique was developed in which the output is in the form of bright dots that specify the presence and strength of a specific Zernike mode. We propose a wavefront sensor using the concept of phase biasing in the latter technique such that the output is a pair of bright dots for each mode to be sensed. A normalized difference signal between the intensities of the two dots is proportional to the amplitude of the sensed Zernike mode. In our method the number of holograms to be multiplexed is decreased, thereby reducing the modal cross talk significantly. We validated the proposed method through simulation studies for several cases. The simulation results demonstrate simultaneous wavefront detection of lower-order Zernike modes with a resolution better than λ/50 for the wide measurement range of ±3.5λ, with much reduced cross talk, at high speed.
Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation
NASA Astrophysics Data System (ADS)
Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia
2013-08-01
Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
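A minimal dense-matrix sketch of the Soft-Impute iteration described above; the variable names and stopping rule are illustrative, and the paper's implementation avoids dense SVDs by exploiting problem structure.

    import numpy as np

    def soft_impute(X, lam, n_iters=100, tol=1e-5):
        """Soft-Impute sketch: fill missing entries, take an SVD, soft-threshold
        the singular values (nuclear-norm shrinkage), refill, and repeat."""
        observed = ~np.isnan(X)
        Z = np.where(observed, X, 0.0)              # initial fill of missing entries
        for _ in range(n_iters):
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            s_thresh = np.maximum(s - lam, 0.0)     # soft thresholding
            Z_new = (U * s_thresh) @ Vt
            Z_new = np.where(observed, X, Z_new)    # keep observed entries fixed
            if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1e-12):
                return Z_new
            Z = Z_new
        return Z

    # Example: complete a low-rank 50x40 matrix with ~40% missing entries
    rng = np.random.default_rng(1)
    M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
    X = M.copy()
    X[rng.random(M.shape) < 0.4] = np.nan
    print(np.linalg.norm(soft_impute(X, lam=1.0) - M) / np.linalg.norm(M))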
Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.
2003-01-01
Applying binaural simulation techniques to structural acoustic data can be very computationally intensive because the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown that there is a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
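The SVD idea in (i) can be sketched in the frequency domain: factor the HRTF matrix (frequency × source direction), mix the source spectra with scalar singular-vector weights, and then apply only a handful of dominant basis filters. The example below is a hypothetical NumPy illustration with a synthetic, approximately low-rank HRTF matrix, not the authors' real-time implementation.

    import numpy as np

    rng = np.random.default_rng(5)
    n_freq, n_sources, rank = 256, 200, 8

    # Hypothetical frequency-domain HRTF matrix (one column per source direction),
    # built to be approximately low rank for the purpose of the illustration.
    A = rng.standard_normal((n_freq, rank)) + 1j * rng.standard_normal((n_freq, rank))
    B = rng.standard_normal((rank, n_sources))
    H = A @ B + 0.01 * rng.standard_normal((n_freq, n_sources))
    X = rng.standard_normal((n_freq, n_sources))        # source spectra

    # Exact rendering: one HRTF filter per source (expensive).
    y_exact = (H * X).sum(axis=1)

    # SVD reduction: mix sources with scalar weights, then apply only `rank` filters.
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    mixed = X @ Vh[:rank, :].T                          # cheap per-source mixing
    y_reduced = (U[:, :rank] * s[:rank] * mixed).sum(axis=1)

    print(np.linalg.norm(y_exact - y_reduced) / np.linalg.norm(y_exact))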
Non-local means denoising of dynamic PET images.
Dutta, Joyita; Leahy, Richard M; Li, Quanzheng
2013-01-01
Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high intensity details while lowering the background noise variance.
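The patch-based weighting at the core of NLM can be sketched as follows; this toy version computes similarities on the (less noisy) last frame and reuses the weights for every frame, loosely mirroring the first modification described above. The parameter values and the simple search window are illustrative assumptions, not the authors' settings.

    import numpy as np

    def nlm_dynamic(frames, patch=1, search=3, h=0.1):
        """Toy non-local means for a dynamic series of frames with shape (T, H, W):
        patch similarities are computed on the (less noisy) last frame and the
        resulting weights are reused to average every frame in the series."""
        T, H, W = frames.shape
        ref = np.pad(frames[-1], patch, mode="reflect")
        out = np.zeros_like(frames, dtype=float)
        for y in range(H):
            for x in range(W):
                p0 = ref[y:y + 2 * patch + 1, x:x + 2 * patch + 1]
                num, den = np.zeros(T), 0.0
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy = int(np.clip(y + dy, 0, H - 1))
                        xx = int(np.clip(x + dx, 0, W - 1))
                        p1 = ref[yy:yy + 2 * patch + 1, xx:xx + 2 * patch + 1]
                        w = np.exp(-np.sum((p0 - p1) ** 2) / h ** 2)
                        num += w * frames[:, yy, xx]
                        den += w
                out[:, y, x] = num / den
        return out

    # Example: denoise a tiny synthetic 4-frame series
    rng = np.random.default_rng(2)
    clean = np.tile(np.linspace(0.0, 1.0, 16), (4, 16, 1))
    noisy = clean + 0.05 * rng.standard_normal(clean.shape)
    print(nlm_dynamic(noisy).shape)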
Nonlinear multilayers as optical limiters
NASA Astrophysics Data System (ADS)
Turner-Valle, Jennifer Anne
1998-10-01
In this work we present a non-iterative technique for computing the steady-state optical properties of nonlinear multilayers and we examine nonlinear multilayer designs for optical limiters. Optical limiters are filters with intensity-dependent transmission designed to curtail the transmission of incident light above a threshold irradiance value in order to protect optical sensors from damage due to intense light. Thin film multilayers composed of nonlinear materials exhibiting an intensity-dependent refractive index are used as the basis for optical limiter designs in order to enhance the nonlinear filter response by magnifying the electric field in the nonlinear materials through interference effects. The nonlinear multilayer designs considered in this work are based on linear optical interference filter designs which are selected for their spectral properties and electric field distributions. Quarter wave stacks and cavity filters are examined for their suitability as sensor protectors and their manufacturability. The underlying non-iterative technique used to calculate the optical response of these filters derives from recognizing that the multi-valued calculation of output irradiance as a function of incident irradiance may be turned into a single-valued calculation of incident irradiance as a function of output irradiance. Finally, the benefits and drawbacks of using nonlinear multilayers for optical limiting are examined and future research directions are proposed.
Study of photon correlation techniques for processing of laser velocimeter signals
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1977-01-01
The objective was to provide the theory and a system design for a new type of photon counting processor for low level dual scatter laser velocimeter (LV) signals which would be capable of both the first order measurements of mean flow and turbulence intensity and also the second order time statistics: cross correlation, auto correlation, and related spectra. A general Poisson process model for low level LV signals and noise which is valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise was used. Computer simulation algorithms and higher order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate and subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.
Gain in computational efficiency by vectorization in the dynamic simulation of multi-body systems
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Shareef, N. H.
1991-01-01
An improved technique for the identification and extraction of the exact quantities associated with the degrees of freedom at the element as well as the flexible body level is presented. It is implemented in the dynamic equations of motion based on the recursive formulation of Kane et al. (1987) and presented in a matrix form, integrating the concepts of strain energy, the finite-element approach, modal analysis, and reduction of equations. This technique eliminates the CPU-intensive matrix multiplication operations in the code's hot spots for the dynamic simulation of the interconnected rigid and flexible bodies. A study of a simple robot with flexible links is presented by comparing the execution times on a scalar machine and a vector-processor with and without vector options. Performance figures demonstrating the substantial gains achieved by the technique are plotted.
NASA Astrophysics Data System (ADS)
Gardezi, A.; Umer, T.; Butt, F.; Young, R. C. D.; Chatwin, C. R.
2016-04-01
A spatial domain optimal trade-off Maximum Average Correlation Height (SPOT-MACH) filter has been previously developed and shown to have advantages over frequency domain implementations in that it can be made locally adaptive to spatial variations in the input image background clutter and normalised for local intensity changes. The main concern in using the SPOT-MACH is its computationally intensive nature. However, enhancement techniques have previously been proposed for the SPOT-MACH to make its execution time comparable to that of its frequency domain counterpart. In this paper a novel approach is discussed which uses VANET parameters coupled with the SPOT-MACH in order to minimise the extensive processing of the large video dataset acquired from the Pakistan motorways surveillance system. The use of VANET parameters gives us an estimation criterion for the flow of traffic on the Pakistan motorway network and acts as a precursor to the training algorithm. The use of VANET in this scenario contributes substantially to minimising the computational complexity of the proposed monitoring system.
A causal relation between bioluminescence and oxygen to quantify the cell niche.
Lambrechts, Dennis; Roeffaers, Maarten; Goossens, Karel; Hofkens, Johan; Vande Velde, Greetje; Van de Putte, Tom; Schrooten, Jan; Van Oosterwyck, Hans
2014-01-01
Bioluminescence imaging assays have become a widely integrated technique to quantify the effectiveness of cell-based therapies by monitoring the fate and survival of transplanted cells. To date these assays are still largely qualitative and often erroneous due to the complexity and dynamics of local micro-environments (niches) in which the cells reside. Here, we report, using a combined experimental and computational approach, that oxygen, besides being a critical niche component responsible for cellular energy metabolism and cell-fate commitment, also serves a primary role in regulating bioluminescent light kinetics. We demonstrate the potential of an oxygen-dependent Michaelis-Menten relation in quantifying intrinsic bioluminescence intensities by resolving cell-associated oxygen gradients from bioluminescent light that is emitted from three-dimensional (3D) cell-seeded hydrogels. Furthermore, the experimental and computational data indicate a strong causal relation of oxygen concentration with emitted bioluminescence intensities. Altogether our approach demonstrates the importance of oxygen in evolving towards quantitative bioluminescence and holds great potential for future microscale measurement of oxygen tension in an easily accessible manner. PMID:24840204
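The oxygen dependence described above can be illustrated with a Michaelis-Menten-type relation; the sketch below uses placeholder parameter values, not the authors' fitted constants.

    import numpy as np

    def bioluminescence_intensity(o2, i_max=1.0, km=5.0):
        """Michaelis-Menten-type dependence of emitted light on oxygen tension.
        i_max and km are placeholder values, not fitted constants."""
        o2 = np.asarray(o2, dtype=float)
        return i_max * o2 / (km + o2)

    # Example: intensity along a 1-D oxygen gradient into a cell-seeded hydrogel
    depth_o2 = np.linspace(20.0, 0.5, 10)    # oxygen falling with depth
    print(bioluminescence_intensity(depth_o2))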
Usefulness of image morphing techniques in cancer treatment by conformal radiotherapy
NASA Astrophysics Data System (ADS)
Atoui, Hussein; Sarrut, David; Miguet, Serge
2004-05-01
Conformal radiotherapy is a cancer treatment technique that targets high-energy X-rays to tumors with minimal exposure to surrounding healthy tissues. Irradiation ballistics is calculated based on an initial 3D Computerized Tomography (CT) scan. At every treatment session, the random positioning of the patient, compared to the reference position defined by the initial 3D CT scan, can generate treatment inaccuracies. Positioning errors potentially predispose to dangerous exposure of healthy tissues as well as insufficient irradiation of the tumor. A proposed solution would be the use of portal images generated by Electronic Portal Imaging Devices (EPID). Portal images (PI) allow a comparison with reference images retained by physicians, namely Digitally Reconstructed Radiographs (DRRs). At present, physicians must estimate patient positional errors by visual inspection. However, this may be inaccurate and time-consuming. The automation of this task has been the subject of much research. Unfortunately, the intensive use of DRRs and the high computing time required have prevented real-time implementation. We are currently investigating a new method for DRR generation that calculates intermediate DRRs by 2D deformation of previously computed DRRs. We approach this investigation with the use of a morphing-based technique named mesh warping.
Measurement of the absorption coefficient using the sound-intensity technique
NASA Technical Reports Server (NTRS)
Atwal, M.; Bernhard, R.
1984-01-01
The possibility of using the sound intensity technique to measure the absorption coefficient of a material is investigated. This technique measures the absorption coefficient by measuring the intensity incident on the sample and the net intensity reflected by the sample. Results obtained by this technique are compared with the standard techniques of measuring the change in reverberation time and the standing wave ratio in a tube, thereby calculating the random-incidence and normal-incidence absorption coefficients.
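In intensity terms, the measurement reduces to one minus the ratio of net reflected to incident intensity; the snippet below is a simplified illustration of that relation, not the authors' measurement procedure.

    def absorption_coefficient(i_incident, i_reflected):
        """Absorption coefficient from incident and net reflected sound intensity:
        the fraction of the incident intensity not returned by the sample."""
        return 1.0 - i_reflected / i_incident

    # Example: 0.35 of the incident intensity is reflected back
    print(absorption_coefficient(1.0, 0.35))   # -> 0.65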
NASA Astrophysics Data System (ADS)
Zhang, Chao; Yao, Huajian; Liu, Qinya; Zhang, Ping; Yuan, Yanhua O.; Feng, Jikun; Fang, Lihua
2018-01-01
We present a 2-D ambient noise adjoint tomography technique for a linear array with a significant reduction in computational cost and show its application to an array in North China. We first convert the observed data for 3-D media, i.e., surface-wave empirical Green's functions (EGFs), to the reconstructed EGFs (REGFs) for 2-D media using a 3-D/2-D transformation scheme. Different from the conventional steps of measuring phase dispersion, this technique refines 2-D shear wave speeds along the profile directly from REGFs. With an initial model based on traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime delays between the REGFs and synthetic Green's functions calculated by the spectral-element method. The multitaper traveltime difference measurement is applied in four period bands: 20-35 s, 15-30 s, 10-20 s, and 6-15 s. The recovered model shows detailed crustal structures including pronounced low-velocity anomalies in the lower crust and a gradual crust-mantle transition zone beneath the northern Trans-North China Orogen, which suggest possible intense thermo-chemical interactions between mantle-derived upwelling melts and the lower crust, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of this region. To our knowledge, this is the first time that ambient noise adjoint tomography has been implemented for a 2-D medium. Compared with the intensive computational cost and storage requirement of 3-D adjoint tomography, this method offers a computationally efficient and inexpensive alternative for imaging fine-scale crustal structures beneath linear arrays.
NASA Astrophysics Data System (ADS)
Uijlenhoet, R.; Overeem, A.; Leijnse, H.; Rios Gaona, M. F.
2017-12-01
The basic principle of rainfall estimation using microwave links is as follows. Rainfall attenuates the electromagnetic signals transmitted from one telephone tower to another. By measuring the received power at one end of a microwave link as a function of time, the path-integrated attenuation due to rainfall can be calculated, which can be converted to average rainfall intensities over the length of the link. Microwave links from cellular communication networks have been proposed as a promising new rainfall measurement technique for a decade. They are particularly interesting for those countries where few surface rainfall observations are available. Yet to date no operational (real-time) link-based rainfall products are available. To advance the process towards operational application and upscaling of this technique, there is a need for freely available, user-friendly computer code for microwave link data processing and rainfall mapping. Such software is now available as the R package "RAINLINK" on GitHub (https://github.com/overeem11/RAINLINK). It contains a working example to compute link-based 15-min rainfall maps for the entire surface area of The Netherlands for 40 hours from real microwave link data. This is a working example using actual data from an extensive network of commercial microwave links, for the first time, which will allow users to test their own algorithms and compare their results with ours. The package consists of modular functions, which facilitates running only part of the algorithm. The main processing steps are: 1) Preprocessing of link data (initial quality and consistency checks); 2) Wet-dry classification using link data; 3) Reference signal determination; 4) Removal of outliers; 5) Correction of received signal powers; 6) Computation of mean path-averaged rainfall intensities; 7) Interpolation of rainfall intensities; 8) Rainfall map visualisation. Some applications of RAINLINK will be shown based on microwave link data from a temperate climate (the Netherlands) and from a subtropical climate (Brazil). We hope that RAINLINK will promote the application of rainfall monitoring using microwave links in poorly gauged regions around the world. We invite researchers to contribute to RAINLINK to make the code more generally applicable to data from different networks and climates.
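Step 6, converting path-integrated attenuation to rainfall, is commonly based on a power law k = aR^b between specific attenuation and rain rate; the sketch below illustrates the idea with placeholder coefficients rather than RAINLINK's calibrated, frequency-dependent values.

    def rain_rate_from_attenuation(path_attenuation_db, link_length_km, a=0.33, b=1.10):
        """Mean path-averaged rain rate (mm/h) from path-integrated attenuation (dB),
        assuming the power law k = a * R**b between specific attenuation k (dB/km)
        and rain rate R (mm/h); a and b are placeholder, frequency-dependent values."""
        k = path_attenuation_db / link_length_km     # specific attenuation, dB/km
        return (k / a) ** (1.0 / b)

    # Example: 12 dB of rain-induced attenuation over a 4 km link
    print(rain_rate_from_attenuation(12.0, 4.0))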
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Damien C., E-mail: damien.weber@unige.ch; Zilli, Thomas; Vallee, Jean Paul
2012-11-01
Purpose: Rectal toxicity is a serious adverse effect in early-stage prostate cancer patients treated with curative radiation therapy (RT). Injecting a spacer between Denonvilliers' fascia increases the distance between the prostate and the anterior rectal wall and may thus decrease the rectal radiation-induced toxicity. We assessed the dosimetric impact of this spacer with advanced delivery RT techniques, including intensity modulated RT (IMRT), volumetric modulated arc therapy (VMAT), and intensity modulated proton beam RT (IMPT). Methods and Materials: Eight prostate cancer patients were simulated for RT with or without spacer. Plans were computed for IMRT, VMAT, and IMPT using the Eclipse treatment planning system using both computed tomography spacer+ and spacer- data sets. Prostate ± seminal vesicle planning target volume [PTV] and organs at risk (OARs) dose-volume histograms were calculated. The results were analyzed using dose and volume metrics for comparative planning. Results: Regardless of the radiation technique, spacer injection decreased significantly the rectal dose in the 60- to 70-Gy range. Mean V70Gy and V60Gy with IMRT, VMAT, and IMPT planning were 5.3 ± 3.3%/13.9 ± 10.0%, 3.9 ± 3.2%/9.7 ± 5.7%, and 5.0 ± 3.5%/9.5 ± 4.7% after spacer injection. Before spacer administration, the corresponding values were 9.8 ± 5.4% (P=.012)/24.8 ± 7.8% (P=.012), 10.1 ± 3.0% (P=.002)/17.9 ± 3.9% (P=.003), and 9.7 ± 2.6% (P=.003)/14.7% ± 2.7% (P=.003). Importantly, spacer injection usually improved the PTV coverage for IMRT. With this technique, mean V70.2Gy (P=.07) and V74.1Gy (P=0.03) were 100 ± 0% to 99.8 ± 0.2% and 99.1 ± 1.2% to 95.8 ± 4.6% with and without spacer, respectively. As a result of spacer injection, bladder doses were usually higher but not significantly so. Only IMPT managed to decrease the rectal dose after spacer injection for all dose levels, generally with no observed increase to the bladder dose. Conclusions: Regardless of the radiation technique, a substantial decrease of rectal dose was observed after spacer injection for curative RT to the prostate.
Efficient variable time-stepping scheme for intense field-atom interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, C.; Kosloff, R.
1993-03-01
The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix
NASA Astrophysics Data System (ADS)
Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia
2011-03-01
During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculations of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer-intensive than the finite element approach.
Building a Data Science capability for USGS water research and communication
NASA Astrophysics Data System (ADS)
Appling, A.; Read, E. K.
2015-12-01
Interpreting and communicating water issues in an era of exponentially increasing information requires a blend of domain expertise, computational proficiency, and communication skills. The USGS Office of Water Information has established a Data Science team to meet these needs, providing challenging careers for diverse domain scientists and innovators in the fields of information technology and data visualization. Here, we detail the experience of building a Data Science capability as a bridging element between traditional water resources analyses and modern computing tools and data management techniques. This approach includes four major components: 1) building reusable research tools, 2) documenting data-intensive research approaches in peer reviewed journals, 3) communicating complex water resources issues with interactive web visualizations, and 4) offering training programs for our peers in scientific computing. These components collectively improve the efficiency, transparency, and reproducibility of USGS data analyses and scientific workflows.
Domain Decomposition By the Advancing-Partition Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1994-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the Special Sensor Microwave/Imager (SSM/I) for the period from July 1987 through December 1991. The monthly estimates were calibrated using measurements from a network of Pacific atoll rain gauges and compared to other satellite-based rainfall estimation techniques. Based on these monthly estimates, an analysis of the variability of large-scale features over intraseasonal to interannual timescales has been performed. While the major precipitation features as well as the seasonal variability distributions show good agreement with expected values, the presence of a moderately intense El Niño during 1986-87 and an intense La Niña during 1988-89 highlights this time period.
The Integrated Radiation Mapper Assistant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlton, R.E.; Tripp, L.R.
1995-03-01
The Integrated Radiation Mapper Assistant (IRMA) system combines state-of-the-art radiation sensors and microprocessor-based analysis techniques to perform radiation surveys. Control of the survey function is from a control station located outside the radiation area, thus reducing time spent in radiation areas performing radiation surveys. The system consists of a directional radiation sensor, a laser range finder, two area radiation sensors, and a video camera mounted on a pan and tilt platform. This sensor package is deployable on a remotely operated vehicle. The outputs of the system are radiation intensity maps identifying both radiation source intensities and radiation levels throughout the room being surveyed. After completion of the survey, the data can be removed from the control station computer for further analysis or archiving.
Learning moment-based fast local binary descriptor
NASA Astrophysics Data System (ADS)
Bellarbi, Abdelkader; Zenati, Nadia; Otmane, Samir; Belghit, Hayet
2017-03-01
Recently, binary descriptors have attracted significant attention due to their speed and low memory consumption; however, using intensity differences to calculate the binary descriptive vector is not efficient enough. We propose an approach to binary description called POLAR_MOBIL, in which we perform binary tests between geometrical and statistical information using moments in the patch instead of the classical intensity binary test. In addition, we introduce a learning technique used to select an optimized set of binary tests with low correlation and high variance. This approach offers high distinctiveness against affine transformations and appearance changes. An extensive evaluation on well-known benchmark datasets reveals the robustness and the effectiveness of the proposed descriptor, as well as its good performance in terms of low computation complexity when compared with state-of-the-art real-time local descriptors.
Enabling the High Level Synthesis of Data Analytics Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino
Conventional High Level Synthesis (HLS) tools mainly target compute intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.
Method and apparatus for millimeter-wave detection of thermal waves for materials evaluation
Gopalsami, Nachappa; Raptis, Apostolos C.
1991-01-01
A method and apparatus for generating thermal waves in a sample and for measuring thermal inhomogeneities at subsurface levels using millimeter-wave radiometry. An intensity modulated heating source is oriented toward a narrow spot on the surface of a material sample and thermal radiation in a narrow volume of material around the spot is monitored using a millimeter-wave radiometer; the radiometer scans the sample point-by-point and a computer stores and displays in-phase and quadrature phase components of the thermal radiation for each point on the scan. Alternatively, an intensity modulated heating source is oriented toward a relatively large surface area in a material sample and variations in thermal radiation within the full field of an antenna array are obtained using an aperture synthesis radiometer technique.
Room airflow studies using sonic anemometry.
Wasiolek, P T; Whicker, J J; Gong, H; Rodgers, J C
1999-06-01
To ensure prompt response by real-time air monitors to an accidental release of toxic aerosols in a workplace, safety professionals should understand airflow patterns. This understanding can be achieved with validated computational fluid dynamics (CFD) computer simulations, or with experimental techniques, such as measurements with smoke, neutrally buoyant markers, trace gases, or trace aerosol particles. As a supplementary technique to quantify airflows, the use of a state-of-the-art, three-dimensional sonic anemometer was explored. This instrument allows for the precise measurements of the air-velocity vector components in the range of a few centimeters per second, which is common in many indoor work environments. Measurements of air velocities and directions at selected locations were made for the purpose of providing data for characterizing fundamental aspects of indoor air movement in two ventilated rooms and for comparison to CFD model predictions. One room was a mockup of a plutonium workroom, and the other was an actual functioning plutonium workroom. In the mockup room, air-velocity vector components were measured at 19 locations at three heights (60, 120 and 180 cm) with average velocities varying from 1.4 cm s⁻¹ to 9.7 cm s⁻¹. There were complex flow patterns observed with turbulence intensities from 39% up to 108%. In the plutonium workroom, measurements were made at the breathing-zone height, recording average velocities ranging from 9.9 cm s⁻¹ to 35.5 cm s⁻¹ with turbulence intensities from 33% to 108%.
Simulation training tools for nonlethal weapons using gaming environments
NASA Astrophysics Data System (ADS)
Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry
2006-05-01
Modern simulation techniques have a growing role for evaluating new technologies and for developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstration of concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real time. Computer games now allow for multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherent intensive calculations required for complex simulation scenarios. The main components of the leading game engines have been released for user modifications, enabling game enthusiasts and amateur programmers to advance the state-of-the-art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs as end users: learn what a device does and doesn't do prior to use, understand responses to the device prior to deployment, determine if the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life game engine, for the purpose of evaluating the latest non-lethal weapon devices, and for developing training tools for such devices.
ExoMol molecular line lists - XXVII: spectra of C2H4
NASA Astrophysics Data System (ADS)
Mant, Barry P.; Yachmenev, Andrey; Tennyson, Jonathan; Yurchenko, Sergei N.
2018-05-01
A new line list for ethylene, ¹²C₂¹H₄, is presented. The line list is based on high level ab initio potential energy and dipole moment surfaces. The potential energy surface is refined by fitting to experimental energies. The line list covers the range up to 7000 cm⁻¹ (1.43 μm) with all ro-vibrational transitions (50 billion) with the lower state below 5000 cm⁻¹ included, and thus should be applicable for temperatures up to 700 K. A technique for computing molecular opacities from vibrational band intensities is proposed and used to provide temperature dependent cross sections of ethylene for shorter wavelengths and higher temperatures. When combined with realistic band profiles (such as the proposed three-band model), the vibrational intensity technique offers a cheap but reasonably accurate alternative to full ro-vibrational calculations at high temperatures and should be reliable for representing molecular opacities. The C2H4 line list, which is called MaYTY, is made available in electronic form from the CDS (http://cdsarc.u-strasbg.fr) and ExoMol (www.exomol.com) databases.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Real-time multiple human perception with color-depth cameras on a mobile robot.
Zhang, Hao; Reardon, Christopher; Parker, Lynne E
2013-10-01
The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers since, to achieve preferred computational speeds, the list should be sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% ( σ=3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
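The benefit of storing only non-zero co-occurring probabilities can be illustrated with a hash-table (dictionary) accumulator; the sketch below is a simplified Python analogue of the idea, not the GLCHS C implementation described in the paper.

    import numpy as np
    from collections import defaultdict

    def cooccurrence_sparse(window, dr=0, dc=1):
        """Accumulate only non-zero co-occurrence probabilities in a hash table
        (dictionary) instead of a full levels-by-levels GLCM, for one displacement."""
        counts = defaultdict(int)
        rows, cols = window.shape
        total = 0
        for r in range(rows - dr):
            for c in range(cols - dc):
                pair = (int(window[r, c]), int(window[r + dr, c + dc]))
                counts[pair] += 1
                total += 1
        return {pair: n / total for pair, n in counts.items()}

    def contrast(probs):
        """Example texture statistic evaluated over non-zero entries only."""
        return sum(p * (i - j) ** 2 for (i, j), p in probs.items())

    # Example on a 32x32 patch quantized to 16 grey levels
    patch = np.random.default_rng(3).integers(0, 256, (32, 32)) // 16
    print(contrast(cooccurrence_sparse(patch)))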
Grayscale lithography-automated mask generation for complex three-dimensional topography
NASA Astrophysics Data System (ADS)
Loomis, James; Ratnayake, Dilan; McKenna, Curtis; Walsh, Kevin M.
2016-01-01
Grayscale lithography is a relatively underutilized technique that enables fabrication of three-dimensional (3-D) microstructures in photosensitive polymers (photoresists). By spatially modulating ultraviolet (UV) dosage during the writing process, one can vary the depth at which photoresist is developed. This means complex structures and bioinspired designs can readily be produced that would otherwise be cost prohibitive or too time intensive to fabricate. The main barrier to widespread grayscale implementation, however, stems from the laborious generation of mask files required to create complex surface topography. We present a process and associated software utility for automatically generating grayscale mask files from 3-D models created within industry-standard computer-aided design (CAD) suites. By shifting the microelectromechanical systems (MEMS) design onus to commonly used CAD programs ideal for complex surfacing, engineering professionals already familiar with traditional 3-D CAD software can readily utilize their pre-existing skills to make valuable contributions to the MEMS community. Our conversion process is demonstrated by prototyping several samples on a laser pattern generator, capital equipment already in use in many foundries. Finally, an empirical calibration technique is shown that compensates for nonlinear relationships between UV exposure intensity and photoresist development depth, as well as a thermal reflow technique to help smooth microstructure surfaces.
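The empirical dose-to-depth calibration mentioned above can be sketched as a simple curve fit; the measurement points and cubic model below are made-up placeholders, since the actual nonlinearity depends on the resist, developer, and tool.

    import numpy as np

    # Hypothetical calibration measurements: grayscale exposure level (dose proxy)
    # versus measured develop depth in micrometres on a test pattern.
    dose = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    depth = np.array([0.2, 0.7, 1.5, 2.6, 3.9, 5.3, 6.6, 7.5])

    # Fit a cubic polynomial depth(dose) to capture the nonlinearity, then invert it
    # numerically to find the exposure level needed for a target depth.
    coeffs = np.polyfit(dose, depth, deg=3)

    def dose_for_depth(target_um):
        grid = np.linspace(dose.min(), dose.max(), 1000)
        return grid[np.argmin(np.abs(np.polyval(coeffs, grid) - target_um))]

    print(dose_for_depth(3.0))   # exposure level that develops to ~3.0 um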
Sound Diffraction Modeling of Rotorcraft Noise Around Terrain
NASA Technical Reports Server (NTRS)
Stephenson, James H.; Sim, Ben W.; Chitta, Subhashini; Steinhoff, John
2017-01-01
A new computational technique, Wave Confinement (WC), is extended here to account for sound diffraction around arbitrary terrain. While diffraction around elementary scattering objects, such as a knife edge, single slit, disc, sphere, etc. has been studied for several decades, realistic environments still pose significant problems. This new technique is first validated against Sommerfeld's classical problem of diffraction due to a knife edge. This is followed by comparisons with diffraction over three-dimensional smooth obstacles, such as a disc and Gaussian hill. Finally, comparisons with flight test acoustics data measured behind a hill are also shown. Comparison between experiment and Wave Confinement prediction demonstrates that a Poisson spot occurred behind the isolated hill, resulting in significantly increased sound intensity near the center of the shadowed region.
Automatic video segmentation and indexing
NASA Astrophysics Data System (ADS)
Chahir, Youssef; Chen, Liming
1999-08-01
Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process. However, effective management of digital video requires robust indexing techniques. The main purpose of our proposed video segmentation is twofold. Firstly, we develop an algorithm that identifies camera shot boundaries. The approach is based on the use of a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame which specifies the shot similarities and which is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute are presented to demonstrate the effectiveness of performing shot detection, the content characterization of shots and the scene constitution.
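The shot-boundary step can be sketched with a plain histogram-difference test between consecutive frames; the example below is illustrative only, whereas the paper combines color histograms with a block-based comparison.

    import numpy as np

    def shot_boundaries(frames, bins=16, threshold=0.4):
        """Flag a shot boundary when the normalized grey-level histograms of two
        consecutive frames differ by more than `threshold` (half L1 distance)."""
        cuts, prev_hist = [], None
        for t, frame in enumerate(frames):
            hist = np.histogram(frame, bins=bins, range=(0, 256))[0].astype(float)
            hist /= hist.sum()
            if prev_hist is not None and 0.5 * np.abs(hist - prev_hist).sum() > threshold:
                cuts.append(t)               # large change -> a new shot starts here
            prev_hist = hist
        return cuts

    # Example: two synthetic "shots" with different mean intensities
    rng = np.random.default_rng(4)
    shot1 = rng.normal(60, 10, (10, 32, 32))
    shot2 = rng.normal(180, 10, (10, 32, 32))
    print(shot_boundaries(np.concatenate([shot1, shot2])))   # expect [10]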
Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1998-10-01
The problem of three-dimensional visualization of a human lens in vivo has been solved by a technique of volume rendering a transformed series of 60 rotated Scheimpflug (a dual slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3 degree increments. The transformed set of optical sections were first aligned to correct for small eye movements, and then rendered into a volume reconstruction with volume rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living, human lens the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2014-04-01
We developed a code for imaging the surfaces of spotted stars by a set of circular spots with a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passing of spots behind the visible stellar limb, limb darkening, and overlapping of spots. Modeling of light curves includes the use of recent results of the theory of stellar atmospheres needed to take into account the temperature dependence of flux intensity and limb darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.
ERIC Educational Resources Information Center
Buche, Mari W.; Davis, Larry R.; Vician, Chelley
2007-01-01
Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…
NASA Astrophysics Data System (ADS)
De Siena, Luca; Sketsiou, Panayiota
2017-04-01
We plan the application of a joint velocity, attenuation, and scattering tomography to the North Sea basins. By using seismic phases and intensities from previous passive and active surveys, our aim is to image and monitor fluids under the subsurface. Seismic intensities provide unique solutions to the problem of locating/tracking gas/fluid movements in volcanoes and depicting sub-basalt and sub-intrusive structures in volcanic reservoirs. The proposed techniques have been tested on volcanic islands (Deception Island), continental calderas (Campi Flegrei), and Quaternary volcanoes (Mount St. Helens) and have proved effective at monitoring fracture opening, imaging buried fluid-filled bodies, and tracking water/gas interfaces. These novel seismic attributes are modelled in space and time and connected with the lithology of the sampled medium, specifically density and permeability, with a novel computational code with strong commercial potential as a key output. Data are readily available in the framework of the NERC CDT Oil & Gas project.
Mass preserving registration for lung CT
NASA Astrophysics Data System (ADS)
Gorbunova, Vladlena; Lo, Pechin; Loeve, Martine; Tiddens, Harm A.; Sporring, Jon; Nielsen, Mads; de Bruijne, Marleen
2009-02-01
In this paper, we evaluate a novel image registration method on a set of expiratory-inspiratory pairs of computed tomography (CT) lung scans. A free-form multi-resolution image registration technique is used to match two scans of the same subject. To account for the differences in the lung intensities due to differences in inspiration level, we propose to adjust the intensity of lung tissue according to the local expansion or compression. An image registration method without intensity adjustment is compared to the proposed method. Both approaches are evaluated on a set of 10 pairs of expiration and inspiration CT scans of children with cystic fibrosis lung disease. The proposed method with mass preserving adjustment results in significantly better alignment of the vessel trees. Analysis of local volume change for regions with trapped air compared to normally ventilated regions revealed larger differences between these regions in the case of mass preserving image registration, indicating that mass preserving registration is better at capturing localized differences in lung deformation.
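The proposed intensity adjustment can be illustrated with the local Jacobian determinant of the deformation: attenuation relative to air is divided by the local expansion factor so that tissue "mass" is preserved. The formulation below is a hypothetical sketch, not necessarily the authors' exact expression.

    import numpy as np

    def mass_preserving_adjust(hu, jacobian):
        """Scale lung CT intensity by local volume change: attenuation relative to
        air (-1000 HU) is divided by the local expansion factor J (the Jacobian
        determinant of the deformation). Hypothetical formulation for illustration."""
        return -1000.0 + (np.asarray(hu, dtype=float) + 1000.0) / jacobian

    # Example: a voxel at -700 HU whose neighbourhood expands by 20% on inspiration
    print(mass_preserving_adjust(-700.0, 1.2))   # ~ -750 HU, i.e. closer to air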
NASA Astrophysics Data System (ADS)
Sadovnikov, A. V.; Odintsov, S. A.; Beginin, E. N.; Sheshukova, S. E.; Sharaevskii, Yu. P.; Nikitov, S. A.
2017-10-01
We demonstrate that nonlinear spin-wave transport in two laterally parallel magnetic stripes exhibits intensity-dependent power exchange between the adjacent spin-wave channels. By means of the Brillouin light scattering technique, we investigate collective nonlinear spin-wave dynamics in the presence of magnetodipolar coupling. The nonlinear intensity-dependent effect reveals itself in the spin-wave mode transformation and in a differential nonlinear spin-wave phase shift in each adjacent magnetic stripe. The proposed analytical theory, based on coupled Ginzburg-Landau equations, predicts a geometry that reduces the power required for all-magnonic switching. Very good agreement between calculation and experiment was found. In addition, micromagnetic and finite-element approaches have been used independently to study the nonlinear behavior of spin waves in adjacent stripes and the nonlinear transformation of the spatial profiles of spin-wave modes. Our results show that the proposed spin-wave coupling mechanism provides a basis for nonlinear magnonic circuits and opens perspectives for all-magnonic computing architectures.
First-principles study of excitonic effects in Raman intensities
NASA Astrophysics Data System (ADS)
Gillet, Yannick; Giantomassi, Matteo; Gonze, Xavier
2013-09-01
The ab initio prediction of Raman intensities for bulk solids usually relies on the hypothesis that the frequency of the incident laser light is much smaller than the band gap. However, when the photon frequency is a sizable fraction of the energy gap, or higher, resonance effects appear. In the case of silicon, when excitonic effects are neglected, the response of the solid to light increases by nearly three orders of magnitude in the range of frequencies between the static limit and the gap. When excitonic effects are taken into account, an additional tenfold increase in the intensity is observed. We include these effects using a finite-difference scheme applied on the dielectric function obtained by solving the Bethe-Salpeter equation. Our results for the Raman susceptibility of silicon show stronger agreement with experimental data compared with previous theoretical studies. For the sampling of the Brillouin zone, a double-grid technique is proposed, resulting in a significant reduction in computational effort.
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, and dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were shown to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
High performance computing and communications: Advancing the frontiers of information technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.
Landmark Detection in Orbital Images Using Salience Histograms
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Panetta, Julian; Schorghofer, Norbert; Greeley, Ronald; PendletonHoffer, Mary; Bunte, Melissa
2010-01-01
NASA's planetary missions have collected, and continue to collect, massive volumes of orbital imagery. The volume is such that it is difficult to manually review all of the data and determine its significance. As a result, images are indexed and searchable by location and date but generally not by their content. A new automated method analyzes images and identifies "landmarks," or visually salient features such as gullies, craters, dust devil tracks, and the like. This technique uses a statistical measure of salience derived from information theory, so it is not associated with any specific landmark type. It identifies regions that are unusual or that stand out from their surroundings, so the resulting landmarks are context-sensitive areas that can be used to recognize the same area when it is encountered again. A machine learning classifier is used to identify the type of each discovered landmark. Using a specified window size, an intensity histogram is computed for each such window within the larger image (sliding the window across the image). Next, a salience map is computed that specifies, for each pixel, the salience of the window centered at that pixel. The salience map is thresholded to identify landmark contours (polygons) using the upper quartile of salience values. Descriptive attributes are extracted for each landmark polygon: size, perimeter, mean intensity, standard deviation of intensity, and shape features derived from an ellipse fit.
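The salience computation lends itself to a compact sketch: build a sliding-window intensity histogram, score each window by an information-theoretic divergence from the whole-image histogram, and keep the upper quartile. The Python below is a rough illustration under those assumptions; the exact salience measure and window handling in the deployed system may differ.

import numpy as np

def salience_map(image, win=32, bins=16):
    """Per-pixel salience: KL divergence between the intensity histogram of a
    local window and the histogram of the whole image (illustrative measure)."""
    edges = np.linspace(image.min(), image.max() + 1e-9, bins + 1)
    global_h, _ = np.histogram(image, bins=edges)
    global_p = (global_h + 1.0) / (global_h.sum() + bins)   # Laplace smoothing

    h, w = image.shape
    sal = np.zeros((h, w))
    half = win // 2
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = image[i - half:i + half, j - half:j + half]
            ph, _ = np.histogram(patch, bins=edges)
            p = (ph + 1.0) / (ph.sum() + bins)
            sal[i, j] = np.sum(p * np.log(p / global_p))     # KL(window || image)
    return sal

def landmark_mask(image, win=32, bins=16):
    """Threshold the salience map at its upper quartile to get landmark regions."""
    sal = salience_map(image, win, bins)
    return sal >= np.percentile(sal, 75)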
Research on fatigue cracking growth parameters in asphaltic mixtures using computed tomography
NASA Astrophysics Data System (ADS)
Braz, D.; Lopes, R. T.; Motta, L. M. G.
2004-01-01
Distress of asphalt concrete pavement due to repeated bending from traffic loads has been a well-recognized problem in Brazil. If it is assumed that fatigue cracking growth is governed by the conditions at the crack tip, and that the crack tip conditions can be characterized by the stress intensity factor, then fatigue cracking growth as a function of the stress intensity range ΔK can be determined. The computed tomography technique is used to detect crack evolution in asphaltic mixtures submitted to fatigue tests. Fatigue tests under conditions of controlled stress were carried out using diametral compression equipment and repeated loading. The aim of this work is to image several specimens at different stages of the fatigue tests. In preliminary studies it was noted that the trajectory of a crack was influenced by the existence of voids in the originally unloaded specimens. Cracks were first observed in the central region of a specimen, propagating toward the extremities. Analysis of the plots of fatigue cracking growth rate (dc/dN) as a function of the stress intensity factor range (ΔK) shows that the curves exhibit practically the same behavior for all specimens at the same level of the static tension rupture stress. The experimental values obtained for the constants A and n (of the Paris-Erdogan law) show good agreement with the results obtained by Liang and Zhou.
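The Paris-Erdogan constants mentioned above are conventionally obtained from a log-log regression of the crack growth rate against the stress intensity range. A minimal sketch follows; the example numbers are purely illustrative, not data from the study.

import numpy as np

def fit_paris_erdogan(delta_K, dc_dN):
    """Fit the Paris-Erdogan law dc/dN = A * (dK)^n by linear regression
    of log(dc/dN) on log(dK); returns (A, n)."""
    logK = np.log(np.asarray(delta_K, dtype=float))
    logr = np.log(np.asarray(dc_dN, dtype=float))
    n, logA = np.polyfit(logK, logr, 1)
    return np.exp(logA), n

# Illustrative use with made-up numbers:
# A, n = fit_paris_erdogan([5, 7, 10, 14], [1e-8, 4e-8, 1.5e-7, 5e-7])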
New Approaches For Asteroid Spin State and Shape Modeling From Delay-Doppler Radar Images
NASA Astrophysics Data System (ADS)
Raissi, Chedy; Lamee, Mehdi; Mosiane, Olorato; Vassallo, Corinne; Busch, Michael W.; Greenberg, Adam; Benner, Lance A. M.; Naidu, Shantanu P.; Duong, Nicholas
2016-10-01
Delay-Doppler radar imaging is a powerful technique to characterize the trajectories, shapes, and spin states of near-Earth asteroids, and has yielded detailed models of dozens of objects. Reconstructing objects' shapes and spins from delay-Doppler data is a computationally intensive inversion problem. Since the 1990s, delay-Doppler data have been analyzed using the SHAPE software. SHAPE performs sequential single-parameter fitting, and requires considerable computer runtime and human intervention (Hudson 1993, Magri et al. 2007). Recently, multiple-parameter fitting algorithms have been shown to invert delay-Doppler datasets more efficiently (Greenberg & Margot 2015), decreasing runtime while improving accuracy. However, extensive human oversight of the shape modeling process is still required. We have explored two new techniques to better automate delay-Doppler shape modeling: Bayesian optimization and a machine-learning neural network. One of the most time-intensive steps of the shape modeling process is to perform a grid search to constrain the target's spin state. We have implemented a Bayesian optimization routine that uses SHAPE to autonomously search the space of spin-state parameters. To test the efficacy of this technique, we compared it to results with human-guided SHAPE for asteroids 1992 UY4, 2000 RS11, and 2008 EV5. Bayesian optimization yielded similar spin state constraints with a factor of 3 less computer runtime. The shape modeling process could be further accelerated using a deep neural network to replace iterative fitting. We have implemented a neural network with a variational autoencoder (VAE), using a subset of known asteroid shapes and a large set of synthetic radar images as inputs to train the network. Conditioning the VAE in this manner allows the user to give the network a set of radar images and get a 3D shape model as an output. Additional development will be required to train a network to reliably render shapes from delay-Doppler images. This work was supported by NASA Ames, NVIDIA, Autodesk and the SETI Institute as part of the NASA Frontier Development Lab program.
Frequency-shifted shearing interferometry technique for probing pre-plasma expansion in ultra-intense laser experiments
2014-07-17
Ultra-intense laser-matter interaction experiments (>10^18 W/cm^2) with dense targets are highly sensitive to the effect of laser "noise" (in the form of pre-pulses) preceding the ...
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and then the location of each point in three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
Digital image classification approach for estimating forest clearing and regrowth rates and trends
NASA Technical Reports Server (NTRS)
Sader, Steven A.
1987-01-01
A technique is presented to monitor vegetation changes for a selected study area in Costa Rica. A normalized difference vegetation index was computed for three dates of Landsat satellite data, and a modified parallelepiped classifier was employed to generate a multitemporal greenness image representing all three dates. A second-generation image was created by partitioning the intensity levels at each date into high, medium, and low, thereby reducing the number of classes to 21. A sampling technique was applied to describe forest and other land cover change occurring between time periods, based on interpretation of aerial photography that closely matched the dates of satellite acquisition. Comparison of the Landsat-derived classes with the photo-interpreted sample areas can provide a basis for evaluating the satellite monitoring technique and the accuracy of estimating forest clearing and regrowth rates and trends.
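A minimal sketch of the two image products described above, the normalized difference vegetation index and its partition into low/medium/high greenness levels, is given below. The tercile thresholds are an assumption made here for illustration; the study's actual break points are not given in the abstract.

import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def greenness_level(ndvi_img):
    """Partition an NDVI image into low/medium/high greenness (terciles)."""
    lo, hi = np.percentile(ndvi_img, [33.3, 66.7])
    return np.digitize(ndvi_img, [lo, hi])   # 0 = low, 1 = medium, 2 = high

# Stacking the level codes of the three dates yields a small set of
# multitemporal change classes (e.g. high at an early date and low at a later
# date suggests clearing; the reverse suggests regrowth).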
Near Real-Time Imaging of the Galactic Plane with BATSE
NASA Technical Reports Server (NTRS)
Harmon, B. A.; Zhang, S. N.; Robinson, C. R.; Paciesas, W. S.; Barret, D.; Grindlay, J.; Bloser, P.; Monnelly, C.
1997-01-01
The discovery of new transient or persistent sources in the hard X-ray regime with the BATSE Earth occultation Technique has been limited previously to bright sources of about 200 mCrab or more. While monitoring known source locations is not a problem to a daily limiting sensitivity of about 75 mCrab, the lack of a reliable background model forces us to use more intensive computer techniques to find weak, previously unknown emission from hard X-ray/gamma sources. The combination of Radon transform imaging of the galactic plane in 10 by 10 degree fields and the Harvard/CFA-developed Image Search (CBIS) allows us to straightforwardly search the sky for candidate sources in a +/- 20 degree latitude band along the plane. This procedure has been operating routinely on a weekly basis since spring 1997. We briefly describe the procedure, then concentrate on the performance aspects of the technique and candidate source results from the search.
FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation
NASA Astrophysics Data System (ADS)
Veltri, M.
2016-09-01
This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution may be required to identify areas with potential for fatigue damage initiation. Early detection of fatigue-critical areas can drive a simplification of the problem size, leading to an appreciable improvement in solution time and model handling while allowing the critical areas to be processed in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.
Machine-Learning Techniques Applied to Antibacterial Drug Discovery
Durrant, Jacob D.; Amaro, Rommie E.
2014-01-01
The emergence of drug-resistant bacteria threatens to catapult humanity back to the pre-antibiotic era. Even now, multi-drug-resistant bacterial infections annually result in millions of hospital days, billions in healthcare costs, and, most importantly, tens of thousands of lives lost. As many pharmaceutical companies have abandoned antibiotic development in search of more lucrative therapeutics, academic researchers are uniquely positioned to fill the resulting vacuum. Traditional high-throughput screens and lead-optimization efforts are expensive and labor intensive. Computer-aided drug discovery techniques, which are cheaper and faster, can accelerate the identification of novel antibiotics in an academic setting, leading to improved hit rates and faster transitions to pre-clinical and clinical testing. The current review describes two machine-learning techniques, neural networks and decision trees, that have been used to identify experimentally validated antibiotics. We conclude by describing the future directions of this exciting field. PMID:25521642
Camera-enabled techniques for organic synthesis
Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L
2013-01-01
A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820
Advanced processing for high-bandwidth sensor systems
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.
2000-11-01
Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.
Data intensive computing at Sandia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew T.
2010-09-01
Data-intensive computing is parallel computing in which algorithms and software are designed around efficient access and traversal of a data set, and in which hardware requirements are dictated by data size as much as by desired run times, usually distilling compact results from massive data.
CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction
Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.
2012-01-01
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient datasets. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
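The moment-matching step can be pictured with a short patch-statistics sketch: each CBCT voxel is re-expressed so that the local mean and standard deviation match those of the aligned CT. This is only a simplified, CPU-side illustration of the idea; in DISC the correction is interleaved with the demons iterations and runs on the GPU, and the window size and epsilon here are arbitrary.

import numpy as np
from scipy.ndimage import uniform_filter

def correct_patch_intensity(cbct, ct, win=7, eps=1e-6):
    """Match the first and second moments of CBCT intensities, computed in a
    local patch around each voxel, to those of the aligned CT (sketch only)."""
    cbct = np.asarray(cbct, dtype=float)
    ct = np.asarray(ct, dtype=float)
    mean_cb = uniform_filter(cbct, size=win)
    mean_ct = uniform_filter(ct, size=win)
    var_cb = uniform_filter(cbct**2, size=win) - mean_cb**2
    var_ct = uniform_filter(ct**2, size=win) - mean_ct**2
    std_cb = np.sqrt(np.clip(var_cb, 0.0, None)) + eps
    std_ct = np.sqrt(np.clip(var_ct, 0.0, None))
    # standardise with the local CBCT moments, re-express with the CT moments
    return (cbct - mean_cb) / std_cb * std_ct + mean_ct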
Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato
2015-03-08
The objective of this study was to improve the visibility of anatomical details by applying off-line postimage processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were implemented to sample images and their visual appearances confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with other 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The mean and ranges of the average scores for three radiologists were characterized for each of the developed technique and imaging system. The Mann-Whitney U-test was used to test the difference of details visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity values adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultations with the radiologists.
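As a rough illustration of the kind of post-processing and statistical comparison described (the study itself used MATLAB), one of the simpler recipes, an intensity-value adjustment followed by spatial linear filtering, together with the Mann-Whitney U test on observer scores, might look as follows; the percentile limits and kernel size are arbitrary choices for the sketch.

import numpy as np
from scipy import ndimage, stats

def enhance_cr(image, low_pct=1.0, high_pct=99.0, size=3):
    """Intensity-value adjustment (contrast stretch) followed by a spatial
    linear (mean) filter, as one possible off-line enhancement step."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = np.clip((image - lo) / (hi - lo), 0.0, 1.0)
    return ndimage.uniform_filter(stretched, size=size)

def compare_scores(scores_processed, scores_default):
    """Mann-Whitney U test of image-quality scores: processed vs. default."""
    return stats.mannwhitneyu(scores_processed, scores_default,
                              alternative='two-sided')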
NASA Astrophysics Data System (ADS)
Zlotnik, Sergio
2017-04-01
The information provided by visualisation environments can be greatly increased if the data shown are combined with relevant physical processes and the user is allowed to interact with those processes. This is particularly interesting in VR environments, where the user has a deep interplay with the data. For example, a geological seismic line in a 3D "cave" shows information on the geological structure of the subsoil. The available information could be enhanced with the thermal state of the region under study, with water-flow patterns in porous rocks, or with rock displacements under some stress conditions. The information added by the physical processes is usually the output of some numerical technique applied to solve a partial differential equation (PDE) that describes the underlying physics. Many techniques are available to obtain numerical solutions of PDEs (e.g. finite elements, finite volumes, finite differences). However, all these traditional techniques require very large computational resources (particularly in 3D), making them useless in a real-time visualisation environment such as VR, because the time required to compute a solution is measured in minutes or even hours. We present here a novel alternative for the resolution of PDE-based problems that is able to provide 3D solutions for a very large family of problems in real time. That is, the solution is evaluated in one thousandth of a second, making the solver ideal to be embedded into VR environments. Based on model order reduction ideas, the proposed technique divides the computational work into a computationally intensive "offline" phase, which is run only once, and an "online" phase that allows the real-time evaluation of any solution within a family of problems. Preliminary examples of real-time solutions of complex PDE-based problems will be presented, including thermal problems, flow problems, wave problems and some simple coupled problems.
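One simple way to realise such an offline/online split is proper orthogonal decomposition of precomputed snapshots followed by interpolation of the reduced coefficients at query time. The actual reduction technique used by the author is not specified in the abstract, so the sketch below is only indicative; it assumes a single scalar parameter and full-order snapshots obtained elsewhere.

import numpy as np

# ---- offline (expensive, run once): full-order solutions u(x; p_i) for a set
# of parameter values p_i, e.g. from a finite-element solver
def build_pod_basis(snapshots, tol=1e-6):
    """snapshots: (n_dof, n_samples) array of full-order solutions."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :k]                         # reduced basis, n_dof x k

# ---- online (cheap, interactive rates): interpolate reduced coefficients
def online_evaluate(basis, params, snapshots, p_query):
    """Project snapshots onto the basis and linearly interpolate the reduced
    coefficients over a 1-D parameter; returns an approximate full field."""
    coeffs = basis.T @ snapshots            # k x n_samples
    params = np.asarray(params, dtype=float)
    order = np.argsort(params)
    q = np.array([np.interp(p_query, params[order], c[order]) for c in coeffs])
    return basis @ q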
Integration of scheduling and discrete event simulation systems to improve production flow planning
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.
2016-08-01
The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed for eliminating problems associated with the complexity of the model and the labour-intensive and time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. This article presents an essentially black-box, foolproof implementation for MathWorks' MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
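The randomized low-rank recipe underlying such implementations (random range finder, optional power iterations, then a small exact SVD) can be written in a few lines of NumPy. This is the generic textbook scheme, not the published Algorithm 971 code.

import numpy as np

def randomized_svd(A, k, n_oversample=10, n_iter=2, seed=0):
    """Basic randomized truncated SVD: sketch the range of A with a Gaussian
    test matrix, refine it with power iterations, then take an exact SVD of
    the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + n_oversample))
    Y = A @ Omega
    for _ in range(n_iter):                 # power iterations sharpen the range
        Y = A @ (A.T @ Y)                   # (re-orthonormalising here improves stability)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                             # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]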
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
Identification of lithium hydride and its hydrolysis products with neutron imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garlea, Elena; King, Martin O.; Galloway, E. C.
In this study, lithium hydride (LiH) and its hydrolysis products were investigated non-destructively with neutron radiography and neutron computed tomography. Relative neutron transmission intensities (I/I₀) were measured for LiOH, Li₂O and LiH, and their linear attenuation coefficients were calculated from these data. We show that ⁷Li is necessary for creating large differences in I/I₀ for facile identification of these compounds. The thermal decomposition of LiOH to Li₂O was also observed with neutron radiography. Computed tomography shows that the samples were fairly homogeneous, with very few macroscopic defects. Lastly, the results shown here demonstrate the feasibility of observing LiH hydrolysis with neutron imaging techniques in real time.
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy of, and the data encoding possible in, the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.
NASA Technical Reports Server (NTRS)
Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt
1991-01-01
The USDA presently uses labor-intensive photographic interpretation procedures to delineate large geographical areas into manageable size sampling units for the estimation of domestic crop and livestock production. Computer software to automate the boundary delineation procedure, called the computer-assisted stratification and sampling (CASS) system, was developed using a Hewlett Packard color-graphics workstation. The CASS procedures display Thematic Mapper (TM) satellite digital imagery on a graphics display workstation as the backdrop for the onscreen delineation of sampling units. USGS Digital Line Graph (DLG) data for roads and waterways are displayed over the TM imagery to aid in identifying potential sample unit boundaries. Initial analysis conducted with three Missouri counties indicated that CASS was six times faster than the manual techniques in delineating sampling units.
Recent developments in structural proteomics for protein structure determination.
Liu, Hsuan-Liang; Hsu, Jyh-Ping
2005-05-01
The major challenges in structural proteomics include identifying all the proteins on the genome-wide scale, determining their structure-function relationships, and outlining the precise three-dimensional structures of the proteins. Protein structures are typically determined by experimental approaches such as X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. However, the knowledge of three-dimensional space by these techniques is still limited. Thus, computational methods such as comparative and de novo approaches and molecular dynamic simulations are intensively used as alternative tools to predict the three-dimensional structures and dynamic behavior of proteins. This review summarizes recent developments in structural proteomics for protein structure determination; including instrumental methods such as X-ray crystallography and NMR spectroscopy, and computational methods such as comparative and de novo structure prediction and molecular dynamics simulations.
Reclaimed mineland curve number response to temporal distribution of rainfall
Warner, R.C.; Agouridis, C.T.; Vingralek, P.T.; Fogle, A.W.
2010-01-01
The curve number (CN) method is a common technique to estimate runoff volume, and it is widely used in coal mining operations such as those in the Appalachian region of Kentucky. However, very little CN data are available for watersheds disturbed by surface mining and then reclaimed using traditional techniques. Furthermore, as the CN method does not readily account for variations in infiltration rates due to varying rainfall distributions, the selection of a single CN value to encompass all temporal rainfall distributions could lead engineers to substantially under- or over-size water detention structures used in mining operations or other land uses such as development. Using rainfall and runoff data from a surface coal mine located in the Cumberland Plateau of eastern Kentucky, CNs were computed for conventionally reclaimed lands. The effects of temporal rainfall distributions on CNs was also examined by classifying storms as intense, steady, multi-interval intense, or multi-interval steady. Results indicate that CNs for such reclaimed lands ranged from 62 to 94 with a mean value of 85. Temporal rainfall distributions were also shown to significantly affect CN values with intense storms having significantly higher CNs than multi-interval storms. These results indicate that a period of recovery is present between rainfall bursts of a multi-interval storm that allows depressional storage and infiltration rates to rebound. © 2010 American Water Resources Association.
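For reference, the curve number method converts a rainfall depth to a runoff depth through the standard SCS relations. A minimal sketch follows (depths in inches, with the customary initial-abstraction ratio of 0.2); only the CN value echoes the study, the rainfall depth is an arbitrary example.

def scs_runoff(P, CN, ia_ratio=0.2):
    """SCS curve-number runoff depth (inches) for rainfall depth P (inches):
    S = 1000/CN - 10, Ia = ia_ratio * S, Q = (P - Ia)^2 / (P - Ia + S)."""
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# e.g. a 2-inch storm with the reported mean CN of 85:
# scs_runoff(2.0, 85)  ->  about 0.8 in of runoff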
CCD high-speed videography system with new concepts and techniques
NASA Astrophysics Data System (ADS)
Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang
1997-05-01
A novel CCD high-speed videography system based on new concepts and techniques was recently developed at Zhejiang University. The system can send a series of short flash pulses to the moving object. All of the parameters, such as flash number, flash duration, flash interval, flash intensity and flash color, can be controlled by the computer according to need. A series of moving-object images frozen by the flash pulses, carrying information about the moving object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be frozen, recognized and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals, or written to CD. The highest videography frequency is 30,000 images per second. The shortest image freezing time is several microseconds. The system has been applied in a wide range of fields, including energy, chemistry, medicine, biological engineering, aerodynamics, explosion, multi-phase flow, mechanics, vibration, athletic training, weapon development and national defense engineering. It can also be used on production lines for online, real-time monitoring and control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adur, Rohan, E-mail: adur@physics.osu.edu; Du, Chunhui; Manuilov, Sergei A.
2015-05-07
The dipole field from a probe magnet can be used to localize a discrete spectrum of standing spin wave modes in a continuous ferromagnetic thin film without lithographic modification of the film. Obtaining the resonance field for a localized mode is not trivial owing to the effect of the confined and inhomogeneous magnetization precession. We compare the results of micromagnetic and analytic methods to find the resonance field of localized modes in a ferromagnetic thin film, and investigate the accuracy of these methods by comparing with a numerical minimization technique that assumes Bessel function modes with pinned boundary conditions. We find that the micromagnetic technique, while computationally more intensive, reveals that the true magnetization profiles of localized modes are similar to Bessel functions with gradually decaying dynamic magnetization at the mode edges. We also find that an analytic solution, which is simple to implement and computationally much faster than other methods, accurately describes the resonance field of localized modes when exchange fields are negligible, demonstrating the accessibility of localized mode analysis.
Towards clinical computed ultrasound tomography in echo-mode: Dynamic range artefact reduction.
Jaeger, Michael; Frenz, Martin
2015-09-01
Computed ultrasound tomography in echo-mode (CUTE) allows imaging the speed of sound inside tissue using hand-held pulse-echo ultrasound. This technique is based on measuring the changing local phase of beamformed echoes when changing the transmit beam steering angle. Phantom results have shown a spatial resolution and contrast that could qualify CUTE as a promising novel diagnostic modality in combination with B-mode ultrasound. Unfortunately, the large intensity range of several tens of dB that is encountered in clinical images poses difficulties to echo phase tracking and results in severe artefacts. In this paper we propose a modification to the original technique by which more robust echo tracking can be achieved, and we demonstrate in phantom experiments that dynamic range artefacts are largely eliminated. Dynamic range artefact reduction also allowed for the first time a clinical implementation of CUTE with sufficient contrast to reproducibly distinguish the different speed of sound in different tissue layers of the abdominal wall and the neck. Copyright © 2015. Published by Elsevier B.V.
Studying Suspended Sediment Mechanism with Two-Phase PIV
NASA Astrophysics Data System (ADS)
Matinpour, H.; Atkinson, J. F.; Bennett, S. J.; Guala, M.
2017-12-01
Suspended sediment transport affects soil erosion, agriculture and water resources quality. Turbulent diffusion is the primary mechanism maintaining sediments in suspension. Although an extensive previous literature has examined the interactions between turbulent motion and suspended sediment, the mechanism of sediment suspension is still poorly understood. In this study, we investigate suspension of sediments as two distinct phases: one phase of sediments and another phase of fluid with turbulent motions. We designed and deployed a state-of-the-art two-phase PIV measurement technique to discriminate these two phases and acquire the velocities of each phase separately and simultaneously. The technique we have developed employs a computer-vision based method, which enables us to discriminate sediment particles from fluid tracer particles based on two thresholds: dissimilar particle sizes and different particle intensities. Results indicate that fluid turbulence decreases in the presence of suspended sediments. Obtaining consecutive images of the sediment phase alone enables us to compute the fluctuating sediment concentration. These results improve understanding of the complex interaction between the fluctuating velocities and the fluctuations of the associated mass, and allow turbulent viscosity to be compared with turbulent eddy diffusivity experimentally.
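The two-threshold phase discrimination can be sketched with standard image-labelling tools: bright, large blobs are treated as sediment grains and small blobs as fluid tracers. The snippet below only illustrates that idea, it is not the study's computer-vision pipeline, and the thresholds are placeholders.

import numpy as np
from scipy import ndimage

def split_phases(frame, intensity_thresh, area_thresh):
    """Separate a PIV frame into sediment and tracer images using two
    thresholds: blob brightness and blob size (pixel area)."""
    binary = frame > intensity_thresh
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    big_labels = np.flatnonzero(np.asarray(areas) >= area_thresh) + 1
    big = np.isin(labels, big_labels)

    sediment = np.where(big, frame, 0)             # large, bright blobs
    tracers = np.where(binary & ~big, frame, 0)    # small blobs (fluid tracers)
    return sediment, tracers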
Hu, Yu-Chen
2018-01-01
The emergence of smart Internet of Things (IoT) devices has strongly favored the realization of smart homes in the downstream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. For residential customers, maintaining a balance between energy consumption cost and comfort satisfaction is a challenge in DR implementation. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can further be featured with edge computing. In contrast with cloud computing, edge computing, an emerging trend in engineering technology that drives computing capability to the IoT edge of the Internet, addresses bandwidth-intensive content and latency-sensitive applications among sensors and central data centers through data analytics at or near the source of the data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. Constrained PSO is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed method can re-shape appliance loads in response to DR signals. Moreover, a reduction in peak power consumption of 13.97% is achieved. PMID:29702607
NASA Astrophysics Data System (ADS)
Wang, Rui
It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.
Lee, Ki-Wook; Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Lim, Sang-Min; Chang, Seok Woo; Ha, Byung-Hyun; Zhu, Qiang; Kum, Kee-Yeon
2014-03-01
Micro-computed tomography (MCT) shows detailed root canal morphology that is not seen with traditional tooth clearing. However, alternative image reformatting techniques in MCT involving 2-dimensional (2D) minimum intensity projection (MinIP) and 3-dimensional (3D) volume-rendering reconstruction have not been directly compared with clearing. The aim was to compare alternative image reformatting techniques in MCT with tooth clearing on the mesiobuccal (MB) root of maxillary first molars. Eighteen maxillary first molar MB roots were scanned, and 2D MinIP and 3D volume-rendered images were reconstructed. Subsequently, the same MB roots were processed by traditional tooth clearing. Images from 2D, 3D, 2D + 3D, and clearing techniques were assessed by 4 endodontists to classify canal configuration and to identify fine anatomic structures such as accessory canals, intercanal communications, and loops. All image reformatting techniques in MCT showed detailed configurations and numerous fine structures, such that none were classified as simple type I or II canals; several were classified as types III and IV according to Weine classification or types IV, V, and VI according to Vertucci; and most were nonclassifiable because of their complexity. The clearing images showed less detail, few fine structures, and numerous type I canals. Classification of canal configuration was in 100% intraobserver agreement for all 18 roots visualized by any of the image reformatting techniques in MCT but for only 4 roots (22.2%) classified according to Weine and 6 (33.3%) classified according to Vertucci, when using the clearing technique. The combination of 2D MinIP and 3D volume-rendered images showed the most detailed canal morphology and fine anatomic structures. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.
Das, Biman; Drake, Eli; Jack, John
2004-02-01
Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where, under heavy gain saturation, the total output approaches a constant intensity, although the intensity of any single mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments needed for the computation of the trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves high-quality results like the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. In addition, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
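The core of the separable scheme is a 1-D weighted-least-squares smoothing whose normal equations are tridiagonal and therefore solvable in linear time; alternating this solve over rows and columns with edge-aware weights approximates the global smoother. The sketch below follows that structure only loosely (for instance, it does not vary the regularisation across passes as the paper does), so it should be read as illustrative rather than as the authors' algorithm.

import numpy as np

def wls_smooth_1d(f, w, lam):
    """Solve min_u sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2.
    The normal equations are tridiagonal, solved with the Thomas algorithm."""
    n = len(f)
    lw = lam * np.asarray(w, dtype=float)      # w[i] couples samples i and i+1
    diag = np.ones(n); upper = np.zeros(n); lower = np.zeros(n)
    diag[:-1] += lw; diag[1:] += lw
    upper[:-1] = -lw; lower[1:] = -lw

    c = np.empty(n); d = np.empty(n)           # forward elimination
    c[0] = upper[0] / diag[0]
    d[0] = f[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m
        d[i] = (f[i] - lower[i] * d[i - 1]) / m
    u = np.empty(n)                            # back substitution
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def fast_global_smooth(img, guide, lam=30.0, sigma=0.1, n_pass=3):
    """Separable approximation: alternately smooth all rows and all columns
    of a 2-D image, with edge-stopping weights from a guidance image."""
    out = img.astype(float)
    for _ in range(n_pass):
        for axis in (0, 1):
            out = np.moveaxis(out, axis, 0)
            g = np.moveaxis(guide.astype(float), axis, 0)
            for r in range(out.shape[0]):
                w = np.exp(-np.abs(np.diff(g[r])) / sigma)   # edge-aware weights
                out[r] = wls_smooth_1d(out[r], w, lam)
            out = np.moveaxis(out, 0, axis)
    return out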
Implementation issues of the nearfield equivalent source imaging microphone array
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen
2011-01-01
This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom for far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying the broadband and non-stationary noise produced by these sources.
Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging
NASA Astrophysics Data System (ADS)
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-04-01
The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions such as the source intensity (flux), the angular positions of the measurements, the object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to obtain the best parametric images. Angular measurements beyond this number only spread the total dose across the measurements without improving or worsening the CRLB, although the added measurements may improve parametric images by reducing estimation bias. Next, using the CRLB, we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
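As an illustration of how such a bound can be evaluated numerically, the hedged sketch below computes the CRLB for a generic parametric forward model with independent Gaussian measurement noise; the function names, the Gaussian noise assumption and the finite-difference Jacobian are illustrative and do not reproduce the paper's photon-statistics model.

```python
import numpy as np

def crlb_variances(forward_model, theta, angles, sigma, eps=1e-6):
    """Hedged sketch: Cramer-Rao lower bound on each parameter's variance for a
    generic model with independent Gaussian noise. forward_model(theta, angles)
    returns the expected intensities at the analyzer-crystal angles; sigma is
    the per-measurement noise standard deviation (scalar or array)."""
    theta = np.asarray(theta, dtype=float)
    f0 = np.asarray(forward_model(theta, angles), dtype=float)
    # Finite-difference Jacobian of the forward model with respect to the parameters.
    J = np.zeros((f0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps
        J[:, j] = (np.asarray(forward_model(tp, angles)) - f0) / eps
    sig2 = np.atleast_1d(np.asarray(sigma, dtype=float)) ** 2
    fisher = J.T @ (J / sig2[:, None])       # sum_k (1/sigma_k^2) grad f_k grad f_k^T
    return np.diag(np.linalg.inv(fisher))    # CRLB variance for each parameter
```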
A Novel Approach for Reducing Rotor Tip-Clearance Induced Noise in Turbofan Engines
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Li, Fei; Choudhari, Meelan
2001-01-01
Rotor tip-clearance induced noise, both in the form of rotor self noise and rotor-stator interaction noise, constitutes a significant component of total fan noise. Innovative yet cost-effective techniques to suppress rotor-generated noise are, therefore, of foremost importance for improving the noise signature of turbofan engines. To that end, the feasibility of a passive porous treatment strategy to positively modify the tip-clearance flow field is addressed. The present study is focused on accurate viscous flow calculations of the baseline and the treated rotor flow fields. Detailed comparison between the computed baseline solution and experimental measurements shows excellent agreement. Tip-vortex structure, trajectory, strength, and other relevant aerodynamic quantities are extracted from the computed database. Extensive comparison between the untreated and treated tip-clearance flow fields is performed. The effectiveness of the porous treatment for altering the rotor-tip vortex flow field in general, and reducing the intensity of the tip vortex in particular, is demonstrated. In addition, the simulated flow field for the treated tip clearly shows that substantial reduction in the intensity of both the shear layer roll-up and boundary layer separation on the wall is achieved.
Application of ERTS-1 data to the protection and management of New Jersey's coastal environment
NASA Technical Reports Server (NTRS)
Yunghans, R. S.; Feinberg, E. B.; Wobber, F. J.; Mairs, R. L. (Principal Investigator); Macomber, R. T.; Stanczuk, D.; Stitt, J. A.
1974-01-01
The author has identified the following significant results. Rapid access to ERTS data was provided by NASA GSFC for the February 26, 1974 overpass of the New Jersey test site. Forty-seven hours following the overpass, computer-compatible tapes were ready for processing at EarthSat. The finished product was ready just 60 hours following the overpass and was delivered to the New Jersey Department of Environmental Protection. This operational demonstration has been successful in convincing NJDEP of the worth of ERTS as an operational monitoring and enforcement tool of significant value to the State. An erosion/accretion severity index has been developed for the New Jersey shore case study area. Computerized analysis techniques have been used for monitoring offshore waste disposal dumping locations, drift vectors, and dispersion rates in the New York Bight area. A computer shade print of the area was used to identify intensity levels of acid waste. A Litton intensity slice print was made to provide a graphic presentation of dispersion characteristics and the dump extent. Continued monitoring will lead to the recommendation and justification of permanent dumping sites which pose no threat to water quality in nearshore environments.
A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows
NASA Technical Reports Server (NTRS)
Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert
1996-01-01
The objective of this work was to enhance the predictive capability of widely used computational fluid dynamics (CFD) codes through the use of solution-adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensities, and to adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack, exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach 2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets
2010-01-01
Background: Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results: We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reduction and loop unrolling techniques, with special attention to 32-bit multiplies and the limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions: The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics. PMID:20064262
An Efficient VLSI Architecture of the Enhanced Three Step Search Algorithm
NASA Astrophysics Data System (ADS)
Biswas, Baishik; Mukherjee, Rohan; Saha, Priyabrata; Chakrabarti, Indrajit
2016-09-01
The intense computational complexity of any video codec is largely due to the motion estimation unit. The Enhanced Three Step Search is a popular technique that can be adopted for fast motion estimation. This paper proposes a novel VLSI architecture for the implementation of the Enhanced Three Step Search technique. A new addressing mechanism has been introduced which enhances the speed of operation and reduces the area requirements. The proposed architecture, when implemented in Verilog HDL on Virtex-5 technology and synthesized using the Xilinx ISE Design Suite 14.1, achieves a critical path delay of 4.8 ns while the area comes out to be 2.9K gate equivalents. It can be incorporated in commercial devices such as smartphones, camcorders, and video conferencing systems.
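For reference, a minimal Python sketch of the classic three-step search that the enhanced variant builds on is given below; the enhanced version's extra small-motion checks near the search origin are omitted, and the block and step sizes are illustrative.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference window."""
    h, w = block.shape
    return int(np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum())

def three_step_search(block, ref, y0, x0, step=4):
    """Hedged sketch of the classic three-step search; returns the (dy, dx)
    motion vector minimizing the SAD cost around the starting position."""
    h, w = block.shape
    cy, cx = y0, x0
    best = sad(block, ref, cy, cx)
    while step >= 1:
        best_y, best_x = cy, cx
        for dy in (-step, 0, step):          # evaluate the 9 points around the center
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                    cost = sad(block, ref, y, x)
                    if cost < best:
                        best, best_y, best_x = cost, y, x
        cy, cx = best_y, best_x              # recenter on the best point, halve the step
        step //= 2
    return cy - y0, cx - x0
```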
Assessment and Validation of Machine Learning Methods for Predicting Molecular Atomization Energies.
Hansen, Katja; Montavon, Grégoire; Biegler, Franziska; Fazli, Siamac; Rupp, Matthias; Scheffler, Matthias; von Lilienfeld, O Anatole; Tkatchenko, Alexandre; Müller, Klaus-Robert
2013-08-13
The accurate and reliable prediction of properties of molecules typically requires computationally intensive quantum-chemical calculations. Recently, machine learning techniques applied to ab initio calculations have been proposed as an efficient approach for describing the energies of molecules in their given ground-state structure throughout chemical compound space (Rupp et al. Phys. Rev. Lett. 2012, 108, 058301). In this paper we outline a number of established machine learning techniques and investigate the influence of the molecular representation on the methods' performance. The best methods achieve prediction errors of 3 kcal/mol for the atomization energies of a wide variety of molecules. Rationales for this performance improvement are given, together with pitfalls and challenges encountered when applying machine learning approaches to the prediction of quantum-mechanical observables.
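One representative method from this family is kernel ridge regression on a vectorized molecular descriptor. The hedged NumPy sketch below illustrates that approach under the assumption of a Gaussian kernel and descriptors such as sorted Coulomb-matrix entries; the hyperparameters are chosen only for illustration and would normally be set by cross-validation.

```python
import numpy as np

def krr_fit(X, y, sigma=100.0, lam=1e-8):
    """Hedged sketch: Gaussian-kernel ridge regression of atomization energies
    on molecular descriptors X (shape: n_molecules x n_features)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))                         # Gaussian kernel matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)         # regularized kernel weights
    return alpha

def krr_predict(X_train, alpha, X_new, sigma=100.0):
    """Predict energies for new descriptors using the fitted kernel weights."""
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ alpha
```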
NASA Astrophysics Data System (ADS)
Ondogan, Ziynet; Pamuk, Oktay; Ondogan, Ece Nuket; Ozguney, Arif
2005-11-01
Denim trousers, commonly known as "blue jeans", have maintained their popularity for many years. To support customers' purchasing decisions and address their aesthetic tastes, companies have in recent years been developing various techniques to improve the visual aspects of denim fabrics. These techniques mainly include printing on fabrics, embroidery and washing the final product; in particular, fraying certain areas of the fabric by sanding and stone washing to create designs is a popular technique. However, because of certain inconveniences caused by these procedures, and in response to growing demand, research is underway to obtain a similar appearance under better-quality and more advantageous manufacturing conditions. The laser is a source of energy that can be directed at desired objects and whose power and intensity can be easily controlled, and it can be used to cut a great variety of materials, from metal to fabric. Starting from this point, we considered it possible to transfer designs onto the surface of textile material by directing the laser at the material at reduced intensity, changing the dye molecules in the fabric and altering its colour quality values. This study mainly deals with a machine specially designed to use laser beams to transfer pictures, figures and graphics of the desired variety, size and intensity onto all kinds of surfaces used in textile manufacturing, such as knitted and woven fabrics, leather, etc., at the desired precision and without damaging the texture of the material. In the designed system, computer-controlled laser beams are used to change the colour of the dye material on the textile surface by directing the beams, at the desired wavelength and intensity, onto the textile surfaces selected for the application. For this purpose, the system uses a laser beam source that can reach the required initial power level and can be controlled by means of a computer interface; reflecting mirrors that direct the beam along two axes; a galvanometer comprising an optical aperture; and a computer program that transfers images in standard formats to the galvanometer control card. Developing new designs on the computer and transferring them onto textile surfaces not only increases and facilitates production in a more practical manner, but also makes it possible to create identical designs. This means serial manufacturing of products at a standard quality and with increased added value. Moreover, creating textile designs with the laser also contributes to the value of the product from the consumer's point of view, because, unlike the sanding and stoning processes, it does not cause any wearing off or deformation of the fabric texture. Another advantage of this system is that it gives the product a richer look by causing textile surfaces to wrinkle and become three-dimensional through deformation, and it also makes it possible to create pictures and patterns on leather and synthetic fabrics by means of heat. As for the results of the study, the first step was to prepare 40 pairs of denim trousers, half of which were prepared manually and the other half using the laser beam. Time studies were made at every step of the production. To determine the degree of abrasion of the trousers in the design applications, tensile strength and tensile extension tests were conducted on all the trousers.
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after the zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute differences (SAD) scheme that is based on most-significant-bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and that most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of the motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only a 0.5 dB to 1.5 dB loss in PSNR.
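To make the SAD idea concrete, the sketch below shows one hedged software analogue of a truncation-based SAD: each absolute difference is saturated to its low-order bits, on the premise that large differences rarely matter for the winning block. The bit width and the saturating behaviour are assumptions for illustration; the dissertation's hardware scheme may differ in detail.

```python
import numpy as np

def msb_truncated_sad(block_a, block_b, keep_bits=4):
    """Hedged sketch of an MSB-truncation SAD: because most absolute
    differences are small, their most significant bits are usually zero, so
    each AD is clipped (saturated) to its low 'keep_bits' bits, which shortens
    the adder datapath in a hardware implementation."""
    diff = np.abs(block_a.astype(np.int16) - block_b.astype(np.int16))
    clipped = np.minimum(diff, (1 << keep_bits) - 1)   # saturate instead of wrapping
    return int(clipped.sum())
```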
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO that requires the analysis of fluid dynamics raises a special challenge because of its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their applications across various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, alongside analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually involves high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing work has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or at the data level to fully utilise the capabilities of the different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the results show that the proposed architecture and the developed algorithms perform successfully and efficiently in dealing with a design optimisation involving over 200 design variables.
LIMPIC: a computational method for the separation of protein MALDI-TOF-MS signals from noise.
Mantini, Dante; Petrucci, Francesca; Pieragostino, Damiana; Del Boccio, Piero; Di Nicola, Marta; Di Ilio, Carmine; Federici, Giorgio; Sacchetta, Paolo; Comani, Silvia; Urbani, Andrea
2007-03-26
Mass spectrometry protein profiling is a promising tool for biomarker discovery in clinical proteomics. However, the development of a reliable approach for the separation of protein signals from noise is required. In this paper, LIMPIC, a computational method for the detection of protein peaks from linear-mode MALDI-TOF data, is proposed. LIMPIC is based on novel techniques for background noise reduction and baseline removal. Peak detection is performed considering the presence of a non-homogeneous noise level in the mass spectrum. A comparison of the peaks collected from multiple spectra is used to classify them on the basis of a detection-rate parameter, and hence to separate the protein signals from other disturbances. LIMPIC preprocessing proves to be superior to other classical preprocessing techniques, allowing a reliable decomposition of the background noise and the baseline drift from the MALDI-TOF mass spectra. It provides a lower coefficient of variation associated with the peak intensity, improving the reliability of the information that can be extracted from single spectra. Our results show that LIMPIC peak-picking is effective even in low protein concentration regimes. The analytical comparison with commercial and freeware peak-picking algorithms demonstrates its superior performance in terms of sensitivity and specificity, both on in-vitro purified protein samples and on human plasma samples. The quantitative information on peak intensity extracted with LIMPIC could be used for the recognition of significant protein profiles by means of advanced statistical tools: LIMPIC might be valuable in the perspective of biomarker discovery.
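The sketch below illustrates the general workflow the abstract describes (baseline removal followed by peak picking against a locally estimated, non-homogeneous noise level) using generic SciPy building blocks; the rolling low-percentile baseline, the moving-median noise estimate, the window size and the threshold factor are placeholders, not LIMPIC's actual estimators.

```python
import numpy as np
from scipy.ndimage import percentile_filter
from scipy.signal import find_peaks

def pick_peaks(mz, intensity, window=201, k=3.0):
    """Hedged sketch of baseline removal plus noise-adaptive peak picking for a
    linear-mode MALDI-TOF spectrum. 'window' and 'k' are illustrative."""
    intensity = np.asarray(intensity, dtype=float)
    baseline = percentile_filter(intensity, percentile=10, size=window)   # crude baseline
    signal = intensity - baseline
    # Locally varying (non-homogeneous) noise level from a moving median of |signal|.
    half = window // 2
    noise = np.array([np.median(np.abs(signal[max(0, i - half):i + half + 1]))
                      for i in range(signal.size)])
    idx, _ = find_peaks(signal, height=k * noise)   # keep peaks above k times the local noise
    return np.asarray(mz)[idx], signal[idx]
```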
NASA Astrophysics Data System (ADS)
Vilotte, J.-P.; Atkinson, M.; Michelini, A.; Igel, H.; van Eck, T.
2012-04-01
Increasingly dense seismic and geodetic networks are continuously transmitting a growing wealth of data from around the world. The multiple uses of these data led the seismological community to pioneer globally distributed open-access data infrastructures, standard services and formats, e.g., the Federation of Digital Seismic Networks (FDSN) and the European Integrated Data Archives (EIDA). Our ability to acquire observational data outpaces our ability to manage, analyze and model them. Research in seismology is today facing a fundamental paradigm shift. Enabling advanced data-intensive analysis and modeling applications challenges conventional storage, computation and communication models and requires a new holistic approach. It is instrumental to exploit the cornucopia of data, and to guarantee optimal operation and design of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of seismological data-intensive applications in data analysis and modeling. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of those applications, integrating the data infrastructures with Grid, Cloud and HPC infrastructures. It will allow prototyping solutions for new use cases as they emerge within the European Plate Observing System (EPOS), the ESFRI initiative of the solid Earth community. Computational seismology and information management increasingly revolve around massive amounts of data that stem from: (1) the flood of data from the observational systems; (2) the flood of data from large-scale simulations and inversions; (3) the ability to economically store petabytes of data online; (4) the evolving Internet and data-aware computing capabilities. As data-intensive applications are rapidly increasing in scale and complexity, they require additional service-oriented architectures offering virtualization-based flexibility for complex and re-usable workflows. Scientific information management poses computer science challenges: acquisition, organization, query and visualization tasks scale almost linearly with the data volumes. The commonly used FTP-GREP metaphor allows gigabyte-sized datasets to be scanned today, but it will not work for scanning terabyte-sized continuous waveform datasets. New data analysis and modeling methods, exploiting the signal coherence within dense network arrays, are nonlinear. Pair algorithms on N points scale as N². Waveform inversion and stochastic simulations raise computing and data-handling challenges. These applications are infeasible for tera-scale datasets without new parallel algorithms that use near-linear processing, storage and bandwidth, and that can exploit new computing paradigms enabled by the intersection of several technologies (HPC, parallel scalable database crawlers, data-aware HPC). These issues will be discussed on the basis of a number of core pilot data-intensive applications and use cases retained in VERCE. These core applications are related to: (1) data processing and data analysis methods based on correlation techniques; (2) CPU-intensive applications such as large-scale simulation of synthetic waveforms in complex Earth systems, and full waveform inversion and tomography. We shall analyze their workflows and data flows, and their requirements for a new service-oriented architecture and a data-aware platform with services and tools.
Finally, we will outline the importance of a new collaborative environment between seismology and computer science, together with the need for the emergence and the recognition of 'research technologists' mastering the evolving data-aware technologies and the data-intensive research goals in seismology.
Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments
NASA Technical Reports Server (NTRS)
Carmen, Christina; Feikema, Douglas A.
2003-01-01
The present paper describes the results obtained from a one-dimensional, time-dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results for moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable as traditionally believed to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. The flame conditions studied have fuel equivalence ratios in the range φ = 0.6 to 0.9 at standard temperature and pressure.
Exploring the Earth Using Deep Learning Techniques
NASA Astrophysics Data System (ADS)
Larraondo, P. R.; Evans, B. J. K.; Antony, J.
2016-12-01
Research using deep neural networks has matured significantly in recent times, and there is now a surge of interest in applying such methods to Earth systems science and the geosciences. When combined with Big Data, we believe there are opportunities for significantly transforming a number of areas relevant to researchers and policy makers. In particular, by using a combination of data from a range of satellite Earth observations as well as computer simulations from climate models and reanalysis, we can gain new insights into the information that is locked within the data. Global geospatial datasets describe a wide range of physical and chemical parameters, which are mostly available on regular grids covering large spatial and temporal extents. This makes them perfect candidates for applying deep learning methods. So far, these techniques have been successfully applied to image analysis through the use of convolutional neural networks. However, this is only one field of interest, and there is potential for many more use cases to be explored. Deep learning algorithms require fast access to large amounts of data in the form of tensors and make intensive use of compute resources in order to train their models. The Australian National Computational Infrastructure (NCI) has recently augmented its Raijin 1.2 PFlop supercomputer with hardware accelerators. Together with NCI's 3000-core high-performance OpenStack cloud, these computational systems have direct access to NCI's 10+ PBytes of datasets and associated Big Data software technologies (see http://geonetwork.nci.org.au/ and http://nci.org.au/systems-services/national-facility/nerdip/). Effective use of these computing infrastructures requires that both the data and the software be organised in a way that readily supports the deep learning software ecosystem. Deep learning software, such as the open-source TensorFlow library, has allowed us to demonstrate the possibility of generating geospatial models by combining information from our different data sources. This opens the door to an exciting new way of generating products and extracting features that have previously been labour intensive. In this paper, we will explore some of these geospatial use cases and share some of the lessons learned from this experience.
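As a concrete illustration of applying a convolutional network to gridded geospatial fields, the hedged sketch below defines a minimal fully convolutional model in TensorFlow/Keras; the patch size, channel count and per-pixel regression target are hypothetical choices, not a model described in the abstract.

```python
import tensorflow as tf

# Hedged sketch: a minimal fully convolutional model for gridded geospatial
# fields. 64x64 patches with 4 input channels (e.g. several reanalysis
# variables) and a single per-pixel regression output are assumed here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(64, 64, 4)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, padding="same"),   # one output value per grid cell
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=10, batch_size=32)  # x_train: (N, 64, 64, 4)
```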
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
Fermilab computing at the Intensity Frontier
Group, Craig; Fuess, S.; Gutsche, O.; ...
2015-12-23
The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. These experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently, and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark approach. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of the RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Speedups of two orders of magnitude over a serial CPU implementation are shown, and the speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and it can be used as a final product or for verifying surrogate and analytical collision probability methods.
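A hedged, CPU-side NumPy sketch of the basic estimator is shown below: states of both objects are sampled from assumed Gaussian uncertainties, propagated to closest approach by a user-supplied function, and the collision probability is taken as the fraction of samples whose miss distance falls below the combined radius. The Gaussian assumption, the state layout and the placeholder propagator are illustrative; the paper's GPU implementation and quartic close-approach interpolation are not reproduced.

```python
import numpy as np

def mc_collision_probability(mu1, cov1, mu2, cov2, combined_radius,
                             propagate=lambda s: s, n=100_000,
                             rng=np.random.default_rng(0)):
    """Hedged sketch of the Monte Carlo collision-probability estimator.
    Assumes 6-element states [x, y, z, vx, vy, vz]; 'propagate' is a
    placeholder for the dynamics that maps a sampled state to its state at
    closest approach (identity by default, for illustration only)."""
    s1 = rng.multivariate_normal(mu1, cov1, size=n)
    s2 = rng.multivariate_normal(mu2, cov2, size=n)
    p1 = np.apply_along_axis(propagate, 1, s1)[:, :3]   # positions at closest approach
    p2 = np.apply_along_axis(propagate, 1, s2)[:, :3]
    miss = np.linalg.norm(p1 - p2, axis=1)
    return float(np.mean(miss < combined_radius))        # fraction of colliding samples
```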
Iterative reconstruction of volumetric particle distribution
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard
2013-02-01
For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART), followed by cross-correlation of sub-volumes to compute instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, similarly to MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, and these may differ for different positions in the volume and for each camera. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy to Tomo-PIV. Finally, the method is validated with experimental data.
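For context, the hedged sketch below shows the standard MART update that Tomo-PIV uses and that the hybrid method is compared against; the dense weighting matrix, the relaxation parameter mu and the pixel-by-pixel loop are simplifications for illustration (practical implementations use sparse weights and camera-wise updates).

```python
import numpy as np

def mart(W, images, n_voxels, n_iter=5, mu=1.0):
    """Hedged sketch of the multiplicative algebraic reconstruction technique
    (MART): W is an (n_pixels x n_voxels) weighting matrix mapping voxel
    intensities to pixel intensities, 'images' the stacked recorded pixel
    values. Returns the reconstructed voxel intensity field."""
    E = np.ones(n_voxels)                          # start from a uniform intensity field
    for _ in range(n_iter):
        for i in range(len(images)):               # loop over pixels
            w = W[i]                               # weights of voxels seen by pixel i
            proj = float(w @ E)                    # projection of the current field
            if proj > 0 and images[i] > 0:
                # Multiplicative update: scale voxels by the measured/projected ratio.
                E *= (images[i] / proj) ** (mu * w)
    return E
```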
Blob-enhanced reconstruction technique
NASA Astrophysics Data System (ADS)
Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso
2016-09-01
A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method; the distribution is then rebuilt as a sum of Gaussian blobs, based on the location, intensity and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of the particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes have been tested, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations). The results confirm the enhancement in velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor further improves the convergence of the reconstruction algorithm, with the improvement being more considerable when the blobs are substituted more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, making it possible both to achieve remarkable improvements in the flow field measurements and to benefit from the reduction in computational time due to the MR approach. Finally, BERT is also tested on experimental data, yielding an increase in the signal-to-noise ratio of the reconstructed flow field and a higher correlation factor in the velocity measurements with respect to the reconstruction in which the particles are not replaced.
Power and Performance Trade-offs for Space Time Adaptive Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino
Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, an Intel Haswell Core i7-4770TE and an NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirements. The use of shared memory has a significant impact on the power requirements of the GPU. A balance between the use of shared memory and main memory accesses leads to improved performance in a typical STAP application.
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform, based on a Fourier series method, is developed to meet the need of solving computationally intensive problems involving the response of oscillatory water levels to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementing MPI techniques on a distributed-memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI but who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
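Although the package itself is written in Fortran, the parallel pattern it exploits, independent inversion tasks split statically across MPI ranks, can be sketched in a few lines of Python with mpi4py; the round-robin task split, the placeholder inversion routine and the time grid below are illustrative assumptions only.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical list of times at which the Laplace-transform inversion is needed.
times = np.linspace(0.1, 100.0, 380)
my_times = times[rank::size]                      # round-robin assignment of tasks

def invert_laplace(t):
    # Placeholder for the Fourier-series inversion of a specific transform,
    # e.g. F(s) = 1/(s+1), whose inverse is exp(-t).
    return np.exp(-t)

my_results = np.array([invert_laplace(t) for t in my_times])
all_results = comm.gather((rank, my_results), root=0)  # collect partial results
if rank == 0:
    print("collected partial results from", len(all_results), "ranks")
```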
Poitevin, Frédéric; Orland, Henri; Doniach, Sebastian; Koehl, Patrice; Delarue, Marc
2011-07-01
Small Angle X-ray Scattering (SAXS) techniques are becoming more and more useful for structural biologists and biochemists, thanks to better access to dedicated synchrotron beamlines, better detectors and the relative ease of sample preparation. The ability to compute the theoretical SAXS profile of a given structural model, and to compare this profile with the measured scattering intensity, yields crucial structural information about the macromolecule under study and/or its complexes in solution. An important contribution to the profile, besides the macromolecule itself and its solvent-excluded volume, is the excess density due to the hydration layer. AquaSAXS takes advantage of recently developed methods, such as AquaSol, that give the equilibrium solvent density map around macromolecules, to compute an accurate SAXS/WAXS profile of a given structure and to compare it to the experimental one. Here, we describe the interface architecture and capabilities of the AquaSAXS web server (http://lorentz.dynstr.pasteur.fr/aquasaxs.php).
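For readers unfamiliar with how a theoretical SAXS profile is obtained from coordinates, the hedged sketch below evaluates the basic in-vacuo Debye formula; it deliberately omits the solvent-excluded-volume and hydration-layer terms that AquaSAXS adds, and assumes q-independent per-atom form factors for simplicity.

```python
import numpy as np

def debye_saxs_profile(coords, form_factors, q_values):
    """Hedged sketch: in-vacuo SAXS intensity via the Debye formula,
    I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij), for (N, 3) coordinates and
    per-atom form factors (assumed q-independent here for illustration)."""
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ff = np.outer(form_factors, form_factors)
    profile = []
    for q in q_values:
        qr = q * r
        # sin(x)/x with the x -> 0 limit handled explicitly (diagonal terms).
        sinc = np.where(qr > 1e-12, np.sin(qr) / np.maximum(qr, 1e-12), 1.0)
        profile.append(float((ff * sinc).sum()))
    return np.array(profile)
```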
Force fields and scoring functions for carbohydrate simulation.
Xiong, Xiuming; Chen, Zhaoqiang; Cossins, Benjamin P; Xu, Zhijian; Shao, Qiang; Ding, Kai; Zhu, Weiliang; Shi, Jiye
2015-01-12
Carbohydrate dynamics plays a vital role in many biological processes, but we are not currently able to probe it with experimental approaches. The highly flexible nature of carbohydrate structures differs in many respects from other biomolecules, posing significant challenges for studies employing computational simulation. Over the past decades, computational study of carbohydrates has focused on the development of structure prediction methods, force field optimization, molecular dynamics simulation, and scoring functions for carbohydrate-protein interactions. Advances in carbohydrate force fields and scoring functions can be largely attributed to enhanced computational algorithms, the application of quantum mechanics, and the increasing number of experimental structures determined by X-ray and NMR techniques. The conformational analysis of carbohydrates is challenging and has been studied intensively, particularly in elucidating the anomeric, exo-anomeric, and gauche effects. Here, we review the issues associated with carbohydrate force fields and scoring functions, which will have broad application in the field of carbohydrate-based drug design. Copyright © 2014 Elsevier Ltd. All rights reserved.
ASTEC and MODEL: Controls software development at Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.
1993-01-01
The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at the Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The ASTEC software has been under development for the last three years and is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN. An upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 1990s and how it relates to ASTEC.
NASA Astrophysics Data System (ADS)
Tiberi, Lara; Costa, Giovanni; Jamšek Rupnik, Petra; Cecić, Ina; Suhadolc, Peter
2018-05-01
The earthquake (Mw 6 from the SHEEC, defined by the MDPs) that occurred in the central part of Slovenia on 14 April 1895 affected a broad region, causing deaths, injuries, and destruction. This event has been much studied but not fully explained; in particular, its causative source model is still debated. The aim of this work is to contribute to the identification of the seismogenic source of this destructive event by calculating peak ground velocity values through the use of different ground motion prediction equations (GMPEs) and by computing a series of ground motion scenarios based on the result of an inversion proposed by Jukić in 2009 and on various fault models in the surroundings of Ljubljana: the Vič, Želimlje, Borovnica, Vodice, Ortnek, Mišjedolski, and Dobrepolje faults. The synthetic seismograms at the basis of our computations are calculated using the multi-modal summation technique and a kinematic approach for extended sources, up to a maximum frequency of 1 Hz. The qualitative and quantitative comparison of these simulations with the macroseismic intensity database allows us to discriminate between the various sources and configurations. The quantitative validation of the seismic source is done using ad hoc ground motion to intensity conversion equations (GMICEs), expressly calculated for this study. This study allows us to identify the most probable causative source model of this event, contributing to the improvement of the seismotectonic knowledge of this region. The candidate fault with the lowest values of the average differences between observed and calculated intensities and of the chi-squared statistic is the Ortnek fault, a strike-slip fault with a northward rupture.
Intensity-based 2D-3D registration for lead localization in robot-guided deep brain stimulation
NASA Astrophysics Data System (ADS)
Hunsche, Stefan; Sauner, Dieter; El Majdoub, Faycal; Neudorfer, Clemens; Poggenborg, Jörg; Goßmann, Axel; Maarouf, Mohammad
2017-03-01
Intraoperative assessment of lead localization has become a standard procedure during deep brain stimulation surgery in many centers, allowing immediate verification of targeting accuracy and, if necessary, adjustment of the trajectory. The most suitable imaging modality for determining lead positioning, however, remains a matter of debate. Current approaches entail the use of computed tomography and magnetic resonance imaging. In the present study, we adopted the technique of intensity-based 2D-3D registration that is commonly employed in stereotactic radiotherapy and spinal surgery. For this purpose, intraoperatively acquired 2D x-ray images were fused with preoperative 3D computed tomography (CT) data to verify lead placement during stereotactic robot-assisted surgery. The accuracy of lead localization determined from 2D-3D registration was compared to conventional 3D-3D registration in a subsequent patient study. The mean Euclidean distance between lead coordinates estimated from intensity-based 2D-3D registration and from flat-panel detector CT 3D-3D registration was 0.7 mm ± 0.2 mm, with maximum distances of up to 1.2 mm. To further investigate 2D-3D registration, a simulation study was conducted in which two observers visually assessed artificially generated 2D-3D registration errors. 95% of the deviation simulations that were visually assessed as sufficient had a registration error below 0.7 mm. In conclusion, intensity-based 2D-3D registration showed high accuracy and reliability during robot-guided stereotactic neurosurgery and holds great potential as a low-dose, cost-effective means of intraoperative lead localization.
Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Oh, Soram; Gu, Yu; Lee, Seung-Pyo; Chang, Seok Woo; Lee, Woocheol; Baek, Seung-Ho; Zhu, Qiang; Kum, Kee-Yeon
2015-08-01
Micro-computed tomography (MCT) with alternative image reformatting techniques shows complex and detailed root canal anatomy. This study compared two-dimensional (2D) and 3D MCT image reformatting with standard tooth clearing for studying mandibular first molar mesial root canal morphology. Extracted human mandibular first molar mesial roots (n=31) were scanned by MCT (Skyscan 1172). 2D thin-slab minimum intensity projection (TS-MinIP) and 3D volume rendered images were constructed. The same teeth were then processed by clearing and staining. For each root, images obtained from clearing, 2D, 3D and combined 2D and 3D techniques were examined independently by four endodontists and categorized according to Vertucci's classification. Fine anatomical structures such as accessory canals, intercanal communications and loops were also identified. Agreement among the four techniques for Vertucci's classification was 45.2% (14/31). The most frequent were Vertucci's type IV and then type II, although many had complex configurations that were non-classifiable. Generally, complex canal systems were more clearly visible in MCT images than with standard clearing and staining. Fine anatomical structures such as intercanal communications, accessory canals and loops were mostly detected with a combination of 2D TS-MinIP and 3D volume-rendering MCT images. Canal configurations and fine anatomic structures were more clearly observed in the combined 2D and 3D MCT images than the clearing technique. The frequency of non-classifiable configurations demonstrated the complexity of mandibular first molar mesial root canal anatomy.
Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo
NASA Astrophysics Data System (ADS)
Khosravi, Ebrahim
1998-12-01
This dissertation addresses the fundamental problem of isolating the real roots of nonlinear systems of equations by the Monte Carlo method published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small set of variables, which makes it unfeasible for large systems of equations. A computational technique was also needed for investigating a methodology to prevent the algorithm from converging to the same root along different paths of computation. This research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message passing model was used for this parallel processing, and it is presented and implemented on an Intel i860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in the Super-Kamiokande detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
A mixed-order nonlinear diffusion compressed sensing MR image reconstruction.
Joy, Ajin; Paul, Joseph Suresh
2018-03-07
Avoid the formation of staircase artifacts in nonlinear diffusion-based MR image reconstruction without compromising computational speed. Whereas second-order diffusion encourages the evolution of pixel neighborhoods toward uniform intensities, fourth-order diffusion considers a smooth region to be not necessarily a region of uniform intensity but also a planar region. Therefore, a controlled application of the fourth-order diffusivity function is used to encourage second-order diffusion to reconstruct the smooth regions of the image as a plane rather than a group of blocks, while not being strong enough to introduce the undesirable speckle effect. The proposed method is compared with second- and fourth-order nonlinear diffusion reconstruction, total variation (TV), total generalized variation, and higher-degree TV using in vivo data sets at different undersampling levels, with application to dictionary learning-based reconstruction. It is observed that the proposed technique preserves sharp boundaries in the image while preventing the formation of staircase artifacts in regions of smoothly varying pixel intensities. It also shows reduced error measures compared with second-order nonlinear diffusion reconstruction or TV, and converges faster than TV-based methods. Because nonlinear diffusion is known to be an effective alternative to TV for edge-preserving reconstruction, the crucial aspect of staircase artifact removal is addressed. Reconstruction is found to be stable over the experimentally determined range of the fourth-order regularization parameter, and therefore does not introduce a parameter search. Hence, the computational simplicity of second-order diffusion is retained. © 2018 International Society for Magnetic Resonance in Medicine.
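To ground the terminology, the hedged sketch below implements one explicit step of second-order (Perona-Malik type) nonlinear diffusion on a 2D image, the basic operation the abstract contrasts with fourth-order diffusion; the paper's mixed-order scheme and its data-consistency (reconstruction) step are not reproduced, and the diffusivity function and parameters are illustrative.

```python
import numpy as np

def nonlinear_diffusion_step(img, dt=0.15, kappa=0.05):
    """Hedged sketch: one explicit second-order nonlinear diffusion step.
    Edge-stopping diffusivity g(d) = exp(-(d/kappa)^2); borders are replicated
    so that no flux crosses the image boundary."""
    p = np.pad(img, 1, mode="edge")
    dN = p[:-2, 1:-1] - img                       # differences toward the 4 neighbors
    dS = p[2:, 1:-1] - img
    dW = p[1:-1, :-2] - img
    dE = p[1:-1, 2:] - img
    g = lambda d: np.exp(-(d / kappa) ** 2)       # small diffusivity across strong edges
    return img + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
```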
Zhang, You; Ma, Jianhua; Iyengar, Puneeth; Zhong, Yuncheng; Wang, Jing
2017-01-01
Purpose: Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. Materials and Methods: ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique (‘ART-dTV’). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the ‘ground-truth’ volume, normalized by the square root of the sum of squared ‘ground-truth’ voxel intensities. In addition to the XCAT and porcine studies, a physical lung phantom measurement study was also conducted. Water-filled balloons with various shapes/volumes and concentrations of iodinated contrasts were put inside the phantom to simulate both deformations and non-deformation-induced intensity changes for ADRIC reconstruction. The ADRIC-solved deformations and intensity changes from limited-view projections were compared to those of the ‘gold-standard’ volumes reconstructed from fully-sampled projections. Results: For the XCAT simulation study, the relative errors of the reconstructed CT volume by the 2D-3D deformation technique, the ART-dTV technique and the ADRIC technique were 14.64%, 19.21% and 11.90% respectively, by using 20 projections for reconstruction. Using 60 projections for reconstruction reduced the relative errors to 12.33%, 11.04% and 7.92% for the three techniques, respectively. For the porcine study, the corresponding results were 13.61%, 8.78%, 6.80% by using 20 projections; and 12.14%, 6.91% and 5.29% by using 60 projections. The ADRIC technique also demonstrated robustness to varying projection exposure levels. For the physical phantom study, the average DICE coefficient between the initial prior balloon volume and the new ‘gold-standard’ balloon volumes was 0.460. ADRIC reconstruction by 21 projections increased the average DICE coefficient to 0.954. Conclusion: The ADRIC technique outperformed both the 2D-3D deformation technique and the ART-dTV technique in reconstruction accuracy.
The alternately solved deformation field and non-deformation voxel intensity corrections can benefit multiple clinical applications, including tumor tracking, radiotherapy dose accumulation and treatment outcome analysis. PMID:28380247
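For reference, the relative error metric defined in this abstract reduces to a few lines of code. The following minimal numpy sketch is illustrative only (array names and sizes are not from the paper):

import numpy as np

def relative_error(recon, truth):
    # Root-sum-square voxel difference, normalized by the root-sum-square
    # of the 'ground-truth' intensities, as defined in the abstract.
    diff = np.sqrt(np.sum((recon - truth) ** 2))
    norm = np.sqrt(np.sum(truth ** 2))
    return diff / norm

# Toy example with two random 64x64x64 "CT volumes"
truth = np.random.rand(64, 64, 64)
recon = truth + 0.05 * np.random.randn(64, 64, 64)
print(f"relative error = {100 * relative_error(recon, truth):.2f}%")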
Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers
NASA Technical Reports Server (NTRS)
Guruswamy, Guru; VanDalsem, William (Technical Monitor)
1994-01-01
Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and the Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience transonic buffet-associated structural oscillations. Both aircraft may experience a dip in the flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high-fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures are needed. Computations using these high-fidelity equations require large computational resources in both memory and speed. Conventional supercomputers have reached their limits in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper addresses the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers, and the special techniques needed to take advantage of the architecture of new parallel computers. Results are illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.
Novel wavelength diversity technique for high-speed atmospheric turbulence compensation
NASA Astrophysics Data System (ADS)
Arrasmith, William W.; Sullivan, Sean F.
2010-04-01
The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. Parallel processing capabilities are finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.
Fiber Optic Communication System For Medical Images
NASA Astrophysics Data System (ADS)
Arenson, Ronald L.; Morton, Dan E.; London, Jack W.
1982-01-01
This paper discusses a fiber optic communication system linking ultrasound devices, computerized tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for the transfer of high-resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques will be described. The switching methodology and communication protocols will also be discussed.
Camboulives, A-R; Velluet, M-T; Poulenard, S; Saint-Antonin, L; Michau, V
2018-02-01
The performance of an optical communication link between the ground and a geostationary satellite can be impaired by scintillation, beam wandering, and beam spreading caused by propagation through atmospheric turbulence. These effects on the link performance can be mitigated by tracking and by error correction codes coupled with interleaving. Precise numerical tools capable of describing the irradiance fluctuations statistically and of creating irradiance time series are needed to characterize the benefits of these techniques and to optimize them. Wave-optics propagation methods have proven capable of modeling the effects of atmospheric turbulence on a beam, but they are computationally intensive. We present an analytical-numerical model which provides good results for the probability density functions of the irradiance fluctuations, as well as time series, with a substantial saving of time and computational resources.
Knowing when to give up: early-rejection stratagems in ligand docking
NASA Astrophysics Data System (ADS)
Skone, Gwyn; Voiculescu, Irina; Cameron, Stephen
2009-10-01
Virtual screening is an important resource in the drug discovery community, of which protein-ligand docking is a significant part. Much software has been developed for this purpose, largely by biochemists and those in related disciplines, who pursue ever more accurate representations of molecular interactions. The resulting tools, however, are very processor-intensive. This paper describes some initial results from a project to review computational chemistry techniques for docking from a non-chemistry standpoint. An abstract blueprint for protein-ligand docking using empirical scoring functions is suggested, and this is used to discuss potential improvements. By introducing computer science tactics such as lazy function evaluation, dramatic increases to throughput can and have been realized using a real-world docking program. Naturally, they can be extended to any system that approximately corresponds to the architecture outlined.
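As a loose illustration of the lazy-evaluation idea sketched above (not the authors' actual code), a pose can be abandoned as soon as a partial score already exceeds the best score seen so far, so the remaining, more expensive scoring terms are never evaluated. All names and terms below are hypothetical:

def score_pose_lazy(pose, terms, best_so_far):
    # Accumulate empirical scoring terms cheapest-first and bail out early once
    # the partial score cannot beat the current best (lower is better). Each term
    # is assumed to return a non-negative penalty, which is what makes early
    # rejection safe.
    total = 0.0
    for term in terms:
        total += term(pose)
        if total >= best_so_far:      # early rejection: give up on this pose
            return None
    return total

# Hypothetical cheap and expensive terms, for illustration only
clash_penalty = lambda pose: sum(1.0 for d in pose["contacts"] if d < 2.0)
electrostatics = lambda pose: abs(pose["charge_mismatch"])

best = float("inf")
for pose in [{"contacts": [1.5, 3.0], "charge_mismatch": 0.2},
             {"contacts": [2.5, 3.1], "charge_mismatch": 0.4}]:
    s = score_pose_lazy(pose, [clash_penalty, electrostatics], best)
    if s is not None:
        best = s
print("best score:", best)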
NASA Technical Reports Server (NTRS)
Adamovsky, G.; Sherer, T. N.; Maitland, D. J.
1989-01-01
A novel technique to compensate for unwanted intensity losses in a fiber-optic sensing system is described. The technique involves a continuous sinusoidal modulation of the light source intensity at radio frequencies and an intensity sensor placed in an unbalanced interferometer. The system shows high sensitivity and stability.
Practical adaptive quantum tomography
NASA Astrophysics Data System (ADS)
Granade, Christopher; Ferrie, Christopher; Flammia, Steven T.
2017-11-01
We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. We numerically demonstrate these advantages in several scenarios including mixed states, higher-dimensional systems, and restricted measurements. Complete data and source code for this work are available online (http://cgranade.com) [1], and can be previewed at https://goo.gl/koiWxR.
NASA Technical Reports Server (NTRS)
Kim, Jonnathan H.
1995-01-01
Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate the labor- and time-intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).
Resolution enhancement using simultaneous couple illumination
NASA Astrophysics Data System (ADS)
Hussain, Anwar; Martínez Fuentes, José Luis
2016-10-01
A super-resolution technique based on structured illumination created by a liquid-crystal-on-silicon spatial light modulator (LCOS-SLM) is presented. Single and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated using numerical simulations. The resulting intensity images are recorded at a charge-coupled device (CCD) and stored in computer memory for further processing. One-dimensional enhancement can be performed with only 15 images; complete two-dimensional improvement requires 153 different images. The resolution of the optical system is extended threefold compared to the band-limited system.
An analog retina model for detecting dim moving objects against a bright moving background
NASA Technical Reports Server (NTRS)
Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.
1991-01-01
We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.
Study of a programmable high speed processor for use on-board satellites
NASA Astrophysics Data System (ADS)
Degavre, J. Cl.; Okkes, R.; Gaillat, G.
The availability of VLSI programmable devices will significantly enhance satellite on-board data processing capabilities. A case study is presented which indicates that computation-intensive processing applications requiring the execution of 100 megainstructions/sec are within the DC power constraints of satellites. It is noted that the current progress in semicustom design technique development and in achievable gate array densities, together with the recent announcement of improved monochip processors, is encouraging the development of an on-board programmable processor architecture able to associate the devices that will appear in communication and military markets.
A numerical study of mixing enhancement in supersonic reacting flow fields. [in scramjets
NASA Technical Reports Server (NTRS)
Drummond, J. Philip; Mukunda, H. S.
1988-01-01
NASA Langley has intensively investigated the components of ramjet and scramjet systems for endoatmospheric, airbreathing hypersonic propulsion; attention is presently given to the optimization of scramjet combustor fuel-air mixing and reaction characteristics. A supersonic, spatially developing and reacting mixing layer has been found to serve as an excellent physical model for the mixing and reaction process. Attention is presently given to techniques that have been applied to the enhancement of the mixing processes and the overall combustion efficiency of the mixing layer. A fuel injector configuration has been computationally designed which significantly increases mixing and reaction rates.
Image Edge Extraction via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)
2008-01-01
A computer-based technique for detecting edges in gray-level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely to lie on an edge. The image is analyzed on a pixel-by-pixel basis by examining the gradient levels of pixels in a square window surrounding the pixel under consideration. The edge path passing through the pixel with the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign the pixel a new gray-level value related to its degree of edginess.
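As a loose illustration only (not the inventors' rule base), a gradient magnitude can be mapped through simple fuzzy membership functions and a weighted-singleton defuzzification to an "edginess" gray level; thresholds and rules below are assumptions:

import numpy as np

def edginess(gradient, low=20.0, high=80.0):
    # Two fuzzy sets ('weak edge', 'strong edge') with singleton outputs 0 and 255;
    # illustrative memberships and rules, not those of the patent.
    weak = np.clip((high - gradient) / (high - low), 0.0, 1.0)
    strong = np.clip((gradient - low) / (high - low), 0.0, 1.0)
    return (weak * 0 + strong * 255) / (weak + strong + 1e-9)  # weighted-singleton defuzzification

for g in (10, 50, 90):
    print(f"gradient {g:3d} -> edginess {edginess(g):6.1f}")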
Fast polarimetric dehazing method for visibility enhancement in HSI colour space
NASA Astrophysics Data System (ADS)
Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin
2017-09-01
Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in the hue, saturation and intensity colour space based on the polarimetric imaging technique. We implement the polarimetric dehazing method in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This approach not only preserves the capacity for detailed information restoration, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains results of equal or better quality while the implementation is much faster. The proposed method is promising for real-time image haze removal and video haze removal applications.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
Causal Structure of Brain Physiology after Brain Injury from Subarachnoid Hemorrhage.
Claassen, Jan; Rahman, Shah Atiqur; Huang, Yuxiao; Frey, Hans-Peter; Schmidt, J Michael; Albers, David; Falo, Cristina Maria; Park, Soojin; Agarwal, Sachin; Connolly, E Sander; Kleinberg, Samantha
2016-01-01
High frequency physiologic data are routinely generated for intensive care patients. While massive amounts of data make it difficult for clinicians to extract meaningful signals, these data could provide insight into the state of critically ill patients and guide interventions. We develop uniquely customized computational methods to uncover the causal structure within systemic and brain physiologic measures recorded in a neurological intensive care unit after subarachnoid hemorrhage. While the data have many missing values, poor signal-to-noise ratio, and are composed from a heterogeneous patient population, our advanced imputation and causal inference techniques enable physiologic models to be learned for individuals. Our analyses confirm that complex physiologic relationships including demand and supply of oxygen underlie brain oxygen measurements and that mechanisms for brain swelling early after injury may differ from those that develop in a delayed fashion. These inference methods will enable wider use of ICU data to understand patient physiology.
Simultaneous detection of iodine and iodide on boron doped diamond electrodes.
Fierro, Stéphane; Comninellis, Christos; Einaga, Yasuaki
2013-01-15
Individual and simultaneous electrochemical detection of iodide and iodine has been performed via cyclic voltammetry on boron doped diamond (BDD) electrodes in a 1M NaClO(4) (pH 8) solution, representative of typical environmental water conditions. It is feasible to compute accurate calibration curves for both compounds from cyclic voltammetry measurements by determining the peak current intensities as a function of concentration. Lower detection limits of about 20 μM for iodide and 10 μM for iodine were obtained. Based on the comparison between the peak current intensities reported during the oxidation of KI, it is probable that iodide (I(-)) is first oxidized in a single step to yield iodine (I(2)). The latter is further oxidized to obtain IO(3)(-). This technique, however, did not allow for a reasonably accurate detection of iodate (IO(3)(-)) on a BDD electrode. Copyright © 2012 Elsevier B.V. All rights reserved.
Lin, Yu-Hsiu; Hu, Yu-Chen
2018-04-27
The emergence of smart Internet of Things (IoT) devices has strongly favored the realization of smart homes in the downstream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers in modifying the energy consumption of their domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. For residential customers implementing DR, however, maintaining a balance between energy consumption cost and comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can further be combined with edge computing. In contrast with cloud computing, edge computing, an emerging approach that optimizes cloud computing technologies by driving computing capabilities to the IoT edge of the Internet, addresses bandwidth-intensive content and latency-sensitive applications between sensors and central data centers through data analytics at or near the source of the data. A non-intrusive load-monitoring technique proposed previously is used to automatically determine the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence method, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experiments reported in this paper show that the proposed method can re-shape the loads of home appliances in response to DR signals; moreover, a reduction of 13.97% in peak power consumption is achieved.
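A minimal sketch of the general idea follows, assuming hourly real-time prices and a single shiftable appliance; the tariff, comfort penalty, and constraint handling are illustrative stand-ins, not the paper's formulation:

import numpy as np
rng = np.random.default_rng(0)

price = np.array([0.08] * 7 + [0.20] * 12 + [0.08] * 5)   # illustrative 24-h tariff
duration, preferred_start = 3, 18                          # appliance runs 3 h; user prefers 18:00
comfort_weight = 0.05

def cost(start):
    start = int(np.clip(round(start), 0, 24 - duration))   # feasibility constraint on start hour
    energy_cost = price[start:start + duration].sum()
    comfort_penalty = comfort_weight * abs(start - preferred_start)
    return energy_cost + comfort_penalty

# Bare-bones PSO over the one-dimensional start-time decision variable
n, iters = 20, 50
x = rng.uniform(0, 21, n); v = np.zeros(n)
pbest, pbest_f = x.copy(), np.array([cost(xi) for xi in x])
gbest = pbest[pbest_f.argmin()]
for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 21)
    f = np.array([cost(xi) for xi in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print("scheduled start hour:", int(round(gbest)), "cost:", round(cost(gbest), 3))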
NASA Astrophysics Data System (ADS)
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
Qin, Yuan; Michalowski, Andreas; Weber, Rudolf; Yang, Sen; Graf, Thomas; Ni, Xiaowu
2012-11-19
Ray-tracing is the technique commonly used to calculate the absorption of light in laser deep-penetration welding or drilling. Since new lasers with high brilliance enable small capillaries with high aspect ratios, diffraction might become important. To examine the applicability of the ray-tracing method, we studied the total absorptance and the absorbed intensity of polarized beams in several capillary geometries. The ray-tracing results are compared with more sophisticated simulations based on physical optics. The comparison shows that simple ray-tracing is applicable for calculating the total absorptance in triangular grooves and in conical capillaries, but not in rectangular grooves. For calculating the distribution of the absorbed intensity, ray-tracing fails because it neglects interference, diffraction, and the effects of beam propagation in capillaries with sub-wavelength diameter. If diffraction is avoided, e.g. with beams smaller than the entrance pupil of the capillary or with very shallow capillaries, the distribution of the absorbed intensity calculated by ray-tracing corresponds to the local average of the interference pattern found by physical optics.
NASA Astrophysics Data System (ADS)
Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi
2014-12-01
Based on a phase retrieval algorithm and QR codes, a new optical encryption technique that only needs to record one intensity distribution is proposed. In this encryption process, a QR code is first generated from the information to be encrypted; the generated QR code is then placed in the input plane of a 4-f system to undergo double random phase encryption. Because only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using a phase retrieval algorithm. A priori information about the QR code is used as a support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, and the decryption process uses a digital method. In addition, the security of the proposed optical encryption technique is analyzed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks and suitable for harsh transmission conditions.
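The following is a schematic Gerchberg-Saxton-style loop with a support constraint, standing in for the kind of phase retrieval described; it assumes a single Fourier-plane intensity measurement and a known input-plane support mask, and is not the authors' exact 4-f/double-random-phase formulation:

import numpy as np

def retrieve_object(measured_intensity, support, iters=200):
    # Iterate between the measurement plane (replace the modulus with the measured one)
    # and the input plane (enforce the support constraint). Iterations of this kind
    # can stagnate; this only illustrates the loop structure.
    amplitude = np.sqrt(measured_intensity)
    field = np.exp(1j * 2 * np.pi * np.random.rand(*support.shape))   # random start
    for _ in range(iters):
        g = np.fft.ifft2(amplitude * np.exp(1j * np.angle(np.fft.fft2(field))))
        field = np.where(support, np.abs(g), 0.0)   # keep a real, non-negative object on the support
    return field

# Toy example: a binary "QR-like" block pattern and its Fourier-plane intensity
obj = np.zeros((64, 64)); obj[20:44:4, 20:44:4] = 1.0
support = np.zeros_like(obj, dtype=bool); support[20:44, 20:44] = True
intensity = np.abs(np.fft.fft2(obj)) ** 2
est = retrieve_object(intensity, support)
resid = np.linalg.norm(np.abs(np.fft.fft2(est)) - np.sqrt(intensity)) / np.linalg.norm(np.sqrt(intensity))
print("Fourier-modulus residual:", resid)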
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of numerically intensive systems over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
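A skeletal simulated-annealing task mapper in the spirit described is sketched below; the toy cost function (load imbalance plus cut communication volume) and the chain-shaped task graph are illustrative assumptions, not the paper's cost model:

import math, random
random.seed(1)

tasks = list(range(12))                       # vertices of the task graph
comm = {(i, i + 1): 1.0 for i in range(11)}   # chain-like communication edges (toy)
work = {i: random.uniform(1, 3) for i in tasks}
n_proc = 4

def cost(assign):
    # Toy cost: processor load imbalance plus inter-processor communication volume
    loads = [0.0] * n_proc
    for t, p in assign.items():
        loads[p] += work[t]
    comm_cost = sum(w for (a, b), w in comm.items() if assign[a] != assign[b])
    return max(loads) + 0.5 * comm_cost

assign = {t: random.randrange(n_proc) for t in tasks}
best, best_cost, T = dict(assign), cost(assign), 5.0
while T > 0.01:
    t = random.choice(tasks)
    trial = dict(assign); trial[t] = random.randrange(n_proc)
    delta = cost(trial) - cost(assign)
    if delta < 0 or random.random() < math.exp(-delta / T):   # Metropolis acceptance rule
        assign = trial
        if cost(assign) < best_cost:
            best, best_cost = dict(assign), cost(assign)
    T *= 0.95                                                  # geometric cooling schedule
print("best mapping cost:", round(best_cost, 2))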
Vision-Based UAV Flight Control and Obstacle Avoidance
2006-01-01
denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote... structure analysis often involves computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive... First, we extract a set of features from each block. Second, we compute the distance between these two sets of features. In conventional motion...
From cosmos to connectomes: the evolution of data-intensive science.
Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S
2014-09-17
The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.
Nonlinear tuning techniques of plasmonic nano-filters
NASA Astrophysics Data System (ADS)
Kotb, Rehab; Ismail, Yehea; Swillam, Mohamed A.
2015-02-01
In this paper, a fitting model for the propagation constant and the losses of a metal-insulator-metal (MIM) plasmonic waveguide is proposed. Using this model, the modal characteristics of the MIM plasmonic waveguide can be obtained directly without solving Maxwell's equations from scratch. As a consequence, the simulation time and the computational cost needed to predict the response of different plasmonic structures can be reduced significantly. This fitting model is used to develop a closed-form model that describes the behavior of a plasmonic nano-filter. Easy and accurate mechanisms to tune the filter are investigated and analyzed. The filter tunability is based on using a nonlinear dielectric material with a Pockels or Kerr effect, and is achieved by applying an external voltage or by controlling the input light intensity. The proposed nano-filter supports both red and blue shifts in the resonance response, depending on the type of nonlinear material used. A new approach to control the input light intensity by applying an external voltage to a previous stage is investigated; thus, tunability of a stage that uses a Kerr material can be achieved by applying a voltage to a previous stage that uses a Pockels material. Using this method, the Kerr effect can be exploited electrically instead of by varying the intensity of the input source. This technique enhances the suitability of the device for on-chip integration. Tuning of the resonance wavelength with high accuracy, minimum insertion loss and high quality factor is obtained using these approaches.
Equivalent reduced model technique development for nonlinear system dynamic response
NASA Astrophysics Data System (ADS)
Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2013-04-01
The dynamic response of structural systems commonly involves nonlinear effects. Often times, structural systems are made up of several components, whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the System equivalent reduction expansion process (SEREP). Extremely reduced order models that provide dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for the dynamic response evaluation using direct integration techniques. The full-space solution will be compared to the response obtained using drastically reduced models to make evident the usefulness of the technique for a variety of analytical cases.
Correction factors for on-line microprobe analysis of multielement alloy systems
NASA Technical Reports Server (NTRS)
Unnam, J.; Tenney, D. R.; Brewer, W. D.
1977-01-01
An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
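The off-line fitting step described above amounts to a small least-squares computation. The sketch below uses synthetic values and illustrative variable names, not the reported binary-interaction data:

import numpy as np

# Synthetic binary-interaction data: true concentration vs. measured X-ray k-ratio
concentration = np.linspace(0.05, 0.95, 10)
k_ratio = concentration * (1 + 0.15 * (1 - concentration))   # made-up matrix effect

# Off-line: least-squares fit of a third-order polynomial (concentration as a
# function of measured k-ratio), giving four coefficients per binary interaction
coeffs = np.polyfit(k_ratio, concentration, 3)

# On-line: the minicomputer-style correction needs only the stored coefficients
measured = 0.42
estimate = np.polyval(coeffs, measured)
print(f"estimated concentration for k-ratio {measured}: {estimate:.3f}")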
Unification of color postprocessing techniques for 3-dimensional computational mechanics
NASA Technical Reports Server (NTRS)
Bailey, Bruce Charles
1985-01-01
To facilitate the understanding of complex three-dimensional numerical models, advanced interactive color postprocessing techniques are introduced. These techniques are sufficiently flexible so that postprocessing difficulties arising from model size, geometric complexity, response variation, and analysis type can be adequately overcome. Finite element, finite difference, and boundary element models may be evaluated with the prototype postprocessor. Elements may be removed from parent models to be studied as independent subobjects. Discontinuous responses may be contoured including responses which become singular, and nonlinear color scales may be input by the user for the enhancement of the contouring operation. Hit testing can be performed to extract precise geometric, response, mesh, or material information from the database. In addition, stress intensity factors may be contoured along the crack front of a fracture model. Stepwise analyses can be studied, and the user can recontour responses repeatedly, as if he were paging through the response sets. As a system, these tools allow effective interpretation of complex analysis results.
3D Acoustic Full Waveform Inversion for Engineering Purpose
NASA Astrophysics Data System (ADS)
Lim, Y.; Shin, S.; Kim, D.; Kim, S.; Chung, W.
2017-12-01
Seismic waveform inversion is one of the most actively researched seismic data processing techniques. In recent years, with an increase in marine development projects, seismic surveys are commonly conducted for engineering purposes; however, research on applying waveform inversion in this context is limited. Waveform inversion updates the subsurface physical properties by minimizing the difference between modeled and observed data. It can be used to generate an accurate subsurface image; however, the technique consumes substantial computational resources. Its most compute-intensive step is the calculation of the gradient and Hessian values, an aspect that is far more significant in 3D than in 2D. This paper introduces a new method for calculating gradient and Hessian values in an effort to reduce the computational burden. In conventional waveform inversion, the calculation area covers all sources and receivers. In seismic surveys for engineering purposes, the number of receivers is limited, so it is inefficient to construct the Hessian and gradient for the entire region (Figure 1). To tackle this problem, we calculate the gradient and the Hessian for each single shot only within the range of the relevant source and receivers, and then sum these contributions over all shots (Figure 2). We demonstrate that reducing the calculation area of the Hessian and gradient for each shot reduces the overall amount of computation and therefore the computation time, and that waveform inversion can be suitably applied for engineering purposes. In future research, we propose to ascertain an effective calculation range. This research was supported by the Basic Research Project (17-3314) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.
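The bookkeeping idea can be illustrated as follows; the grid sizes, aperture width, and placeholder per-shot gradient are assumptions, not the abstract's actual implementation:

import numpy as np

nx, nz = 400, 200                        # full model grid
full_gradient = np.zeros((nz, nx))

def local_gradient(shot_id, x0, x1):
    # Placeholder for the per-shot gradient computed only on columns x0..x1,
    # i.e., the aperture spanned by this shot's source and receivers.
    return np.random.rand(nz, x1 - x0)   # stands in for the real adjoint computation

shots = [(s, max(0, s - 30), min(nx, s + 30)) for s in range(40, nx, 40)]
for shot_id, x0, x1 in shots:
    g_local = local_gradient(shot_id, x0, x1)
    full_gradient[:, x0:x1] += g_local   # scatter-add into the full-size gradient
print("nonzero columns:", np.count_nonzero(full_gradient.sum(axis=0)))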
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale-up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
ERIC Educational Resources Information Center
Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee
2011-01-01
This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…
NASA Astrophysics Data System (ADS)
Naha, Pratap C.; Lau, Kristen C.; Hsu, Jessica C.; Hajfathalian, Maryam; Mian, Shaameen; Chhour, Peter; Uppuluri, Lahari; McDonald, Elizabeth S.; Maidment, Andrew D. A.; Cormode, David P.
2016-07-01
Earlier detection of breast cancer reduces mortality from this disease. As a result, the development of better screening techniques is a topic of intense interest. Contrast-enhanced dual-energy mammography (DEM) is a novel technique that has improved sensitivity for cancer detection. However, the development of contrast agents for this technique is in its infancy. We herein report gold-silver alloy nanoparticles (GSAN) that have potent DEM contrast properties and improved biocompatibility. GSAN formulations containing a range of gold : silver ratios and capped with m-PEG were synthesized and characterized using various analytical methods. DEM and computed tomography (CT) phantom imaging showed that GSAN produced robust contrast that was comparable to silver alone. Cell viability, reactive oxygen species generation and DNA damage results revealed that the formulations with 30% or higher gold content are cytocompatible to Hep G2 and J774A.1 cells. In vivo imaging was performed in mice with and without breast tumors. The results showed that GSAN produce strong DEM and CT contrast and accumulated in tumors. Furthermore, both in vivo imaging and ex vivo analysis indicated the excretion of GSAN via both urine and feces. In summary, GSAN produce strong DEM and CT contrast, and have potential for both blood pool imaging and for breast cancer screening. Electronic supplementary information (ESI) available: Reactive oxygen species generation and DNA damage methods, stability of GSAN in PBS, step phantom images and a DEM image of a gold nanoparticle phantom, GSAN CT phantom results. See DOI: 10.1039/c6nr02618d
NASA Astrophysics Data System (ADS)
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
To meet the real-time, reliability and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform is constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: the cloud computing platform can be applied to computing-intensive aerospace experiment workloads, whereas for I/O-intensive workloads the use of traditional physical machines is recommended.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4x speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
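A toy, single-machine imitation of the MapReduce pattern described (the real system runs on Hadoop/HDFS): the map step emits (sample-index, contribution) pairs per scatterer, and the reduce step accumulates pairs sharing a key. The scene and phase model below are grossly simplified placeholders:

from collections import defaultdict
import cmath

# Toy scene: (range_bin, complex reflectivity) pairs for a handful of scatterers
scene = [(10, 1.0 + 0j), (10, 0.5j), (42, 0.8 + 0.2j), (97, 0.3 - 0.1j)]

def map_step(scatterer):
    # Emit (key, value): the raw-data sample index this scatterer contributes to,
    # and its phase-rotated echo contribution (placeholder physics).
    rbin, refl = scatterer
    phase = cmath.exp(-1j * 0.01 * rbin)
    yield rbin, refl * phase

def reduce_step(pairs):
    # Accumulate contributions that share the same raw-data sample index.
    acc = defaultdict(complex)
    for key, value in pairs:
        acc[key] += value
    return dict(acc)

intermediate = [kv for s in scene for kv in map_step(s)]
raw_samples = reduce_step(intermediate)
print(raw_samples)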
A Distributive, Non-Destructive, Real-Time Approach to Snowpack Monitoring
NASA Technical Reports Server (NTRS)
Frolik, Jeff; Skalka, Christian
2012-01-01
This invention is designed to ascertain the snow water equivalence (SWE) of snowpacks with better spatial and temporal resolutions than present techniques. The approach is ground-based, as opposed to some techniques that are air-based. In addition, the approach is compact, non-destructive, and can be communicated with remotely, and thus can be deployed in areas not possible with current methods. Presently there are two principal ground-based techniques for obtaining SWE measurements. The first is manual snow core measurements of the snowpack. This approach is labor-intensive, destructive, and has poor temporal resolution. The second approach is to deploy a large (e.g., 3x3 m) snow pillow, which requires significant infrastructure, is potentially hazardous [uses an approximately 200-gallon (approximately 760-L) antifreeze-filled bladder], and requires deployment in a large, flat area. High deployment costs necessitate few installations, thus yielding poor spatial resolution of data. Both approaches have limited usefulness in complex and/or avalanche-prone terrains. This approach is compact, non-destructive to the snowpack, provides high temporal resolution data, and due to potential low cost, can be deployed with high spatial resolution. The invention consists of three primary components: a robust wireless network and computing platform designed for harsh climates, new SWE sensing strategies, and algorithms for smart sampling, data logging, and SWE computation.
Thonusin, Chanisa; IglayReger, Heidi B; Soni, Tanu; Rothberg, Amy E; Burant, Charles F; Evans, Charles R
2017-11-10
In recent years, mass spectrometry-based metabolomics has increasingly been applied to large-scale epidemiological studies of human subjects. However, the successful use of metabolomics in this context is subject to the challenge of detecting biologically significant effects despite substantial intensity drift that often occurs when data are acquired over a long period or in multiple batches. Numerous computational strategies and software tools have been developed to aid in correcting for intensity drift in metabolomics data, but most of these techniques are implemented using command-line driven software and custom scripts which are not accessible to all end users of metabolomics data. Further, it has not yet become routine practice to assess the quantitative accuracy of drift correction against techniques which enable true absolute quantitation such as isotope dilution mass spectrometry. We developed an Excel-based tool, MetaboDrift, to visually evaluate and correct for intensity drift in a multi-batch liquid chromatography - mass spectrometry (LC-MS) metabolomics dataset. The tool enables drift correction based on either quality control (QC) samples analyzed throughout the batches or using QC-sample independent methods. We applied MetaboDrift to an original set of clinical metabolomics data from a mixed-meal tolerance test (MMTT). The performance of the method was evaluated for multiple classes of metabolites by comparison with normalization using isotope-labeled internal standards. QC sample-based intensity drift correction significantly improved correlation with IS-normalized data, and resulted in detection of additional metabolites with significant physiological response to the MMTT. The relative merits of different QC-sample curve fitting strategies are discussed in the context of batch size and drift pattern complexity. Our drift correction tool offers a practical, simplified approach to drift correction and batch combination in large metabolomics studies. Copyright © 2017 Elsevier B.V. All rights reserved.
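This is not MetaboDrift itself, but a minimal sketch of the QC-sample strategy it implements: fit a smooth curve to QC intensities versus injection order for one metabolite and divide every sample by the fitted trend. The drift model, QC spacing, and polynomial order are illustrative assumptions:

import numpy as np
np.random.seed(0)

# Injection order and raw peak intensities for one metabolite (synthetic drift)
order = np.arange(1, 41)
raw = 1e6 * (1 - 0.006 * order) + np.random.normal(0, 1e4, order.size)
is_qc = (order % 5 == 0)                 # every 5th injection is a pooled QC sample

# Fit a low-order polynomial to the QC intensities versus injection order
fit = np.poly1d(np.polyfit(order[is_qc], raw[is_qc], 2))
trend = fit(order)

# Divide by the trend (scaled to the median QC level) to remove intra-batch drift
corrected = raw / (trend / np.median(raw[is_qc]))
print("CV before: %.1f%%  after: %.1f%%"
      % (100 * raw.std() / raw.mean(), 100 * corrected.std() / corrected.mean()))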
Patni, Nidhi; Burela, Nagarjuna; Pasricha, Rajesh; Goyal, Jaishree; Soni, Tej Prakash; Kumar, T Senthil; Natarajan, T
2017-01-01
The aim was to achieve the best possible therapeutic ratio using high-precision external beam radiation therapy techniques (image-guided radiation therapy/volumetric modulated arc therapy [IGRT/VMAT]) in cases of carcinoma of the cervix, using kilovoltage cone-beam computed tomography (kV-CBCT). One hundred and five patients with gynecological malignancies who were treated with IGRT (IGRT/VMAT) were included in the study. CBCT was performed once a week for intensity-modulated radiation therapy and daily for IGRT/VMAT. These images were registered with the planning CT images, and translational errors were applied and recorded. In all, 2078 CBCT images were studied. The planning target volume margins were calculated from the setup variations, which were 5.8, 10.3, and 5.6 mm in the anteroposterior, superoinferior, and mediolateral directions. This allowed adequate dose delivery to the clinical target volume, the sparing of organs at risk, and the avoidance of geographic misses. Daily kV-CBCT is a satisfactory method for accurate patient positioning in treating gynecological cancers with high-precision techniques.
Optical investigations of nanostructured oxides and semiconductors
NASA Astrophysics Data System (ADS)
Irvin, Patrick Richard
This work is motivated by the prospect of building a quantum computer: a device that would allow physicists to explore quantum mechanics more deeply, and allow everyone else to keep their credit card numbers safe on the Internet. In this thesis we explore two classes of materials that are relevant to a proposed quantum computer architecture: oxides and semiconductors. Systems with a ferroelectric to paraelectric transition in the vicinity of room temperature are useful for devices. We investigate strained SrTiO3, which is ferroelectric at room temperature, and a composite material of (Ba,Sr)TiO3 and MgO. We present optical techniques to measure electron spin dynamics with GHz dynamical bandwidth, transform-limited spectral selectivity, and phase-sensitive detection. We demonstrate this technique by measuring GHz spin precession in n-GaAs. We also describe our efforts to optically probe InAs/GaAs and GaAs/AlGaAs quantum dots. Nanoscale devices with photonic properties have been the subject of intense research over the past decade. Potential nanophotonic applications include communications, polarization-sensitive detectors, and solar power generation. Here we show the photosensitivity of a nanoscale detector written at the interface between two oxides.
Z2Pack: Numerical implementation of hybrid Wannier centers for identifying topological materials
NASA Astrophysics Data System (ADS)
Gresch, Dominik; Autès, Gabriel; Yazyev, Oleg V.; Troyer, Matthias; Vanderbilt, David; Bernevig, B. Andrei; Soluyanov, Alexey A.
2017-02-01
The intense theoretical and experimental interest in topological insulators and semimetals has established band structure topology as a fundamental material property. Consequently, identifying band topologies has become an important, but often challenging, problem, with no exhaustive solution at the present time. In this work we compile a series of techniques, some previously known, that allow for a solution to this problem for a large set of the possible band topologies. The method is based on tracking hybrid Wannier charge centers computed for relevant Bloch states, and it works at all levels of materials modeling: continuous k·p models, tight-binding models, and ab initio calculations. We apply the method to compute and identify Chern, Z2, and crystalline topological insulators, as well as topological semimetal phases, using real material examples. Moreover, we provide a numerical implementation of this technique (the Z2Pack software package) that is ideally suited for high-throughput screening of materials databases for compounds with nontrivial topologies. We expect that our work will allow researchers to (a) identify topological materials optimal for experimental probes, (b) classify existing compounds, and (c) reveal materials that host novel, not yet described, topological states.
Computer-aided diagnosis of cavernous malformations in brain MR images.
Wang, Huiquan; Ahmed, S Nizam; Mandal, Mrinal
2018-06-01
Cavernous malformation or cavernoma is one of the most common epileptogenic lesions. It is a type of brain vessel abnormality that can cause serious symptoms such as seizures, intracerebral hemorrhage, and various neurological disorders. Manual detection of cavernomas by physicians in a large set of brain MRI slices is a time-consuming and labor-intensive task and often delays diagnosis. In this paper, we propose a computer-aided diagnosis (CAD) system for cavernomas based on T2-weighted axial plane MRI image analysis. The proposed technique first extracts the brain area based on atlas registration and active contour model, and then performs template matching to obtain candidate cavernoma regions. Texture, the histogram of oriented gradients and local binary pattern features of each candidate region are calculated, and principal component analysis is applied to reduce the feature dimensionality. Support vector machines (SVMs) are finally used to classify each region into cavernoma or non-cavernoma so that most of the false positives (obtained by template matching) are eliminated. The performance of the proposed CAD system is evaluated and experimental results show that it provides superior performance in cavernoma detection compared to existing techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.
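A bare-bones sketch of the candidate-classification stage using common library routines (skimage HOG, scikit-learn PCA and SVM) follows; the feature set, window size, and training data here are placeholders, not those of the paper:

import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def describe(patch):
    # Feature vector for one candidate region: HOG plus simple intensity statistics
    h = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([h, [patch.mean(), patch.std()]])

# Placeholder training data: 40 candidate patches of size 32x32 with labels
rng = np.random.default_rng(0)
patches = rng.random((40, 32, 32))
labels = rng.integers(0, 2, 40)          # 1 = cavernoma, 0 = false positive (synthetic)

X = np.array([describe(p) for p in patches])
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, labels)
print("predicted label for a new candidate:",
      clf.predict([describe(rng.random((32, 32)))])[0])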
Analysis of intensity variability in multislice and cone beam computed tomography.
Nackaerts, Olivia; Maes, Frederik; Yan, Hua; Couto Souza, Paulo; Pauwels, Ruben; Jacobs, Reinhilde
2011-08-01
The aim of this study was to evaluate the variability of intensity values in cone beam computed tomography (CBCT) imaging compared with multislice computed tomography Hounsfield units (MSCT HU) in order to assess the reliability of density assessments using CBCT images. A quality control phantom was scanned with an MSCT scanner and five CBCT scanners. In one CBCT scanner, the phantom was scanned repeatedly in the same and in different positions. Images were analyzed using registration to a mathematical model. MSCT images were used as a reference. Density profiles of MSCT showed stable HU values, whereas in CBCT imaging the intensity values were variable over the profile. Repositioning of the phantom resulted in large fluctuations in intensity values. The use of intensity values in CBCT images is not reliable, because the values are influenced by device, imaging parameters and positioning. © 2011 John Wiley & Sons A/S.
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient at quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for scientific discovery in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high-performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, the framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools, and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides a valuable reference for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
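A minimal illustration of the tile-based decomposition idea: points are binned into fixed-size tiles keyed by tile indices, so each tile (plus a halo, where a task needs neighbours) can be processed independently by a worker. The tile size, point format, and per-tile summary below are illustrative assumptions:

import numpy as np
from collections import defaultdict

def tile_key(x, y, tile_size=100.0):
    # Key a point to the tile that contains it
    return (int(x // tile_size), int(y // tile_size))

# Synthetic LiDAR points: x, y, z
points = np.random.rand(100000, 3) * [1000.0, 1000.0, 50.0]

tiles = defaultdict(list)
for x, y, z in points:
    tiles[tile_key(x, y)].append((x, y, z))

# Each tile can now be shipped to a separate worker (e.g., a Hadoop map task)
def process_tile(tile_points):
    zs = np.array([p[2] for p in tile_points])
    return zs.min(), zs.mean(), zs.max()      # e.g., a simple per-tile elevation summary

summaries = {key: process_tile(pts) for key, pts in tiles.items()}
print(len(summaries), "tiles processed")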
NASA Technical Reports Server (NTRS)
Ahamed, Aakash; Bolten, John; Doyle, Colin; Fayne, Jessica
2016-01-01
Floods are the costliest natural disaster, causing approximately 6.8 million deaths in the twentieth century alone. Worldwide economic flood damage estimates in 2012 exceed $19 Billion USD. Extended duration floods also pose longer term threats to food security, water, sanitation, hygiene, and community livelihoods, particularly in developing countries. Projections by the Intergovernmental Panel on Climate Change (IPCC) suggest that precipitation extremes, rainfall intensity, storm intensity, and variability are increasing due to climate change. Increasing hydrologic uncertainty will likely lead to unprecedented extreme flood events. As such, there is a vital need to enhance and further develop traditional techniques used to rapidly assess flooding and extend analytical methods to estimate impacted population and infrastructure. Measuring flood extent in situ is generally impractical, time consuming, and can be inaccurate. Remotely sensed imagery acquired from space-borne and airborne sensors provides a viable platform for consistent and rapid wall-to-wall monitoring of large flood events through time. Terabytes of freely available satellite imagery are made available online each day by NASA, ESA, and other international space research institutions. Advances in cloud computing and data storage technologies allow researchers to leverage these satellite data and apply analytical methods at scale. Repeat-survey earth observations help provide insight about how natural phenomena change through time, including the progression and recession of floodwaters. In recent years, cloud-penetrating radar remote sensing techniques (e.g., Synthetic Aperture Radar) and high temporal resolution imagery platforms (e.g., MODIS and its 1-day return period), along with high performance computing infrastructure, have enabled significant advances in software systems that provide flood warning, assessments, and hazard reduction potential. By incorporating social and economic data, researchers can develop systems that automatically quantify the socioeconomic impacts resulting from flood disaster events.
HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation
NASA Technical Reports Server (NTRS)
Sterling, Thomas; Bergman, Larry
2000-01-01
Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access times a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.
Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak
2011-01-01
This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined, and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s-era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T > 13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength dependent and wavelength integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations.
Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis. Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.
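The percent-overlap comparison metric defined in Part II can be sketched as follows; the interval endpoints in the example are illustrative and do not reproduce values from the paper.

```python
def percent_overlap(pred_lo, pred_hi, meas_lo, meas_hi):
    """Percent of the predicted uncertainty bar that overlaps the measured uncertainty bar."""
    overlap = max(0.0, min(pred_hi, meas_hi) - max(pred_lo, meas_lo))
    return 100.0 * overlap / (pred_hi - pred_lo)

# Example: a prediction spanning 7-13 W/cm^2 against a measurement spanning 10-14 W/cm^2
print(percent_overlap(7.0, 13.0, 10.0, 14.0))  # 50.0 -> half of the predicted bar overlaps
```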
Uses of megavoltage digital tomosynthesis in radiotherapy
NASA Astrophysics Data System (ADS)
Sarkar, Vikren
With the advent of intensity modulated radiotherapy, radiation treatment plans are becoming more conformal to the tumor with decreasing margins. It is therefore of prime importance that the patient be positioned correctly prior to treatment. Consequently, image guided treatment is necessary for intensity modulated radiotherapy plans to be implemented successfully. Current advanced imaging devices require costly hardware and software upgrades, and radiation imaging solutions, such as cone beam computed tomography, may introduce extra radiation dose to the patient in order to acquire better quality images. Thus, there is a need to extend the abilities and functions of existing imaging devices while reducing cost and radiation dose. Existing electronic portal imaging devices can be used to generate computed tomography-like tomograms through projection images acquired over a small angle using the technique of cone-beam digital tomosynthesis. Since it uses a fraction of the images required for computed tomography reconstruction, use of this technique correspondingly delivers only a fraction of the imaging dose to the patient. Furthermore, cone-beam digital tomosynthesis can be offered as a software-only solution as long as a portal imaging device is available. In this study, the feasibility of performing digital tomosynthesis using individually-acquired megavoltage images from a charge coupled device-based electronic portal imaging device was investigated. Three digital tomosynthesis reconstruction algorithms, the shift-and-add, filtered back-projection, and simultaneous algebraic reconstruction technique, were compared considering the final image quality and radiation dose during imaging. A software platform, DART, was created using a combination of the Matlab and C++ languages. The platform allows for the registration of a reference Cone Beam Digital Tomosynthesis (CBDT) image against a daily acquired set to determine how to shift the patient prior to treatment. Finally, the software was extended to investigate whether the digital tomosynthesis dataset could be used in an adaptive radiotherapy regimen through the use of the Pinnacle treatment planning software to recalculate dose delivered. The feasibility study showed that the megavoltage CBDT visually agreed with corresponding megavoltage computed tomography images. The comparative study showed that the best compromise between imaging quality and imaging dose is obtained when 11 projection images, acquired over an imaging angle of 40°, are used with the filtered back-projection algorithm. DART was successfully used to register reference and daily image sets to within 1 mm in-plane and 2.5 mm out of plane. The DART platform was also effectively used to generate updated files that the Pinnacle treatment planning system used to calculate updated dose in a rigidly shifted patient. These doses were then used to calculate a cumulative dose distribution that could be used by a physician as a reference to decide when the treatment plan should be updated. In conclusion, this study showed that a software solution is possible to extend existing electronic portal imaging devices to function as cone-beam digital tomosynthesis devices and meet the daily imaging requirements of image guided intensity modulated radiotherapy treatments. The DART platform also has the potential to be used as part of an adaptive radiotherapy solution.
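Of the three reconstruction algorithms compared above, shift-and-add is the simplest; the sketch below illustrates it under a parallel-shift approximation. The shift model, pixel pitch, and angles are assumptions, and the study's filtered back-projection and SART implementations are not shown.

```python
import numpy as np

def shift_and_add(projections, angles_deg, plane_depth, pixel_pitch=1.0):
    """projections: list of 2-D portal images; angles_deg: acquisition angles;
    plane_depth: distance (same units as pixel_pitch) of the plane to bring into focus."""
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, ang in zip(projections, angles_deg):
        # lateral shift (in pixels) that aligns the chosen plane across projections
        shift_px = int(round(plane_depth * np.tan(np.radians(ang)) / pixel_pitch))
        recon += np.roll(proj, shift_px, axis=1)
    return recon / len(projections)
```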
NASA Astrophysics Data System (ADS)
Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley
2015-04-01
The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, and to provide a new infrastructure for future interdisciplinary research.
An optical flow-based method for velocity field of fluid flow estimation
NASA Astrophysics Data System (ADS)
Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz
2017-06-01
The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on the computation of partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low due to the fact that an exact estimation of spatial derivatives is very difficult in the presence of rapid intensity changes in the PIV images, caused by particles having small diameters. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives at intermediate points between pixels. Numerical analysis proves that the method is able to estimate even a separate vector for each particle with a 5×5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
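A minimal sketch of the underlying Lucas-Kanade step for a single interrogation window; the finite-difference derivatives used here are exactly what the paper replaces with Gaussian radial basis function interpolation, which is not reproduced.

```python
import numpy as np

def lucas_kanade_window(I1, I2):
    """Estimate one displacement vector (vx, vy) for a small window of two successive PIV frames."""
    Ix = np.gradient(I1.astype(float), axis=1)   # spatial derivatives by finite differences;
    Iy = np.gradient(I1.astype(float), axis=0)   # the paper evaluates these from an RBF interpolant
    It = I2.astype(float) - I1.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares solution of [Ix Iy] v = -It
    return v
```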
A preliminary study of molecular dynamics on reconfigurable computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolinski, C.; Trouw, F. R.; Gokhale, M.
2003-01-01
In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique to track interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) [11] and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract, technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80) and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 GFlop/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.
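For orientation, the compute core that dominates MD workloads is pair-force evaluation plus time integration; the sketch below is a generic velocity-Verlet/Lennard-Jones illustration, not the kernel the authors mapped onto the FPGA fabrics.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Naive O(N^2) Lennard-Jones forces for an (N, 3) float array of positions."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = float(np.dot(r, r))
            s6 = (sigma * sigma / d2) ** 3
            fmag = 24.0 * eps * (2.0 * s6 * s6 - s6) / d2
            f[i] += fmag * r
            f[j] -= fmag * r
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=100, mass=1.0):
    """Integrate the equations of motion; pos and vel are (N, 3) float arrays, modified in place."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel
```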
Resolution enhancement of fiber Bragg grating temperature sensor using a cavity ring-down technique
NASA Astrophysics Data System (ADS)
Yarai, Atsushi; Hara, Katsuyuki
2018-02-01
A new technique for enhancing the measurement resolution of a fiber Bragg grating (FBG) temperature sensor is proposed. This technique uses a cavity ring-down approach to amplify optical intensity by accumulating otherwise unremarkable intensity changes. A wavelength-stabilized optical pulse with a width of 10 ns circulates several times inside an optical fiber loop that contains an FBG sensor. In other words, the loop system functions as an integrator of slight intensity transitions. A temperature resolution of at least 0.02 °C was achieved at 20.0 °C. The resolution with this technique is at least five times higher than with previous techniques.
Optimization of combined electron and photon beams for breast cancer
NASA Astrophysics Data System (ADS)
Xiong, W.; Li, J.; Chen, L.; Price, R. A.; Freedman, G.; Ding, M.; Qin, L.; Yang, J.; Ma, C.-M.
2004-05-01
Recently, intensity-modulated radiation therapy and modulated electron radiotherapy have gathered a growing interest for the treatment of breast and head and neck tumours. In this work, we carried out a study to combine electron and photon beams to achieve differential dose distributions for multiple target volumes simultaneously. A Monte Carlo based treatment planning system was investigated, which consists of a set of software tools to perform accurate dose calculation, treatment optimization, leaf sequencing and plan analysis. We compared breast treatment plans generated using this home-grown optimization and dose calculation software for different treatment techniques. Five different planning techniques have been developed for this study based on a standard photon beam whole breast treatment and an electron beam tumour bed cone down. Technique 1 includes two 6 MV tangential wedged photon beams followed by an anterior boost electron field. Technique 2 includes two 6 MV tangential intensity-modulated photon beams and the same boost electron field. Technique 3 optimizes two intensity-modulated photon beams based on a boost electron field. Technique 4 optimizes two intensity-modulated photon beams and the weight of the boost electron field. Technique 5 combines two intensity-modulated photon beams with an intensity-modulated electron field. Our results show that technique 2 can reduce hot spots both in the breast and the tumour bed compared to technique 1 (dose inhomogeneity is reduced from 34% to 28% for the target). Techniques 3, 4 and 5 can deliver a more homogeneous dose distribution to the target (with dose inhomogeneities for the target of 22%, 20% and 9%, respectively). In many cases techniques 3, 4 and 5 can reduce the dose to the lung and heart. It is concluded that combined photon and electron beam therapy may be advantageous for treating breast cancer compared to conventional treatment techniques using tangential wedged photon beams followed by a boost electron field.
NASA Astrophysics Data System (ADS)
Luna, Byron Quan; Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Nadim, Farrokh
2013-04-01
Landslide risk must be assessed at the appropriate scale in order to allow effective risk management. At the moment, few deterministic models exist that can do all the computations required for a complete landslide risk assessment at a regional scale. This arises from the difficulty of precisely defining the location and volume of the released mass and from the inability of the models to compute the displacement for a large number of individual initiation areas (which is computationally exhaustive). This paper presents a medium-scale, dynamic physical model for rapid mass movements in mountainous and volcanic areas. The deterministic nature of the approach makes it possible to apply it to other sites, since it considers the frictional equilibrium conditions for the initiation process, the rheological resistance of the displaced flow for the run-out process, and a fragility curve that links intensity to economic loss for each building. The model takes into account the triggering effect of an earthquake, intense rainfall and a combination of both (spatial and temporal). The run-out module of the model considers the flow as a 2-D continuum medium solving the equations of mass balance and momentum conservation. The model is embedded in an open-source geographical information system (GIS) environment, it is computationally efficient and it is transparent (understandable and comprehensible) for the end-user. The model was applied to a virtual region, assessing landslide hazard, vulnerability and risk. A Monte Carlo simulation scheme was applied to quantify, propagate and communicate the effects of uncertainty in input parameters on the final results; in this technique, the input distributions are recreated through sampling and the failure criteria are calculated for each stochastic realisation of the site properties. The model is able to identify the released volumes of the critical slopes and the areas threatened by the run-out intensity. The final outcome is the estimation of individual building damage and total economic risk. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No 265138, New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
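A minimal sketch of the Monte Carlo propagation described above; the input distributions, the simplified failure criterion, and the fragility curve below are illustrative assumptions, not the model's actual parameterisation.

```python
import numpy as np
rng = np.random.default_rng(0)

def run_monte_carlo(n_runs=10_000, building_value=100_000.0):
    losses = np.empty(n_runs)
    for k in range(n_runs):
        friction_angle = rng.normal(30.0, 3.0)          # degrees
        cohesion = rng.lognormal(mean=2.0, sigma=0.3)   # kPa
        rainfall = rng.gamma(shape=2.0, scale=20.0)     # mm/day trigger
        # placeholder infinite-slope style failure criterion
        fos = (cohesion + 50.0 * np.tan(np.radians(friction_angle))) / (40.0 + rainfall)
        intensity = 0.0 if fos >= 1.0 else min(1.0, 1.0 / fos - 1.0)
        damage_ratio = 1.0 - np.exp(-3.0 * intensity)   # illustrative fragility curve
        losses[k] = damage_ratio * building_value
    return losses.mean(), np.percentile(losses, 95)     # expected loss and 95th percentile

print(run_monte_carlo())
```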
A boundary element alternating method for two-dimensional mixed-mode fracture problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Krishnamurthy, T.
1992-01-01
A boundary element alternating method, denoted herein as BEAM, is presented for two dimensional fracture problems. This is an iterative method which alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For problems of edge cracks a technique of utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, the BEAM is applied to a variety of two dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.
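A skeleton (assumed structure, not the authors' code) of the alternation described above: the BEM solution of the uncracked plate and the analytical embedded-crack solution are applied in turn until the uncorrected crack-face contribution becomes negligible.

```python
def beam_alternating(bem_uncracked, crack_analytical, applied_loads, tol=1e-6, max_iter=50):
    """bem_uncracked(loads) -> tractions along the crack location in the uncracked plate;
    crack_analytical(tractions) -> (stress_intensity_factor, residual_boundary_loads)."""
    total_K = 0.0
    loads = applied_loads
    for _ in range(max_iter):
        crack_tractions = bem_uncracked(loads)                  # solution 1: uncracked finite plate
        K, residual_loads = crack_analytical(crack_tractions)   # solution 2: cracked infinite plate
        total_K += K
        if abs(K) < tol * (abs(total_K) + 1e-30):               # contribution has converged
            break
        loads = residual_loads                                  # feed the correction back
    return total_K
```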
Single-pixel computational ghost imaging with helicity-dependent metasurface hologram.
Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang
2017-09-01
Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security.
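The correlation step of computational ghost imaging admits a compact sketch; the metasurface-generated, helicity-dependent patterns themselves are not modeled, and `patterns` and `bucket_signals` are assumed inputs.

```python
import numpy as np

def ghost_image(patterns, bucket_signals):
    """patterns: (M, H, W) known illumination patterns; bucket_signals: (M,) single-pixel readings."""
    P = np.asarray(patterns, dtype=float)
    s = np.asarray(bucket_signals, dtype=float)
    # <I*S> - <I><S>: correlation of intensity fluctuations recovers the object
    return (P * s[:, None, None]).mean(axis=0) - P.mean(axis=0) * s.mean()
```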
Cao, Yanpeng; Tisse, Christel-Loic
2014-02-01
In this Letter, we propose an efficient and accurate solution to remove temperature-dependent nonuniformity effects introduced by the imaging optics. This single-image-based approach computes optics-related fixed pattern noise (FPN) by fitting the derivatives of correction model to the gradient components, locally computed on an infrared image. A modified bilateral filtering algorithm is applied to local pixel output variations, so that the refined gradients are most likely caused by the nonuniformity associated with optics. The estimated bias field is subtracted from the raw infrared imagery to compensate the intensity variations caused by optics. The proposed method is fundamentally different from the existing nonuniformity correction (NUC) techniques developed for focal plane arrays (FPAs) and provides an essential image processing functionality to achieve completely shutterless NUC for uncooled long-wave infrared (LWIR) imaging systems.
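A minimal sketch of the idea, under simplifying assumptions: the optics-related bias field is modeled as a smooth low-order surface, the model's derivatives are fitted to smoothed image gradients, and the fitted field is subtracted. The paper's modified bilateral filter is replaced here by plain Gaussian smoothing, and the quadratic bias model is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_optics_bias(img, sigma=5.0):
    img = img.astype(float)
    gy, gx = np.gradient(img)
    gx, gy = gaussian_filter(gx, sigma), gaussian_filter(gy, sigma)  # stand-in for bilateral filtering
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W].astype(float)
    # bias model b = a1*x + a2*y + a3*x^2 + a4*y^2 + a5*x*y (a constant offset is unobservable)
    dbdx = np.stack([np.ones_like(x), np.zeros_like(x), 2 * x, np.zeros_like(x), y], axis=-1)
    dbdy = np.stack([np.zeros_like(y), np.ones_like(y), np.zeros_like(y), 2 * y, x], axis=-1)
    A = np.concatenate([dbdx.reshape(-1, 5), dbdy.reshape(-1, 5)])
    rhs = np.concatenate([gx.ravel(), gy.ravel()])
    a, *_ = np.linalg.lstsq(A, rhs, rcond=None)                      # fit model derivatives to gradients
    return a[0] * x + a[1] * y + a[2] * x**2 + a[3] * y**2 + a[4] * x * y

# corrected = raw_frame - estimate_optics_bias(raw_frame)
```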
NASA Astrophysics Data System (ADS)
Nair, B. G.; Winter, N.; Daniel, B.; Ward, R. M.
2016-07-01
Direct measurement of the flow of electric current during VAR is extremely difficult due to the aggressive environment, as the arc process itself controls the distribution of current. In previous studies, the technique of “magnetic source tomography” was presented; this was shown to be effective, but it used a computationally intensive iterative method to analyse the distribution of arc centre position. In this paper we present faster computational methods requiring less numerical optimisation to determine the centre position of a single distributed arc both numerically and experimentally. Numerical validation of the algorithms was done on models, and experimental validation on measurements of titanium and nickel alloys (Ti6Al4V and INCONEL 718). The results are used to comment on the effects of process parameters on arc behaviour during VAR.
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
NASA Astrophysics Data System (ADS)
Vennila, P.; Govindaraju, M.; Venkatesh, G.; Kamal, C.
2016-05-01
Fourier transform infrared (FT-IR) and Fourier transform Raman (FT-Raman) spectroscopic techniques have been used to analyze the O-methoxy benzaldehyde (OMB) molecule. The fundamental vibrational frequencies and the intensities of the vibrational bands were evaluated using density functional theory (DFT). The vibrational analysis of the stable isomer of OMB has been carried out by FT-IR and FT-Raman in combination with the theoretical method. The first-order hyperpolarizability and the anisotropy polarizability invariant were computed by the DFT method. The atomic charges, hardness, softness, ionization potential, electronegativity, HOMO-LUMO energies, and electrophilicity index have been calculated. The 13C and 1H nuclear magnetic resonance (NMR) chemical shifts have also been obtained by the GIAO method. The molecular electrostatic potential (MEP) has been calculated by the DFT method. Electronic excitation energies, oscillator strengths and excited-state characteristics were computed by the closed-shell singlet calculation method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele
2014-11-11
It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To minimize the virtualization overhead by avoiding software virtualization, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).
SCCT guidelines on radiation dose and dose-optimization strategies in cardiovascular CT
Halliburton, Sandra S.; Abbara, Suhny; Chen, Marcus Y.; Gentry, Ralph; Mahesh, Mahadevappa; Raff, Gilbert L.; Shaw, Leslee J.; Hausleiter, Jörg
2012-01-01
Over the last few years, computed tomography (CT) has developed into a standard clinical test for a variety of cardiovascular conditions. The emergence of cardiovascular CT during a period of dramatic increase in radiation exposure to the population from medical procedures and heightened concern about the subsequent potential cancer risk has led to intense scrutiny of the radiation burden of this new technique. This has hastened the development and implementation of dose reduction tools and prompted closer monitoring of patient dose. In an effort to aid the cardiovascular CT community in incorporating patient-centered radiation dose optimization and monitoring strategies into standard practice, the Society of Cardiovascular Computed Tomography has produced a guideline document to review available data and provide recommendations regarding interpretation of radiation dose indices and predictors of risk, appropriate use of scanner acquisition modes and settings, development of algorithms for dose optimization, and establishment of procedures for dose monitoring. PMID:21723512
Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.
Li, Yeqing; Liu, Wei; Huang, Junzhou
2018-06-01
Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
Low Latency Workflow Scheduling and an Application of Hyperspectral Brightness Temperatures
NASA Astrophysics Data System (ADS)
Nguyen, P. T.; Chapman, D. R.; Halem, M.
2012-12-01
New system analytics for Big Data computing holds the promise of major scientific breakthroughs and discoveries from the exploration and mining of the massive data sets becoming available to the science community. However, such data intensive scientific applications face severe challenges in accessing, managing and analyzing petabytes of data. While the Hadoop MapReduce environment has been successfully applied to data intensive problems arising in business, there are still many scientific problem domains where limitations in the functionality of MapReduce systems prevent its wide adoption by those communities. This is mainly because MapReduce does not readily support the unique science discipline needs such as special science data formats, graphic and computational data analysis tools, maintaining high degrees of computational accuracies, and interfacing with application's existing components across heterogeneous computing processors. We address some of these limitations by exploiting the MapReduce programming model for satellite data intensive scientific problems and address scalability, reliability, scheduling, and data management issues when dealing with climate data records and their complex observational challenges. In addition, we will present techniques to support the unique Earth science discipline needs such as dealing with special science data formats (HDF and NetCDF). We have developed a Hadoop task scheduling algorithm that improves latency by 2x for a scientific workflow including the gridding of the EOS AIRS hyperspectral Brightness Temperatures (BT). This workflow processing algorithm has been tested at the Multicore Computing Center private Hadoop based Intel Nehalem cluster, as well as in a virtual mode under the Open Source Eucalyptus cloud. The 55TB AIRS hyperspectral L1b Brightness Temperature record has been gridded at the resolution of 0.5x1.0 degrees, and we have computed a 0.9 annual anti-correlation to the El Nino Southern oscillation in the Nino 4 region, as well as a 1.9 Kelvin decadal Arctic warming in the 4u and 12u spectral regions. Additionally, we will present the frequency of extreme global warming events by the use of a normalized maximum BT in a grid cell relative to its local standard deviation. A low-latency Hadoop scheduling environment maintains data integrity and fault tolerance in a MapReduce data intensive Cloud environment while improving the "time to solution" metric by 35% when compared to a more traditional parallel processing system for the same dataset. Our next step will be to improve the usability of our Hadoop task scheduling system, to enable rapid prototyping of data intensive experiments by means of processing "kernels". We will report on the performance and experience of implementing these experiments on the NEX testbed, and propose the use of a graphical directed acyclic graph (DAG) interface to help us develop on-demand scientific experiments. Our workflow system works within Hadoop infrastructure as a replacement for the FIFO or FairScheduler, thus the use of Apache "Pig" latin or other Apache tools may also be worth investigating on the NEX system to improve the usability of our workflow scheduling infrastructure for rapid experimentation.
Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data
NASA Astrophysics Data System (ADS)
Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.
2014-09-01
The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduces the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the baseline implementation running on the same machine. We will present an overview of the algorithms and results that demonstrate the scalability of our concepts.
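The spatial-hash variant can be sketched as follows (illustrative only, not the flight code): positions at a common epoch are hashed into cubic cells sized by the screening distance, so only objects in the same or adjacent cells proceed to high-fidelity conjunction analysis.

```python
from collections import defaultdict
from itertools import product
import numpy as np

def candidate_pairs(positions, screen_km=10.0):
    """positions: dict of object id -> (x, y, z) in km at one time step."""
    cells = defaultdict(list)
    for oid, p in positions.items():
        cells[tuple(np.floor(np.asarray(p) / screen_km).astype(int))].append(oid)
    pairs = set()
    for cell, ids in cells.items():
        for d in product((-1, 0, 1), repeat=3):                  # this cell and its 26 neighbours
            for other in cells.get(tuple(np.add(cell, d)), []):
                for oid in ids:
                    if oid < other:
                        pairs.add((oid, other))
    return pairs   # only these pairs go on to high-fidelity conjunction analysis
```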
New Developments and Geoscience Applications of Synchrotron Computed Microtomography (Invited)
NASA Astrophysics Data System (ADS)
Rivers, M. L.; Wang, Y.; Newville, M.; Sutton, S. R.; Yu, T.; Lanzirotti, A.
2013-12-01
Computed microtomography is the extension to micron spatial resolution of the CAT scanning technique developed for medical imaging. Synchrotron sources are ideal for the method, since they provide a monochromatic, parallel beam with high intensity. High energy storage rings such as the Advanced Photon Source at Argonne National Laboratory produce x-rays with high energy, high brilliance, and high coherence. All of these factors combine to produce an extremely powerful imaging tool for earth science research. Techniques that have been developed include: - Absorption and phase contrast computed tomography with spatial resolution below one micron. - Differential contrast computed tomography, imaging above and below the absorption edge of a particular element. - High-pressure tomography, imaging inside a pressure cell at pressures above 10 GPa. - High speed radiography and tomography, with 100 microsecond temporal resolution. - Fluorescence tomography, imaging the 3-D distribution of elements present at ppm concentrations. - Radiographic strain measurements during deformation at high confining pressure, combined with precise x-ray diffraction measurements to determine stress. These techniques have been applied to important problems in earth and environmental sciences, including: - The 3-D distribution of aqueous and organic liquids in porous media, with applications in contaminated groundwater and petroleum recovery. - The kinetics of bubble formation in magma chambers, which control explosive volcanism. - Studies of the evolution of the early solar system from 3-D textures in meteorites. - Accurate crystal size distributions in volcanic systems, important for understanding the evolution of magma chambers. - The equation-of-state of amorphous materials at high pressure, using both direct measurements of volume as a function of pressure and measurements of the change in x-ray absorption coefficient as a function of pressure. - The location and chemical speciation of toxic elements such as arsenic and nickel in soils and in plant tissues in contaminated Superfund sites. - The strength of earth materials under the pressure and temperature conditions of the Earth's mantle, providing insights into plate tectonics and the generation of earthquakes.
Mobile computing device configured to compute irradiance, glint, and glare of the sun
Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib
2014-03-11
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobler, Matt; Watson, Gordon; Leavitt, Dennis
Radiotherapy plays a key role in the definitive or adjuvant management of patients with mesothelioma of the pleural surface. Many patients are referred for radiation with intact lung following biopsy or subtotal pleurectomy. Delivery of efficacious doses of radiation to the pleural lining while avoiding lung parenchyma toxicity has been a difficult technical challenge. Using opposed photon fields produces doses in the lung that result in moderate-to-severe pulmonary toxicity in 100% of patients treated. Combined photon-electron beam treatment, at total doses of 4250 cGy to the pleural surface, results in two-thirds of the lung volume receiving over 2100 cGy. We have developed a technique using intensity-modulated photon arc therapy (IMRT) that significantly improves the dose distribution to the pleural surface with a concomitant decrease in dose to lung parenchyma compared to traditional techniques. IMRT treatment of the pleural lining consists of segments of photon arcs that can be intensity modulated with varying beam weights and multileaf positions to produce a more uniform distribution to the pleural surface, while at the same time reducing the overall dose to the lung itself. Computed tomography (CT) simulation is critical for precise identification of target volumes as well as critical normal structures (lung and heart). Rotational arc trajectories and individual leaf positions and weightings are then defined for each CT plane within the patient. This paper will describe the proposed rotational IMRT technique and, using simulated isodose distributions, show the improved potential for sparing of dose to the critical structures of the lung, heart, and spinal cord.
A scientific workflow framework for (13)C metabolic flux analysis.
Dalman, Tolga; Wiechert, Wolfgang; Nöh, Katharina
2016-08-20
Metabolic flux analysis (MFA) with (13)C labeling data is a high-precision technique to quantify intracellular reaction rates (fluxes). One of the major challenges of (13)C MFA is the interactivity of the computational workflow according to which the fluxes are determined from the input data (metabolic network model, labeling data, and physiological rates). Here, the workflow assembly is inevitably determined by the scientist who has to consider interacting biological, experimental, and computational aspects. Decision-making is context dependent and requires expertise, rendering an automated evaluation process hardly possible. Here, we present a scientific workflow framework (SWF) for creating, executing, and controlling on demand (13)C MFA workflows. (13)C MFA-specific tools and libraries, such as the high-performance simulation toolbox 13CFLUX2, are wrapped as web services and thereby integrated into a service-oriented architecture. Besides workflow steering, the SWF features transparent provenance collection and enables full flexibility for ad hoc scripting solutions. To handle compute-intensive tasks, cloud computing is supported. We demonstrate how the challenges posed by (13)C MFA workflows can be solved with our approach on the basis of two proof-of-concept use cases. Copyright © 2015 Elsevier B.V. All rights reserved.
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
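A minimal sketch of the merit-function idea, with an assumed surrogate interface and weighting; the paper's actual merit-function definitions are not reproduced.

```python
def merit(x, surrogate, rho=0.5):
    """Lower is better: reward a low predicted objective and a high surrogate uncertainty,
    so that candidate points also improve the quality of the approximation itself."""
    return surrogate.predict(x) - rho * surrogate.uncertainty(x)
```

Minimizing such a merit value balances exploitation (a low surrogate prediction of the expensive objective) against exploration (points where the approximation is poor, so a new evaluation improves it).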
Computational steering of GEM based detector simulations
NASA Astrophysics Data System (ADS)
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, then the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (or live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
Visual analysis of fluid dynamics at NASA's numerical aerodynamic simulation facility
NASA Technical Reports Server (NTRS)
Watson, Velvin R.
1991-01-01
A study aimed at describing and illustrating visualization tools used in Computational Fluid Dynamics (CFD) and indicating how these tools are likely to change by showing a projected resolution of the human computer interface is presented. The following are outlined using a graphically based test format: the revolution of human computer environments for CFD research; comparison of current environments; current environments with the ideal; predictions for the future CFD environments; what can be done to accelerate the improvements. The following comments are given: when acquiring visualization tools, potential rapid changes must be considered; environmental changes over the next ten years due to human computer interface cannot be fathomed; data flow packages such as AVS, apE, Explorer and Data Explorer are easy to learn and use for small problems, excellent for prototyping, but not so efficient for large problems; the approximation techniques used in visualization software must be appropriate for the data; it has become more cost effective to move jobs that fit on workstations and run only memory intensive jobs on the supercomputer; use of three dimensional skills will be maximized when the three dimensional environment is built in from the start.
Williams, Eric
2004-11-15
The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.
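A back-of-the-envelope check of the quoted figures; the ~24 kg product weight and the implied service life below are assumptions introduced only to make the ratios explicit.

```python
production_mj = 6400.0      # energy to manufacture the computer plus 17-in. CRT
fossil_fuel_kg = 260.0
assumed_weight_kg = 24.0
print(fossil_fuel_kg / assumed_weight_kg)        # ~11, the fuel-to-weight ratio cited

annual_total_mj = 2600.0
production_share = 0.81
print(production_mj / (annual_total_mj * production_share))   # ~3 yr implied replacement cycle
print(annual_total_mj * (1.0 - production_share))             # ~490 MJ/yr of use-phase energy
```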
Objective straylight assessment of the human eye with a novel device
NASA Astrophysics Data System (ADS)
Schramm, Stefan; Schikowski, Patrick; Lerm, Elena; Kaeding, André; Klemm, Matthias; Haueisen, Jens; Baumgarten, Daniel
2016-03-01
Forward scattered light from the anterior segment of the human eye can be measured by Shack-Hartmann (SH) wavefront aberrometers with limited visual angle. We propose a novel Point Spread Function (PSF) reconstruction algorithm based on SH measurements with a novel measurement device to overcome these limitations. In our optical setup, we use a digital micromirror device as a variable field stop, which is conventionally a pinhole suppressing scatter and reflections. Images with 21 different stop diameters were captured, and from each image the average subaperture image intensity and the average intensity of the pupil were computed. The 21 intensities represent integral values of the PSF, which is consequently reconstructed by differentiation with respect to the visual angle. A generalized form of the Stiles-Holladay approximation is fitted to the PSF, resulting in a straylight parameter Log(IS). Additionally, the transmission loss of the eye is computed. As a proof of principle, a study on 13 healthy young volunteers was carried out. Scatter filters were positioned in front of the volunteer's eye during C-Quant and scatter measurements to generate straylight emulating scatter in the lens. The straylight parameter is compared to the C-Quant measurement parameter Log(ISC) and the scatter density of the filters (SDF) using a partial correlation. Log(IS) shows significant correlation with the SDF and Log(ISC). The correlation is more prominent when Log(IS) is combined with the transmission loss. Our novel measurement and reconstruction technique allows for objective straylight analysis of visual angles up to 4 degrees.
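A minimal sketch of the reconstruction idea: the intensities measured with increasing stop diameters are treated as integrals of the PSF over visual angle, differentiation recovers the PSF, and a Stiles-Holladay-type model s/theta^2 is fitted to obtain the straylight parameter. The angles, units, and the simple least-squares fit below are assumptions.

```python
import numpy as np

def reconstruct_psf(stop_angles_deg, integrated_intensity):
    """Differentiate the per-stop integrated intensities with respect to visual angle."""
    return np.gradient(np.asarray(integrated_intensity, dtype=float),
                       np.asarray(stop_angles_deg, dtype=float))

def straylight_parameter(angles_deg, psf):
    """Least-squares fit of psf ~ s / theta^2; returns Log(IS) = log10(s)."""
    theta = np.asarray(angles_deg, dtype=float)
    s = np.sum(psf * theta**-2) / np.sum(theta**-4)
    return np.log10(s)
```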
NASA Astrophysics Data System (ADS)
Cubillas, J. E.; Japitana, M.
2016-06-01
This study demonstrates the application of CIELAB, color intensity, and one-dimensional scalar constancy as features for image recognition and for classifying benthic habitats in an image, with the coastal areas of Hinatuan, Surigao Del Sur, Philippines as the study area. The study area is composed of four datasets, namely: (a) Blk66L005, (b) Blk66L021, (c) Blk66L024, and (d) Blk66L0114. SVM optimization was performed in Matlab® software with the help of the Parallel Computing Toolbox to speed up the SVM computation. The image used for collecting samples for the SVM procedure was Blk66L0114, in which a total of 134,516 sample objects of mangrove, possible coral existence with rocks, sand, sea, fish pens and sea grasses were collected and processed. The collected samples were then used as training sets for the supervised learning algorithm and for the creation of class definitions. The learned hyper-planes separating one class from another in the multi-dimensional feature space can be thought of as a super feature, which is then used in developing the C (classifier) rule set in eCognition® software. The classification results of the sampling site yielded an accuracy of 98.85%, which confirms the reliability of the remote sensing techniques and analysis applied to orthophotos, such as the CIELAB, color intensity and one-dimensional scalar constancy features, and the use of the SVM classification algorithm in classifying benthic habitats.
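A minimal sketch (with assumed inputs) of extracting the colour features named above for one segmented image object; scikit-image's rgb2lab is used here as a stand-in for whatever colour conversion was performed in Matlab, and the mean of the RGB channels serves as a simple colour-intensity proxy.

```python
import numpy as np
from skimage.color import rgb2lab

def object_color_features(rgb_patch):
    """rgb_patch: (H, W, 3) float image in [0, 1] for one segmented object."""
    lab = rgb2lab(rgb_patch)
    L, a, b = lab[..., 0].mean(), lab[..., 1].mean(), lab[..., 2].mean()
    intensity = rgb_patch.mean()
    return np.array([L, a, b, intensity])   # feature vector passed on to the SVM
```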
NASA Astrophysics Data System (ADS)
Seamon, E.; Gessler, P. E.; Flathers, E.
2015-12-01
The creation and use of large amounts of data in scientific investigations has become common practice. Data collection and analysis for large scientific computing efforts are not only increasing in volume and number; the methods and analysis procedures are also evolving toward greater complexity (Bell, 2009, Clarke, 2009, Maimon, 2010). In addition, the growth of diverse data-intensive scientific computing efforts (Soni, 2011, Turner, 2014, Wu, 2008) has demonstrated the value of supporting scientific data integration. Efforts to bridge the gap between these perspectives have been attempted, to varying degrees, with modular scientific computing analysis regimes implemented with a modest amount of success (Perez, 2009). This constellation of effects - 1) an increasing growth in the volume and amount of data, 2) a growing data-intensive science base that has challenging needs, and 3) disparate data organization and integration efforts - has created a critical gap. Namely, systems of scientific data organization and management typically do not effectively enable integrated data collaboration or data-intensive science-based communications. Our research attempts to address this gap by developing a modular technology framework for data science integration efforts, with climate variation as the focus. The intention is that this model, if successful, could be generalized to other application areas. Our research aim focused on the design and implementation of a modular, deployable technology architecture for data integration. Developed using aspects of R, interactive Python, SciDB, THREDDS, JavaScript, and varied data mining and machine learning techniques, the Modular Data Response Framework (MDRF) was implemented to explore case scenarios for bioclimatic variation as they relate to Pacific Northwest ecosystem regions. Our preliminary results, using historical NetCDF climate data for calibration purposes across the inland Pacific Northwest region (Abatzoglou, Brown, 2011), show clear ecosystem shifts over a ten-year period (2001-2011), based on multiple supervised classifier methods for bioclimatic indicators.
Design of system calibration for effective imaging
NASA Astrophysics Data System (ADS)
Varaprasad Babu, G.; Rao, K. M. M.
2006-12-01
A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning/amplification and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images produced by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. The setup also generates images at different densities to analyze the response of the detector for each port separately. The light intensity produced by the source needs to be calibrated for proper imaging by the highly sensitive CCD detector over the FOV. One approach is to design a complex integrating-sphere arrangement, which is costly for such applications. Another approach is to provide a suitable intensity feedback correction wherein the current through the lamp is controlled in a closed loop; this method is generally used in applications where the light source is a point source. The third method is to control the exposure time inversely to the lamp variations where the lamp intensity cannot be controlled; here, the light intensity at the start of each line is sampled and the correction factor is applied to the full line. The fourth method is to provide correction through a look-up table, where the responses of all the detectors are normalized through the digital transfer function. The fifth method is to use a light-line arrangement where light from a single source is carried through multiple fiber-optic cables arranged in a line; this is generally applicable and economical for small swath widths. In our application, a new method is used wherein an inverse multi-density filter is designed, which provides an effective calibration over the full swath even at low light intensities. The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes certain novel techniques for the design and implementation of system calibration for effective imaging, to produce better-quality data products, especially while handling high-resolution data.
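The look-up-table style normalization mentioned above (and the inverse-density idea it shares with the proposed filter) reduces to measuring the non-uniform response once and multiplying subsequent lines by its inverse; the sketch below uses hypothetical numbers only.

```python
# A minimal sketch, under assumed data, of look-up-table style normalization:
# the response of each detector element to the (non-uniform) light line is
# measured, an inverse correction is computed, and subsequent lines are
# multiplied by it so that a uniform scene yields a uniform output.
import numpy as np

n_pixels = 1024
measured_flat = 0.8 + 0.2 * np.cos(np.linspace(-1.0, 1.0, n_pixels))  # hypothetical lamp profile
inverse_gain = measured_flat.mean() / measured_flat                   # per-pixel correction LUT

raw_line = measured_flat * 0.5            # an image line of a uniform 50%-reflectance target
corrected = raw_line * inverse_gain       # flat response after correction
print(corrected.min(), corrected.max())   # both close to 0.5 across the full swath
```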
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation time, making raw data simulation both a data-intensive and a computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computation and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward covering the programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343
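The accumulation pattern that MapReduce exploits can be illustrated with a toy, single-process sketch; this is not the paper's Hadoop implementation, and the signal model and target list are placeholders.

```python
# A toy, single-process illustration of the MapReduce pattern used for the
# irregular accumulation in raw-data simulation: each map task computes the echo
# contribution of one point target and emits (sample index, value) pairs; the
# reduce step sums all values that share a key. The real system runs this on
# Hadoop/HDFS; the target list and signal model here are placeholders.
from collections import defaultdict
import numpy as np

n_samples = 256
targets = [(40, 1.0), (41, 0.5), (120, 2.0)]      # (range bin, reflectivity), hypothetical

def map_target(target):
    bin_idx, sigma = target
    # emit a few neighbouring samples to mimic the spread of a chirp response
    return [((bin_idx + k) % n_samples, sigma * np.sinc(k / 2.0)) for k in range(-3, 4)]

def reduce_pairs(pairs):
    acc = defaultdict(float)
    for key, value in pairs:
        acc[key] += value                          # accumulation is associative, so it parallelizes
    return acc

emitted = [pair for t in targets for pair in map_target(t)]
raw_data = reduce_pairs(emitted)
print(sorted(raw_data.items())[:5])
```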
Evaluation of a scattering correction method for high energy tomography
NASA Astrophysics Data System (ADS)
Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel
2018-01-01
One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity and thus an underestimation of absorption, producing artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation introduces a bias in quantitative tomographic reconstruction (for example, atomic number and density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range with large objects due to the higher Scatter to Primary Ratio (SPR). Additionally, high-energy incident photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector, and in the MeV range the contribution of photons produced by pair production and bremsstrahlung also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses continuously thickness-adapted kernels. Analytical parameterizations of the scatter kernels are derived as functions of material thickness to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to the object thickness, and it is applicable over a wide range of imaging conditions. Moreover, since no extra hardware is required, it is particularly advantageous in cases where experimental complexity must be avoided. The approach has previously been tested successfully in the energy range of 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes contributing to the scattered radiation. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.
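A simplified one-dimensional sketch of the scatter-kernel-superposition idea follows; the kernel parameterization is invented for illustration, whereas the paper derives its kernels from MCNP simulations.

```python
# A simplified 1-D sketch of scatter kernel superposition: the scatter in a
# projection is estimated by superposing kernels whose amplitude and width are
# parameterized by the local object thickness, and the estimate is subtracted
# from the measured projection. The parameterization here is purely illustrative.
import numpy as np

def scatter_kernel(x, thickness):
    # amplitude and width grow with thickness (assumed monotonic parameterization)
    amp = 0.0005 * thickness
    width = 5.0 + 0.5 * thickness
    return amp * np.exp(-0.5 * (x / width) ** 2)

detector = np.arange(-64, 64)
measured = np.ones(detector.size)                    # measured projection (placeholder)
thickness = 20.0 * np.exp(-(detector / 40.0) ** 2)   # estimated path length per pixel (cm)

scatter = np.zeros_like(measured)
for i, t in enumerate(thickness):
    scatter += measured[i] * scatter_kernel(detector - detector[i], t)

primary = measured - scatter                         # scatter-corrected projection
print(primary.min(), primary.max())
```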
2008-02-27
between the PHY layer and, for example, a host PC computer. The PC wants to generate and receive a sequence of data packets. The PC may also want to send... the testbed is quite similar. Given the intense computational requirements of SVD and other matrix-mode operations needed to support eigen spreading a... platform for real-time operation. This task is probably the major challenge in the development of the testbed. All compute-intensive tasks will be
NASA Astrophysics Data System (ADS)
Parker, J. D.; Harada, M.; Hattori, K.; Iwaki, S.; Kabuki, S.; Kishimoto, Y.; Kubo, H.; Kurosawa, S.; Matsuoka, Y.; Miuchi, K.; Mizumoto, T.; Nishimura, H.; Oku, T.; Sawano, T.; Shinohara, T.; Suzuki, J.-I.; Takada, A.; Tanimori, T.; Ueno, K.; Ikeno, M.; Tanaka, M.; Uchida, T.
2014-04-01
The realization of high-intensity, pulsed spallation neutron sources such as J-PARC in Japan and SNS in the US has brought time-of-flight (TOF) based neutron techniques to the fore and spurred the development of new detector technologies. When combined with high-resolution imaging, TOF-based methods become powerful tools for direct imaging of material properties, including crystal structure/internal strain, isotopic/temperature distributions, and internal and external magnetic fields. To carry out such measurements in the high intensities and high gamma backgrounds found at spallation sources, we have developed a new time-resolved neutron imaging detector employing a micro-pattern gaseous detector known as the micro-pixel chamber (μPIC) coupled with a field-programmable-gate-array-based data acquisition system. The detector combines 100 μm-level (σ) spatial and sub-μs time resolutions with a low gamma sensitivity of less than 10^-12 and a rate capability on the order of Mcps (mega-counts per second). Here, we demonstrate the application of our detector to TOF-based techniques with examples of Bragg-edge transmission and neutron resonance transmission imaging (with computed tomography) carried out at J-PARC. We also consider the direct imaging of magnetic fields with our detector using polarized neutrons.
High energy neutron transmission analysis of dry cask storage
NASA Astrophysics Data System (ADS)
Greulich, Christopher; Hughes, Christopher; Gao, Yuan; Enqvist, Andreas; Baciak, James
2017-12-01
Since the U.S. currently approves of storing used nuclear fuel only in pools or dry casks, the demand for dry cask storage is on the rise due to the continuous operation of existing nuclear plants, which are reaching or have reached the capacity of their used fuel pools. With the rising demand comes additional pressure to ensure the integrity of dry cask systems. Visual inspection is costly and man-power intensive, so alternative nondestructive testing techniques are desired to ensure the continued safe and effective storage of fuel. One such approach being investigated by the University of Florida is neutron-based computed tomography. Simulations in MCNP are performed in which D-T energy neutrons are transmitted through the dry cask and measured on the opposite side. If the transmitted signal is clear enough, the interior of the cask can be reconstructed from the measured changes in neutron signal intensity using standard mathematical techniques developed for medical imaging. Preliminary efforts show a correlation between energy and number of scatters (which is an indication of retention of position information). Work is ongoing to quantify whether the correlation is strong enough that an energy discriminator may be used as a filter in future image reconstruction. The calculated transmission probability suggests that an image could be reconstructed with a week of scanning.
Conductance of single microRNAs chains related to the autism spectrum disorder
NASA Astrophysics Data System (ADS)
Oliveira, J. I. N.; Albuquerque, E. L.; Fulco, U. L.; Mauriz, P. W.; Sarmento, R. G.; Caetano, E. W. S.; Freire, V. N.
2014-09-01
The charge transport properties of single-stranded microRNA (miRNA) chains associated with autism disorders were investigated. The computations were performed within a tight-binding model, together with a transfer matrix technique, with ionization energies and hopping parameters obtained by quantum chemistry methods. Current-voltage (I×V) curves of twelve miRNA chains related to the autism spectrum disorders were calculated and analysed. We obtained both semiconducting and insulating behavior, and a relationship between the current intensity and the autism-related miRNA base sequences, suggesting that an electronic biosensor could be developed to distinguish different profiles of autism disorders.
[Rescue cryotherapy for prostate cancer after radiotherapy].
García, Erique Lledó; Amo, Felipe Herranz; San Segundo, Carmen González; Fagundo, Eva Paños; Escudero, Roberto Molina; Alonso, Adrian Husillos; Piniés, Gabriel Ogaya; Rascón, Jose Jara; Fernández, Carlos Hernández
2012-01-01
Radical radiotherapy constitutes a useful therapeutic option for localized prostate cancer; almost one third of prostate cancer patients choose this alternative to treat the disease. Despite modifications in the technique, such as intensity modulation, 3D conformal radiotherapy or computer-assisted brachytherapy, a significant percentage of these patients will show an increase in PSA values after radiation. Patients with local relapse, no distant disease and a PSA of less than 10 ng/ml are candidates for salvage therapy. Cryotherapy has become a curative treatment option in this group of patients. Recent technological as well as surgical advances in salvage cryotherapy have dramatically reduced complications and progressively increased interest in this alternative.
Emergent quantum mechanics without wavefunctions
NASA Astrophysics Data System (ADS)
Mesa Pascasio, J.; Fussy, S.; Schwabl, H.; Grössing, G.
2016-03-01
We present our model of an Emergent Quantum Mechanics which can be characterized by “realism without pre-determination”. This is illustrated by our analytic description and corresponding computer simulations of Bohmian-like “surreal” trajectories, which are obtained classically, i.e. without the use of any quantum mechanical tool such as wavefunctions. However, these trajectories do not necessarily represent ontological paths of particles but rather mappings of the probability density flux in a hydrodynamical sense. Modelling emergent quantum mechanics in a high-low intensity double slit scenario gives rise to the “quantum sweeper effect” with a characteristic intensity pattern. This phenomenon should be experimentally testable via weak measurement techniques.
Serial multiplier arrays for parallel computation
NASA Technical Reports Server (NTRS)
Winters, Kel
1990-01-01
Arrays of systolic serial-parallel multiplier elements are proposed as an alternative to conventional SIMD mesh serial adder arrays for applications that are multiplication intensive and require few stored operands. The design and operation of a number of multiplier and array configurations featuring locality of connection, modularity, and regularity of structure are discussed. A design methodology combining top-down and bottom-up techniques is described to facilitate development of custom high-performance CMOS multiplier element arrays as well as rapid synthesis of simulation models and semicustom prototype CMOS components. Finally, a differential version of NORA dynamic circuits requiring a single-phase uncomplemented clock signal is introduced for this application.
NASA Astrophysics Data System (ADS)
Wren, Stephen P.; Piletsky, Sergey A.; Karim, Kal; Gascoine, Paul; Lacey, Richard; Sun, Tong; Grattan, Kenneth T. V.
2014-05-01
Previously, we have developed chemical sensors using fibre optic-based techniques for the detection of Cocaine, utilising molecularly imprinted polymers (MIPs) containing fluorescein moieties as the signalling groups. Here, we report the computational design of a fluorophore which was incorporated into a MIP for the generation of a novel sensor that offers improved sensitivity for Cocaine with a detection range of 1-100μM. High selectivity for Cocaine over a suite of known Cocaine interferants (25μM) was also demonstrated by measuring changes in the intensity of fluorescence signals received from the sensor.
Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.
Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M
2015-03-01
Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons and γ-rays. A computer code named "QMNB" was developed in the "MATLAB" programming language to perform the required calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.
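The transmission calculation behind such a filter design reduces to exponential attenuation through the filter stack; the sketch below uses an invented cross-section with an artificial transmission window rather than evaluated nuclear data, and is unrelated to the authors' QMNB code.

```python
# A minimal sketch of the transmitted fraction through a filter stack,
# T(E) = exp(-N * sigma(E) * t). Number density, thickness and cross section
# are placeholders chosen only to produce a visible transmission window.
import numpy as np

energies_keV = np.linspace(1.0, 150.0, 300)

def sigma_barns(E):
    # hypothetical cross section with a deep minimum (transmission window) near 24 keV
    return 10.0 - 9.5 / (1.0 + ((E - 24.0) / 2.0) ** 2)

N_atoms_per_cm3 = 6.0e22          # assumed filter number density
thickness_cm = 20.0
transmission = np.exp(-N_atoms_per_cm3 * sigma_barns(energies_keV) * 1e-24 * thickness_cm)
print("peak transmission %.3f at %.1f keV" %
      (transmission.max(), energies_keV[transmission.argmax()]))
```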
Aircraft symmetric flight optimization. [gradient techniques for supersonic aircraft control
NASA Technical Reports Server (NTRS)
Falco, M.; Kelley, H. J.
1973-01-01
Review of the development of gradient techniques and their application to aircraft optimal performance computations in the vertical plane of flight. Results obtained using the method of gradients are presented for attitude- and throttle-control programs which extremize the fuel, range, and time performance indices subject to various trajectory and control constraints, including boundedness of engine throttle control. A penalty function treatment of state inequality constraints which generally appear in aircraft performance problems is outlined. Numerical results for maximum-range, minimum-fuel, and minimum-time climb paths for a hypothetical supersonic turbojet interceptor are presented and discussed. In addition, minimum-fuel climb paths subject to various levels of ground overpressure intensity constraint are indicated for a representative supersonic transport. A variant of the Gel'fand-Tsetlin 'method of ravines' is reviewed, and two possibilities for further development of continuous gradient processes are cited - namely, a projection version of conjugate gradients and a curvilinear search.
The Fringe-Imaging Skin Friction Technique PC Application User's Manual
NASA Technical Reports Server (NTRS)
Zilliac, Gregory G.
1999-01-01
A personal computer application (CXWIN4G) has been written which greatly simplifies the task of extracting skin friction measurements from interferograms of oil flows on the surface of wind tunnel models. Images are first calibrated, using a novel approach to one-camera photogrammetry, to obtain accurate spatial information on surfaces with curvature. As part of the image calibration process, an auxiliary file containing the wind tunnel model geometry is used in conjunction with a two-dimensional direct linear transformation to relate the image plane to the physical (model) coordinates. The application then applies a nonlinear regression model to accurately determine the fringe spacing from interferometric intensity records as required by the Fringe Imaging Skin Friction (FISF) technique. The skin friction is found through application of a simple expression that makes use of lubrication theory to relate fringe spacing to skin friction.
Automated detection of microcalcification clusters in mammograms
NASA Astrophysics Data System (ADS)
Karale, Vikrant A.; Mukhopadhyay, Sudipta; Singh, Tulika; Khandelwal, Niranjan; Sadhu, Anup
2017-03-01
Mammography is the most efficient modality for detection of breast cancer at an early stage. Microcalcifications are tiny bright spots in mammograms and can often be missed by the radiologist during diagnosis. The presence of microcalcification clusters in mammograms can act as an early sign of breast cancer. This paper presents a completely automated computer-aided detection (CAD) system for detection of microcalcification clusters in mammograms. Unsharp masking is used as a preprocessing step, enhancing the contrast between microcalcifications and the background. The preprocessed image is thresholded and various shape- and intensity-based features are extracted. A support vector machine (SVM) classifier is used to reduce the false positives while preserving the true microcalcification clusters. The proposed technique is applied to two different databases, i.e., DDSM and a private database, and shows good sensitivity with moderate false positives (FPs) per image on both.
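The preprocessing and candidate-detection stage can be sketched as follows; the test image and parameters are placeholders, and the SVM false-positive reduction step is omitted.

```python
# A minimal sketch of unsharp masking followed by thresholding to obtain
# candidate microcalcification objects; parameters are illustrative only.
import numpy as np
from scipy import ndimage

def detect_candidates(mammogram, blur_sigma=8.0, amount=1.5, threshold=2.0):
    blurred = ndimage.gaussian_filter(mammogram, sigma=blur_sigma)
    enhanced = mammogram + amount * (mammogram - blurred)     # unsharp masking
    mask = enhanced > (enhanced.mean() + threshold * enhanced.std())
    labels, n_objects = ndimage.label(mask)                   # connected bright spots
    return labels, n_objects

image = np.random.rand(256, 256)                              # placeholder image
image[100:103, 100:103] += 2.0                                # a synthetic bright cluster
print("candidate objects:", detect_candidates(image)[1])
```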
Fluorescence optical imaging in anticancer drug delivery.
Etrych, Tomáš; Lucas, Henrike; Janoušková, Olga; Chytil, Petr; Mueller, Thomas; Mäder, Karsten
2016-03-28
In the past several decades, nanosized drug delivery systems with various targeting functions and controlled drug release capabilities inside targeted tissues or cells have been intensively studied. Understanding their pharmacokinetic properties is crucial for the successful transition of this research into clinical practice. Among others, fluorescence imaging has become one of the most commonly used imaging tools in pre-clinical research. The development of increasing numbers of suitable fluorescent dyes excitable in the visible to near-infrared wavelengths of the spectrum has significantly expanded the applicability of fluorescence imaging. This paper focuses on the potential applications and limitations of non-invasive imaging techniques in the field of drug delivery, especially in anticancer therapy. Fluorescent imaging at both the cellular and systemic levels is discussed in detail. Additionally, we explore the possibility for simultaneous treatment and imaging using theranostics and combinations of different imaging techniques, e.g., fluorescence imaging with computed tomography. Copyright © 2016 Elsevier B.V. All rights reserved.
Neurodiagnostic techniques in neonatal critical care.
Chang, Taeun; du Plessis, Adre
2012-04-01
This article reviews recent advances in the neurodiagnostic tools available to clinicians practicing in neonatal critical care. The advent of induced mild hypothermia for acute neonatal hypoxic-ischemic encephalopathy in 2005 has been responsible for renewed urgency in the development of precise and reliable neonatal neurodiagnostic techniques. Traditional evaluations of bedside head ultrasounds, head computed tomography scans, and routine electroencephalograms (EEGs) have been upgraded in most tertiary pediatric centers to incorporate protocols for MRI, continuous EEG monitoring with remote bedside access, amplitude-integrated EEG, and near-infrared spectroscopy. Meanwhile, recent studies supporting the association between placental pathology and neonatal brain injury highlight the need for closer examination of the placenta in the neurodiagnostic evaluation of the acutely ill newborn. As the pursuit of more effective neuroprotection moves into the "hypothermia plus" era, the identification, evaluation, and treatment of the neurologically affected newborn in the neonatal intensive care unit has increasing significance.
The set of chemical substances in commerce that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, World Meteorological Organization, or Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimate radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed values of RE in this study. Deviations of
Gregoretti, Francesco; Belcastro, Vincenzo; di Bernardo, Diego; Oliva, Gennaro
2010-04-21
The reverse engineering of gene regulatory networks using gene expression profile data has become crucial to gaining novel biological knowledge. Large amounts of data that need to be analyzed are currently being produced due to advances in microarray technologies. Using current reverse engineering algorithms to analyze large data sets can be very computationally intensive. These emerging computational requirements can be met using parallel computing techniques. It has been shown that the Network Identification by multiple Regression (NIR) algorithm performs better than other ready-to-use reverse engineering software. However, it cannot be used with large networks with thousands of nodes--as is the case in biological networks--due to its high time and space complexity. In this work we overcome this limitation by designing and developing a parallel version of the NIR algorithm. The new implementation of the algorithm reaches very good accuracy even for large gene networks, improving our understanding of the gene regulatory networks that are crucial for a wide range of biomedical applications.
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amounts of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers; a MapReduce-based algorithm framework is developed to support parallel processing of geoscience data; and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.
1996-01-01
Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
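A baseline block-based vector quantizer of the kind this thesis starts from can be sketched with k-means; the thesis' distributed blocks, weighted distortion measures and neural-network designs are not reproduced here.

```python
# A minimal sketch of block-based vector quantization for image compression:
# the image is cut into small blocks (vectors), a codebook is learned with
# k-means, and each block is replaced by the index of its nearest codeword.
import numpy as np
from sklearn.cluster import KMeans

def encode_decode(image, block=4, codebook_size=64):
    h, w = image.shape
    blocks = (image[:h - h % block, :w - w % block]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block))
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(blocks)
    indices = km.predict(blocks)                       # transmitted symbols
    decoded = km.cluster_centers_[indices]             # reconstruction at the decoder
    bits_per_pixel = np.log2(codebook_size) / block**2
    return decoded.reshape(h // block, w // block, block, block), bits_per_pixel

img = np.random.rand(64, 64)
_, bpp = encode_decode(img)
print("rate: %.3f bits/pixel" % bpp)
```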
An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model
NASA Astrophysics Data System (ADS)
McEwan, Alistair; van Schaik, André
2003-12-01
The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.
Integrating Intelligent Systems Domain Knowledge Into the Earth Science Curricula
NASA Astrophysics Data System (ADS)
Güereque, M.; Pennington, D. D.; Pierce, S. A.
2017-12-01
High-volume heterogeneous datasets are becoming ubiquitous; over the last ten years they have moved to center stage, transcending the boundaries of computationally intensive disciplines and becoming a fundamental part of every science discipline. Despite the fact that large datasets are now pervasive across industries and academic disciplines, the corresponding array of intelligent-systems skills is generally absent from earth science programs. This has left the bulk of the student population without access to curricula that systematically teach appropriate intelligent-systems skills, creating a void for skill sets that should be universal given their need and marketability. While some guidance regarding appropriate computational thinking and pedagogy is appearing, there are few examples where these have been specifically designed and tested within the earth science domain. Furthermore, best practices from learning science have not yet been widely tested for developing intelligent systems-thinking skills. This research developed and tested evidence-based computational skill modules that target this deficit, with the intention of informing the earth science community as it continues to incorporate intelligent systems techniques and reasoning into its research and classrooms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
Frequency and associated risk factors for neck pain among software engineers in Karachi, Pakistan.
Rasim Ul Hasanat, Mohammad; Ali, Syed Shahzad; Rasheed, Abdur; Khan, Muhammad
2017-07-01
To determine the frequency of neck pain and its association with risk factors among software engineers. This descriptive, cross-sectional study was conducted at the Dow University of Health Sciences, Karachi, from February to March 2016, and comprised software engineers from 19 different locations. Non-probability purposive sampling technique was used to select individuals spending at least 6 hours in front of computer screens every day and having a work experience of at least 6 months. Data were collected using a self-administrable questionnaire. SPSS 21 was used for data analysis. Of the 185 participants, 49(26.5%) had neck pain at the time of data-gathering, while 136(73.5%) reported no pain. However, 119(64.32%) participants had a previous history of neck pain. Other factors like smoking, physical inactivity, history of any muscular pain and neck pain, uncomfortable workstation, and work-related mental stress and insufficient sleep at night, were found to be significantly associated with current neck pain (p<0.05 each). Intensive computer users are likely to experience at least one episode of computer-associated neck pain.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.
Complex Spiral Structure in the HD 100546 Transitional Disk as Revealed by GPI and MagAO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follette, Katherine B.; Macintosh, Bruce; Mullen, Wyatt
We present optical and near-infrared high-contrast images of the transitional disk HD 100546 taken with the Magellan Adaptive Optics system (MagAO) and the Gemini Planet Imager (GPI). GPI data include both polarized intensity and total intensity imagery, and MagAO data are taken in Simultaneous Differential Imaging mode at Hα. The new GPI H-band total intensity data represent a significant enhancement in sensitivity and field rotation compared to previous data sets and enable a detailed exploration of substructure in the disk. The data are processed with a variety of differential imaging techniques (polarized, angular, reference, and simultaneous differential imaging) in an attempt to identify the disk structures that are most consistent across wavelengths, processing techniques, and algorithmic parameters. The inner disk cavity at 15 au is clearly resolved in multiple data sets, as are a variety of spiral features. While the cavity and spiral structures are identified at levels significantly distinct from the neighboring regions of the disk under several algorithms and with a range of algorithmic parameters, emission at the location of HD 100546 “c” varies from point-like under aggressive algorithmic parameters to a smooth continuous structure with conservative parameters, and is consistent with disk emission. Features identified in the HD 100546 disk bear qualitative similarity to computational models of a moderately inclined two-armed spiral disk, where projection effects and wrapping of the spiral arms around the star result in a number of truncated spiral features in forward-modeled images.
Electron transport theory in magnetic nanostructures
NASA Astrophysics Data System (ADS)
Choy, Tat-Sang
Magnetic nanostructures have become a new trend because of their application in magnetic sensors, magnetic memories, and the magnetic read heads of hard disk drives. Although a variety of nanostructures have been realized in experiments in recent years by innovative sample growth techniques, the theoretical study of these devices remains a challenge. On one hand, atomic scale modeling is often required for studying magnetic nanostructures; on the other, these structures often have a dimension on the order of one micrometer, which makes the calculation numerically intensive. In this work, we have studied the electron transport theory in magnetic nanostructures, with special attention to the giant magnetoresistance (GMR) structure. We have developed a model that includes the details of the band structure and disorder, both of which are important in obtaining the conductivity. We have also developed an efficient algorithm to compute the conductivity in magnetic nanostructures. The model and the algorithm are general and can be applied to complicated structures. We have applied the theory to current-perpendicular-to-plane GMR structures and the results agree with experiments. Finally, we have searched for the atomic configuration with the highest GMR using the simulated annealing algorithm. This method is computationally intensive because we have to compute the GMR for 10^3 to 10^4 configurations. However, it is still very efficient because the number of steps it takes to find the maximum is much smaller than the number of all possible GMR structures. We found that ultra-thin NiCu superlattices have surprisingly large GMR even at the moderate disorder levels seen in experiments. This finding may be useful in improving GMR technology.
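The configuration search can be illustrated with a generic simulated-annealing sketch; the objective function below is a stand-in, since each real evaluation requires the full transport calculation described above.

```python
# A generic simulated-annealing sketch of a configuration search: candidate
# multilayer stackings (binary strings choosing Ni or Cu per atomic layer) are
# perturbed, and downhill moves are accepted with a temperature-dependent
# probability. The gmr() evaluator is a placeholder objective, not the real one.
import numpy as np

rng = np.random.default_rng(1)

def gmr(config):
    # placeholder objective rewarding thin alternating layers
    return float(np.sum(config[:-1] != config[1:])) / (config.size - 1)

config = rng.integers(0, 2, size=40)          # 0 = Cu layer, 1 = Ni layer
best, best_val = config.copy(), gmr(config)
T = 1.0
for step in range(5000):
    trial = config.copy()
    trial[rng.integers(config.size)] ^= 1     # flip one layer
    delta = gmr(trial) - gmr(config)
    if delta > 0 or rng.random() < np.exp(delta / T):
        config = trial
    if gmr(config) > best_val:
        best, best_val = config.copy(), gmr(config)
    T *= 0.999                                # cooling schedule
print("best objective:", best_val)
```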
Luminescence properties of Eu3+ doped CdF2 single crystals
NASA Astrophysics Data System (ADS)
Boubekri, H.; Diaf, M.; Guerbous, L.; Jouart, J. P.
2018-04-01
This paper reports the photoluminescence properties of Eu3+ doped CdF2 single crystals. The crystals were pulled using the Bridgman technique in a vacuum furnace under a fluoride atmosphere. Absorption, excitation and emission spectra of the crystal doped with three Eu3+ concentrations (0.02%, 0.1% and 0.6% mol.) were recorded at room temperature. The emission spectra exhibit strong yellow and red emissions in the spectral range 550-720 nm, which are assigned to 5D0 → 7FJ (J = 1, 2, 4) transitions, and a weak infrared emission around 816 nm corresponding to the 5D0 → 7F6 transition. The magnetic dipole emission (5D0 → 7F1) is the most intense for each Eu3+ concentration. The Judd-Ofelt intensity parameters Ω2, Ω4, Ω6 for 4f-4f transitions of Eu3+ ions were computed from the emission spectra using the 5D0 → 7FJ (J = 1, 2, 4, 6) transitions. Via these phenomenological intensity parameters, the spontaneous emission probabilities, branching ratios, radiative lifetimes, quantum efficiencies and emission cross-sections for the main Eu3+ emitting levels are evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yi; Zhang Jingtao; Xu Zhizhan
2010-07-15
The exact algebraic solution recently obtained by Guo, Wu, and Van Woerkom (Phys. Rev. A 73 (2006) 023419) made possible accurate calculations of quasienergies of a driven two-level atom with an arbitrary original energy spacing and laser intensity. Due to the complication of the analytic solutions that involves an infinite number of infinite determinants, many mathematical difficulties must be overcome to obtain precise values of quasienergies. In this paper, with a further developed algebraic method, we show how to solve the computational problem completely and results are presented in a data table. With this table, one can easily obtain all quasienergies of a driven two-level atom with an arbitrary original energy spacing and arbitrary intensity and frequency of the driving laser. The numerical solution technique developed here can be applied to the calculation of Freeman resonances in photoelectron energy spectra. As an example for applications, we show how to use the data table to calculate the peak laser intensity at which a Freeman resonance occurs in the transition between the ground Xe 5p P3/2 state and the Rydberg state Xe 8p P3/2.
Automatic registration of optical imagery with 3d lidar data using local combined mutual information
NASA Astrophysics Data System (ADS)
Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.
2013-10-01
Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor images has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently co-registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information have been utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D joint probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation in higher dimensions and the computational cost. The reliability of registration is thereby improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results obtained are discussed.
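The underlying similarity measure can be sketched as mutual information computed from a joint histogram; the combined, multivariable MI of the paper extends this to a three-dimensional histogram, and the arrays below are placeholders.

```python
# A minimal sketch of mutual information between an optical image patch and a
# rendered LiDAR channel (DSM or intensity), estimated from a joint histogram.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                        # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

optical = np.random.rand(128, 128)
lidar_dsm = optical + 0.3 * np.random.rand(128, 128)    # partially dependent channel
print("MI =", mutual_information(optical, lidar_dsm))
```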
Experiment in Onboard Synthetic Aperture Radar Data Processing
NASA Technical Reports Server (NTRS)
Holland, Matthew
2011-01-01
Single event upsets (SEUs) are a threat to any computing system running on hardware that has not been physically radiation hardened. In addition to mandating the use of performance-limited, hardened heritage equipment, prior techniques for dealing with the SEU problem often involved hardware-based error detection and correction (EDAC). With limited computing resources, software-based EDAC, or any more elaborate recovery methods, were often not feasible. Synthetic aperture radars (SARs), when operated in the space environment, are interesting due to their relevance to NASA's objectives, but problematic in the sense of producing prodigious amounts of raw data. Prior implementations of the SAR data processing algorithm have been too slow, too computationally intensive, and require too much application memory for onboard execution to be a realistic option when using the type of heritage processing technology described above. This standard C-language implementation of SAR data processing is distributed over many cores of a Tilera Multicore Processor, and employs novel Radiation Hardening by Software (RHBS) techniques designed to protect the component processes (one per core) and their shared application memory from the sort of SEUs expected in the space environment. The source code includes calls to Tilera APIs, and a specialized Tilera compiler is required to produce a Tilera executable. The compiled application reads input data describing the position and orientation of a radar platform, as well as its radar-burst data, over time and writes out processed data in a form that is useful for analysis of the radar observations.
NASA Astrophysics Data System (ADS)
Groch, A.; Seitel, A.; Hempel, S.; Speidel, S.; Engelbrecht, R.; Penne, J.; Höller, K.; Röhl, S.; Yung, K.; Bodenstedt, S.; Pflaum, F.; dos Santos, T. R.; Mersmann, S.; Meinzer, H.-P.; Hornegger, J.; Maier-Hein, L.
2011-03-01
One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of pre-operative planning images with patient's anatomy. One popular approach for achieving this involves intraoperative 3D reconstruction of the target organ's surface with methods based on multiple view geometry. The latter, however, require robust and fast algorithms for establishing correspondences between multiple images of the same scene. Recently, the first endoscope based on Time-of-Flight (ToF) camera technique was introduced. It generates dense range images with high update rates by continuously measuring the run-time of intensity modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four different cameras: a novel Time-of-Flight (ToF) endoscope, a standard ToF camera, a stereoscope, and a High Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared to corresponding ground truth shapes extracted from computed tomography (CT) data using a set of local and global distance metrics. The evaluation suggests that the ToF technique has high potential as means for intraoperative endoscopic surface registration.
Aortic annulus sizing using watershed transform and morphological approach for CT images
NASA Astrophysics Data System (ADS)
Mohammad, Norhasmira; Omar, Zaid; Sahrim, Mus'ab
2018-02-01
Aortic valve disease occurs due to calcification deposits on the leaflets within the human heart. It is progressive over time and can affect the mechanism of the heart valve. To avoid the risk of surgery for vulnerable patients, especially senior citizens, a new method has been introduced: Transcatheter Aortic Valve Implantation (TAVI), which places a prosthetic valve, delivered by catheter, within the patient's native valve. This entails a procedure of aortic annulus sizing, which currently requires manual measurement by experts of images acquired by Computed Tomography (CT). This step requires intensive effort, and human error may still lead to false measurements. In this research, image processing techniques are applied to cardiac CT images to achieve an automated and accurate measurement of the aortic annulus. The image is first put through pre-processing for noise filtration and image enhancement. Then, a marker image is computed using a combination of opening and closing operations, where the foreground is marked as a feature while the background is set to zero. The marker image is used to control the watershed transformation and to prevent oversegmentation. The watershed transform has the advantage of fast computation, and the oversegmentation problems that usually appear with it are solved by the introduction of the marker image. Finally, the measurement of the aortic annulus is obtained from the image data through morphological operations. Results affirm the approach's ability to achieve accurate annulus measurements compared to conventional techniques.
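The marker-controlled watershed step can be sketched with scikit-image as follows; the toy image and thresholds are placeholders, and the annulus measurement itself would follow from the resulting label mask.

```python
# A minimal sketch of marker-controlled watershed segmentation: a marker image
# built from morphological opening and closing labels the object interior and
# the background explicitly, which keeps the watershed from over-segmenting.
import numpy as np
from skimage import morphology, segmentation, filters

image = np.zeros((128, 128))
image[40:90, 40:90] = 1.0                                  # toy bright region standing in for the aorta
image += 0.1 * np.random.rand(*image.shape)

smoothed = filters.gaussian(image, sigma=2)
gradient = filters.sobel(smoothed)                         # watershed relief

foreground = morphology.binary_opening(smoothed > 0.5, morphology.disk(3))
background = morphology.binary_closing(smoothed < 0.2, morphology.disk(3))
markers = np.zeros(image.shape, dtype=int)
markers[background] = 1
markers[foreground] = 2

labels = segmentation.watershed(gradient, markers)
area_pixels = int(np.sum(labels == 2))
print("segmented region area (pixels):", area_pixels)
```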
Differences in muscle load between computer and non-computer work among office workers.
Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A
2009-12-01
Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
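The multiple-chains approach mentioned above can be sketched in a few lines; the toy target below is a one-dimensional Gaussian rather than a genomic-selection posterior.

```python
# A minimal sketch of "multiple chains" parallel MCMC: several independent
# Metropolis chains sample the same posterior on separate processes and their
# draws are pooled afterwards. The log-posterior here is a toy standard normal.
import numpy as np
from multiprocessing import Pool

def log_post(theta):
    return -0.5 * theta ** 2                      # standard normal log-density (toy target)

def run_chain(seed, n_iter=5000, step=0.8):
    rng = np.random.default_rng(seed)
    theta, lp = 0.0, log_post(0.0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

if __name__ == "__main__":
    with Pool(4) as pool:
        chains = pool.map(run_chain, [1, 2, 3, 4])
    pooled = np.concatenate([c[1000:] for c in chains])   # discard burn-in
    print("posterior mean %.3f, sd %.3f" % (pooled.mean(), pooled.std()))
```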
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jun
Our group has been working with ANL collaborators on the topic of bridging the gap between parallel file systems and local file systems during the course of this project period. We visited Argonne National Lab (Dr. Robert Ross's group) for one week in the summer of 2007, where we reviewed our current project progress and planned the activities for the coming years 2008-09. The PI met Dr. Robert Ross several times, such as at the HEC FSIO workshop 08, SC08, and SC10. We explored the opportunities to develop a production system by leveraging our current prototype (SOGP+PVFS) toward a new PVFS version. We delivered the SOGP+PVFS codes to the ANL PVFS2 group in 2008. We also discussed exploring a potential project on developing new parallel programming models and runtime systems for data-intensive scalable computing (DISC). The methodology is to evolve MPI towards DISC by incorporating some functions of the Google MapReduce parallel programming model. More recently, we have together been exploring how to leverage existing work to perform (1) coordination/aggregation of local I/O operations prior to movement over the WAN, (2) efficient bulk data movement over the WAN, and (3) latency hiding techniques for latency-intensive operations. Since 2009, we have been applying Hadoop/MapReduce to some HEC applications with LANL scientists John Bent and Salman Habib. Another ongoing effort is to improve checkpoint performance at the I/O forwarding layer for the Roadrunner supercomputer with James Nunez and Gary Grider at LANL. Two senior undergraduates from our research group did summer internships on high-performance file and storage system projects at LANL for three consecutive years beginning in 2008. Both of them are now pursuing Ph.D. degrees in our group, will be in their fourth year of the PhD program in Fall 2011, and will go to LANL to advance the two above-mentioned projects during this winter break. Since 2009, we have been collaborating with several computer scientists (Gary Grider, John Bent, Parks Fields, James Nunez, Hsing-Bung Chen, etc.) from HPC5 and with James Ahrens from the Advanced Computing Laboratory at Los Alamos National Laboratory. We hold a weekly conference and/or video meeting to advance work on two fronts: the hardware/software infrastructure for building large-scale data-intensive clusters, and research publications. Our group members assist in constructing several onsite LANL data-intensive clusters. The two parties have been developing software codes and research papers together using resources from both sides.
Probability mapping of scarred myocardium using texture and intensity features in CMR images
2013-01-01
Background The myocardium exhibits a heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, a Late Gadolinium (LG) contrast agent enhances the intensity of the scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using texture and intensity features to describe the heterogeneous nature of the scarred myocardium in CMR images after MI. Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes' rule. Any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features. One is based on the mean pixel intensity and the other on the underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean pixel intensity and the underlying texture information are presented. We hypothesize that the probability mapping of the myocardium offers an alternative visualization, possibly showing details of physiological significance that are difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define cardiac segments that help identify areas of the myocardium with diagnostic importance (such as core and border areas in scarred myocardium). PMID:24053280
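To make the Bayes-rule mapping concrete, the following hedged sketch turns a single intensity feature into a per-pixel scar probability. It assumes Gaussian class likelihoods fitted to hand-picked training pixels and an equal prior; the synthetic values are placeholders and none of it reproduces the paper's feature definitions.

```python
# Intensity-based probability map via Bayes' rule: scar and non-scar
# classes are modeled as Gaussians fitted to training pixels, and each
# myocardial pixel receives P(scar | intensity).
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def scar_probability_map(image, scar_pixels, normal_pixels, prior_scar=0.5):
    mu_s, sd_s = scar_pixels.mean(), scar_pixels.std() + 1e-6
    mu_n, sd_n = normal_pixels.mean(), normal_pixels.std() + 1e-6
    like_s = gaussian_pdf(image, mu_s, sd_s) * prior_scar
    like_n = gaussian_pdf(image, mu_n, sd_n) * (1.0 - prior_scar)
    return like_s / (like_s + like_n + 1e-12)

# Example with synthetic data standing in for an LGE-CMR slice.
rng = np.random.default_rng(0)
img = rng.normal(100, 20, (64, 64))
prob = scar_probability_map(img, rng.normal(160, 15, 200), rng.normal(90, 10, 200))
print(prob.min(), prob.max())
```

A texture feature would enter the same probability function simply by replacing the per-pixel intensity with a per-pixel texture measure computed over a local neighbourhood.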
Conceptual Design Oriented Wing Structural Analysis and Optimization
NASA Technical Reports Server (NTRS)
Lau, May Yuen
1996-01-01
Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal may involve tradeoffs between maximum structural integrity, minimum aerodynamic drag, and maximum stability and control, objectives that are often pursued separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as the number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new, innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite element based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.
An enhanced mobile-healthcare emergency system based on extended chaotic maps.
Lee, Cheng-Chi; Hsu, Che-Wei; Lai, Yan-Ming; Vasilakos, Athanasios
2013-10-01
Mobile Healthcare (m-Healthcare) systems, namely smartphone applications of pervasive computing that utilize wireless body sensor networks (BSNs), have recently been proposed to provide smartphone users with health monitoring services and have received great attention. An m-Healthcare system with flaws, however, may leak the smartphone user's personal information and cause problems with security, privacy preservation, or user anonymity. In 2012, Lu et al. proposed a secure and privacy-preserving opportunistic computing (SPOC) framework for mobile-Healthcare emergency. The SPOC framework can opportunistically gather resources on the smartphone, such as computing power and energy, to process the computing-intensive personal health information (PHI) in case of an m-Healthcare emergency with minimal privacy disclosure. To balance the hazard of PHI privacy disclosure against the necessity of PHI processing and transmission in an m-Healthcare emergency, Lu et al. introduced in their SPOC framework an efficient user-centric privacy access control system built on an attribute-based access control mechanism and a new privacy-preserving scalar product computation (PPSPC) technique. However, we found that Lu et al.'s protocol still has security flaws concerning user anonymity and mutual authentication. To fix those problems and further enhance the computational efficiency of Lu et al.'s protocol, the authors present in this article an improved mobile-Healthcare emergency system based on extended chaotic maps. The new system is capable of not only providing flawless user anonymity and mutual authentication but also reducing the computation cost.
An improved optical flow tracking technique for real-time MR-guided beam therapies in moving organs
NASA Astrophysics Data System (ADS)
Zachiu, C.; Papadakis, N.; Ries, M.; Moonen, C.; de Senneville, B. Denis
2015-12-01
Magnetic resonance (MR) guided high intensity focused ultrasound and external beam radiotherapy interventions, which we shall refer to as beam therapies/interventions, are promising techniques for the non-invasive ablation of tumours in abdominal organs. However, therapeutic energy delivery in these areas becomes challenging due to the continuous displacement of the organs with respiration. Previous studies have addressed this problem by coupling high-framerate MR-imaging with a tracking technique based on the algorithm proposed by Horn and Schunck (H and S), which was chosen due to its fast convergence rate and highly parallelisable numerical scheme. Such characteristics were shown to be indispensable for the real-time guidance of beam therapies. In its original form, however, the algorithm is sensitive to local grey-level intensity variations not attributed to motion, such as those that occur, for example, in the proximity of pulsating arteries. In this study, an improved motion estimation strategy which reduces the impact of such effects is proposed. Displacements are estimated through the minimisation of a variation of the H and S functional for which the quadratic data fidelity term was replaced with a term based on the linear L1 norm, resulting in what we have called an L2-L1 functional. The proposed method was tested in the livers and kidneys of two healthy volunteers under free-breathing conditions, on a data set comprising 3000 images equally divided between the volunteers. The results show that, compared to the existing approaches, our method demonstrates a greater robustness to local grey-level intensity variations introduced by arterial pulsations. Additionally, the computational time required by our implementation makes it compatible with the workflow of real-time MR-guided beam interventions. To the best of our knowledge, this study is the first to analyse the behaviour of an L1-based optical flow functional in an applicative context: real-time MR-guidance of beam therapies in moving organs.
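For reference, the sketch below implements the classical Horn and Schunck iteration with the quadratic (L2) data term that the study replaces with an L1 term; the L2-L1 variant requires a different minimisation scheme and is not reproduced here. It is pure numpy, with wrap-around boundaries as a simplifying assumption.

```python
# Classical Horn & Schunck optical flow: iterate Jacobi-style updates of
# the flow field (u, v) given image gradients Ix, Iy, It and smoothness
# weight alpha.
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))
    Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
    It = im2 - im1
    u, v = np.zeros_like(im1), np.zeros_like(im1)

    def avg(f):  # 4-neighbour average (wrap-around boundaries for brevity)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u, v = u_bar - Ix * t, v_bar - Iy * t
    return u, v

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, 1, axis=1)   # frame shifted by one pixel
u, v = horn_schunck(frame1, frame2)
print(u.mean())                       # mean horizontal flow
```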
Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D
2013-11-01
There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.
Quad-rotor flight path energy optimization
NASA Astrophysics Data System (ADS)
Kemper, Edward
Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to be solved. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional-integral-derivative (PID) controller tuning technique. Finally, a brute-force, look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
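The following is a generic sketch of the heuristic layer mentioned above: a discrete PID controller whose gains come from the classic Ziegler-Nichols rules, given an experimentally determined ultimate gain Ku and oscillation period Tu. The numeric values and the altitude-control usage are illustrative assumptions, not the thesis' controller or quad-rotor model.

```python
# Discrete PID controller plus classic Ziegler-Nichols tuning.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def ziegler_nichols(ku, tu):
    """Classic ZN rules for a full PID: Kp=0.6Ku, Ti=Tu/2, Td=Tu/8."""
    return 0.6 * ku, 1.2 * ku / tu, 0.075 * ku * tu   # kp, ki, kd

kp, ki, kd = ziegler_nichols(ku=4.0, tu=0.8)    # placeholder Ku, Tu values
altitude_pid = PID(kp, ki, kd, dt=0.01)
thrust_cmd = altitude_pid.update(setpoint=10.0, measurement=9.2)
print(thrust_cmd)
```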
Yu, Hua-Gen
2015-01-28
We report a rigorous full dimensional quantum dynamics algorithm, the multi-layer Lanczos method, for computing vibrational energies and dipole transition intensities of polyatomic molecules without any dynamics approximation. The multi-layer Lanczos method is developed by using a few advanced techniques including the guided spectral transform Lanczos method, multi-layer Lanczos iteration approach, recursive residue generation method, and dipole-wavefunction contraction. The quantum molecular Hamiltonian at the total angular momentum J = 0 is represented in a set of orthogonal polyspherical coordinates so that the large amplitude motions of vibrations are naturally described. In particular, the algorithm is general and problem-independent. An application is illustrated by calculating the infrared vibrational dipole transition spectrum of CH₄ based on the ab initio T8 potential energy surface of Schwenke and Partridge and the low-order truncated ab initio dipole moment surfaces of Yurchenko and co-workers. A comparison with experiments is made. The algorithm is also applicable for Raman polarizability active spectra.
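As a greatly simplified illustration of the Lanczos recursion underlying the multi-layer method, the sketch below tridiagonalizes a dense symmetric matrix (a stand-in for the vibrational Hamiltonian) and reads approximate low-lying eigenvalues from the tridiagonal matrix. None of the layering, spectral transforms, or contraction steps of the actual algorithm are included.

```python
# Plain Lanczos iteration (no reorthogonalization): build a tridiagonal
# matrix T whose eigenvalues approximate the extreme eigenvalues of H.
import numpy as np

def lanczos(apply_H, dim, m, rng):
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v_prev = np.zeros(dim)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    b = 0.0
    for j in range(m):
        w = apply_H(v) - b * v_prev
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(1)
A = rng.normal(size=(300, 300))
H = (A + A.T) / 2                                   # random symmetric "Hamiltonian"
print(lanczos(lambda x: H @ x, 300, 60, rng)[:5])   # lowest Ritz values
```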
NASA Astrophysics Data System (ADS)
Rebelo, Marina de Sá; Aarre, Ann Kirstine Hummelgaard; Clemmesen, Karen-Louise; Brandão, Simone Cristina Soares; Giorgi, Maria Clementina; Meneghetti, José Cláudio; Gutierrez, Marco Antonio
2009-12-01
A method to compute three-dimensional (3D) left ventricle (LV) motion and a color-coded visualization scheme for its qualitative analysis in SPECT images is proposed. It is used to investigate some aspects of Cardiac Resynchronization Therapy (CRT). The method was applied to 3D gated-SPECT image sets from normal subjects and from patients with severe Idiopathic Heart Failure, before and after CRT. Color-coded visualization maps representing the LV regional motion showed significant differences between patients and normal subjects. Moreover, they indicated a difference between the two groups. Numerical results of regional mean values representing the intensity and direction of movement in the radial direction are presented. A difference of one order of magnitude in the intensity of movement was observed in patients relative to normal subjects. Quantitative and qualitative parameters gave good indications of the potential application of the technique to the diagnosis and follow-up of patients submitted to CRT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pursley, Jennifer, E-mail: jpursley@mgh.harvard.edu; Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA; Damato, Antonio L.
The purpose of this study was to investigate class solutions using RapidArc volumetric-modulated arc therapy (VMAT) planning for ipsilateral and bilateral head and neck (H&N) irradiation, and to compare dosimetric results with intensity-modulated radiotherapy (IMRT) plans. A total of 14 patients who received ipsilateral and 10 patients who received bilateral head and neck irradiation were retrospectively replanned with several volumetric-modulated arc therapy techniques. For ipsilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the contralateral parotid, two 260° or 270° arcs, and two 210° arcs. For bilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the shoulders, and 3 arcs. All patients had a sliding-window-delivery intensity-modulated radiotherapy plan that was used as the benchmark for dosimetric comparison. For ipsilateral neck irradiation, a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid was dosimetrically comparable to intensity-modulated radiotherapy, with improved conformity (conformity index = 1.22 vs 1.36, p < 0.04) and lower contralateral parotid mean dose (5.6 vs 6.8 Gy, p < 0.03). For bilateral neck irradiation, 3-arc volumetric-modulated arc therapy techniques were dosimetrically comparable to intensity-modulated radiotherapy while also avoiding irradiation through the shoulders. All volumetric-modulated arc therapy techniques required fewer monitor units than sliding-window intensity-modulated radiotherapy to deliver treatment, with an average reduction of 35% for ipsilateral plans and 67% for bilateral plans. Thus, for ipsilateral head and neck irradiation a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid is recommended. For bilateral neck irradiation, 2- or 3-arc techniques are dosimetrically comparable to intensity-modulated radiotherapy, but more work is needed to determine the optimal approaches by disease site.
Pursley, Jennifer; Damato, Antonio L; Czerminska, Maria A; Margalit, Danielle N; Sher, David J; Tishler, Roy B
2017-01-01
The purpose of this study was to investigate class solutions using RapidArc volumetric-modulated arc therapy (VMAT) planning for ipsilateral and bilateral head and neck (H&N) irradiation, and to compare dosimetric results with intensity-modulated radiotherapy (IMRT) plans. A total of 14 patients who received ipsilateral and 10 patients who received bilateral head and neck irradiation were retrospectively replanned with several volumetric-modulated arc therapy techniques. For ipsilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the contralateral parotid, two 260° or 270° arcs, and two 210° arcs. For bilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the shoulders, and 3 arcs. All patients had a sliding-window-delivery intensity-modulated radiotherapy plan that was used as the benchmark for dosimetric comparison. For ipsilateral neck irradiation, a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid was dosimetrically comparable to intensity-modulated radiotherapy, with improved conformity (conformity index = 1.22 vs 1.36, p < 0.04) and lower contralateral parotid mean dose (5.6 vs 6.8Gy, p < 0.03). For bilateral neck irradiation, 3-arc volumetric-modulated arc therapy techniques were dosimetrically comparable to intensity-modulated radiotherapy while also avoiding irradiation through the shoulders. All volumetric-modulated arc therapy techniques required fewer monitor units than sliding-window intensity-modulated radiotherapy to deliver treatment, with an average reduction of 35% for ipsilateral plans and 67% for bilateral plans. Thus, for ipsilateral head and neck irradiation a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid is recommended. For bilateral neck irradiation, 2- or 3-arc techniques are dosimetrically comparable to intensity-modulated radiotherapy, but more work is needed to determine the optimal approaches by disease site. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Moon, Jong Hoon; Jung, Jin-Hwa; Won, Young Sik; Cho, Hwi-Young
2017-01-01
[Purpose] The purpose of this study was to analyze the effect of Graston Technique on hamstring extensibility and pain intensity in patients with nonspecific low back pain. [Subjects and Methods] Twenty-four patients with nonspecific low back pain (27–46 years of age) enrolled in the study. All participants were randomly assigned to one of two groups: Graston technique group (n=12) and a static stretching group (n=12). The Graston Technique was used on the hamstring muscles of the experimental group, while the static stretching group performed static stretching. Hamstring extensibility was recorded using the sit and reach test, and a visual analog scale was used to measure pain intensity. [Results] Both groups showed a significant improvement after intervention. In comparison to the static stretching group, the Graston technique group had significantly more improvement in hamstring extensibility. [Conclusion] The Graston Technique is a simple and effective intervention in nonspecific low back pain patients to improve hamstring extensibility and lower pain intensity, and it would be beneficial in clinical practice. PMID:28265144
Moon, Jong Hoon; Jung, Jin-Hwa; Won, Young Sik; Cho, Hwi-Young
2017-02-01
[Purpose] The purpose of this study was to analyze the effect of Graston Technique on hamstring extensibility and pain intensity in patients with nonspecific low back pain. [Subjects and Methods] Twenty-four patients with nonspecific low back pain (27-46 years of age) enrolled in the study. All participants were randomly assigned to one of two groups: Graston technique group (n=12) and a static stretching group (n=12). The Graston Technique was used on the hamstring muscles of the experimental group, while the static stretching group performed static stretching. Hamstring extensibility was recorded using the sit and reach test, and a visual analog scale was used to measure pain intensity. [Results] Both groups showed a significant improvement after intervention. In comparison to the static stretching group, the Graston technique group had significantly more improvement in hamstring extensibility. [Conclusion] The Graston Technique is a simple and effective intervention in nonspecific low back pain patients to improve hamstring extensibility and lower pain intensity, and it would be beneficial in clinical practice.
A lightweight distributed framework for computational offloading in mobile cloud computing.
Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul
2014-01-01
The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing
Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul
2014-01-01
The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245
The large-scale structure of software-intensive systems
Booch, Grady
2012-01-01
The computer metaphor is dominant in most discussions of neuroscience, but the semantics attached to that metaphor are often quite naive. Herein, we examine the ontology of software-intensive systems, the nature of their structure and the application of the computer metaphor to the metaphysical questions of self and causation. PMID:23386964
Real-time processing of radar return on a parallel computer
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1992-01-01
NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of the radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC-based parallel computer, called the transputer, is used to investigate issues in real-time concurrent processing of radar signals. A transputer network is made up of an array of single instruction stream processors that can be networked in a variety of ways. They are easily reconfigured and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the Fast Fourier Transform (FFT), pulse-pair, and autoregressive modeling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken on this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis functions and other optimization techniques, an algorithm has been developed that is fast enough for real-time operation. The other approach is to model the signal as an autoregressive process and estimate the spectrum based on the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
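A hedged sketch of the second, autoregressive approach is given below: Yule-Walker estimation of the AR coefficients followed by evaluation of the model spectrum. It is written in Python with numpy rather than OCCAM on transputers, and the two-tone test signal is an illustrative assumption.

```python
# Yule-Walker AR spectral estimate: fit AR coefficients from sample
# autocorrelations, then evaluate the model power spectral density.
import numpy as np

def ar_spectrum(x, order, n_freq=512):
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]              # driving-noise variance
    w = np.linspace(0, np.pi, n_freq)
    denom = np.abs(1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a) ** 2
    return w, sigma2 / denom

rng = np.random.default_rng(0)
t = np.arange(1024)
sig = np.sin(0.3 * t) + 0.5 * np.sin(0.8 * t) + 0.5 * rng.normal(size=1024)
w, psd = ar_spectrum(sig, order=8)
print(w[np.argmax(psd)])     # dominant frequency (rad/sample)
```

Unlike the FFT periodogram, the AR spectrum is smooth by construction, which is the leakage advantage mentioned in the abstract; the cost is choosing an appropriate model order.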
NASA Astrophysics Data System (ADS)
Perrin, A.; Ndao, M.; Manceron, L.
2017-10-01
A recent paper [1] presents a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The NDSD-1000 database contains line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) for numerous cold and hot bands of the 14N16O2 isotopomer of nitrogen dioxide. The parameters used for the calculation of line positions and intensities were generated through a global modeling of experimental data collected in the literature within the framework of the method of effective operators. However, the form of the effective dipole moment operator used to compute the NO2 line intensities in the NDSD-1000 database differs from the classical one used for line intensity calculations in the NO2 infrared literature [12]. Using Fourier transform spectra recorded at high resolution in the 6.3 μm region, it is shown here that the NDSD-1000 formulation is incorrect, since the computed intensities do not properly account for the (Int(+)/Int(-)) intensity ratio between the (+) (J = N + 1/2) and (-) (J = N - 1/2) electron spin-rotation subcomponents of the computed vibration-rotation transitions. On the other hand, in the HITRAN and GEISA spectroscopic databases, the NO2 line intensities were computed using the classical theoretical approach, and it is shown here that these data lead to significantly better agreement between the observed and calculated spectra.
Quantitative image analysis of immunohistochemical stains using a CMYK color model
Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W
2007-01-01
Background Computer image analysis techniques have decreased the effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin-counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB, and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB, and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), both in absolute terms and relative to the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of the DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed a strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity, and amenable to simple automation procedures. These characteristics are advantageous for both basic and clinical research in an unbiased, reproducible, and high throughput evaluation of IHC intensity. PMID:17326824
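A minimal sketch of the Yellow-channel readout follows, assuming the standard RGB-to-CMYK conversion; the thresholding, calibration, and automation steps reported in the paper are not reproduced, and the random test image is a placeholder.

```python
# Convert an RGB image to CMYK and keep the Yellow channel as the
# stain-intensity measure: K = 1 - max(R, G, B), Y = (1 - B - K) / (1 - K).
import numpy as np

def yellow_channel(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3). Returns Y in [0, 1]."""
    k = 1.0 - rgb.max(axis=2)
    denom = np.where(1.0 - k > 1e-6, 1.0 - k, 1.0)   # guard against pure black
    y = (1.0 - rgb[..., 2] - k) / denom
    return np.clip(y, 0.0, 1.0)

img = np.random.default_rng(2).random((128, 128, 3))   # placeholder IHC image
y = yellow_channel(img)
print("mean Yellow intensity:", y.mean())
```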
Decision making and preferences for acoustic signals in choice situations by female crickets.
Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias
2015-08-01
Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.
Image fusion for visualization of hepatic vasculature and tumors
NASA Astrophysics Data System (ADS)
Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.
1995-05-01
We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on the visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximum intensity projection (MIP) algorithms are used for data visualization. To extract the liver in the series of images accurately and efficiently, we have developed a user-friendly interactive program with deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures; adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, their surface-rendered and MIP images are aligned, and a color table is designed for the simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, the portal veins, hepatic veins, and hepatic tumor can be inspected simultaneously and their spatial relationships can be more easily perceived. The proposed technique will be useful for the visualization of both hepatic neoplasm and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
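The maximum intensity projection step can be illustrated with a few lines of numpy: voxels outside a segmentation mask are suppressed so that only the extracted liver volume contributes to the projection. The synthetic volume and the box-shaped mask below are placeholders, not the paper's data or segmentation method.

```python
# Maximum intensity projection (MIP) through a masked CT volume.
import numpy as np

def mip(volume, mask=None, axis=0):
    vol = np.where(mask, volume, volume.min()) if mask is not None else volume
    return vol.max(axis=axis)

rng = np.random.default_rng(3)
ct = rng.normal(50, 10, (64, 128, 128))       # synthetic spiral-CT volume
liver_mask = np.zeros_like(ct, dtype=bool)
liver_mask[:, 32:96, 32:96] = True            # placeholder segmentation
projection = mip(ct, liver_mask, axis=0)
print(projection.shape)
```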
Wei, Xuelei; Dong, Fuhui
2011-12-01
To review recent advances in the research and application of computer-aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer-aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer-aided forming techniques for bone scaffold construction using various scaffold materials, based on computer-aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design. Reverse design can fully simulate normal bone tissue and could be very useful for CAD. RP techniques include fused deposition modeling, three-dimensional printing, selective laser sintering, three-dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer-aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.
Holographic diagnostics of biological microparticles
NASA Astrophysics Data System (ADS)
Dyomin, Victor V.; Sokolov, Vladimir V.
1996-05-01
The study of biological microobjects is a topical problem for ecology, medicine, and biology. Holographic techniques are useful for solving this problem. These microobjects are quite often transparent or semitransparent in visible light. The case of an optically soft particle (that is, a particle whose substance has a refractive index close to that of the surrounding medium) is quite probable in biological water suspensions. Some peculiarities of holographing optically soft microparticles are analyzed in this paper. We propose a technique to calculate the light intensity distribution in the plane of a hologram and in the plane of a holographic image of a particle of arbitrary shape at an arbitrary distance from the latter plane. The efficiency of the proposed approach is demonstrated by calculation results obtained analytically for some simple cases. In more complicated cases the technique can form a basis for numerical computations. A method for determining the refractive index of transparent and semitransparent microparticles is proposed. We also present some experimental results on the holographic detection of water drops and of such optically soft particles as ova of helminths in human jaundice.
New Ground Truth Capability from InSAR Time Series Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, S; Vincent, P; Yang, D
2005-07-13
We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter-per-year surface movements when sufficient data exist for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
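As a sketch of the "inversion in a final post-processing step" mentioned above, the code below solves a small least-squares problem in which each interferogram constrains the displacement difference between two acquisition dates, recovering a cumulative displacement time series relative to the first date. The date pairs and displacement values are synthetic placeholders, not processed InSAR data.

```python
# Least-squares inversion of interferogram pairs into a displacement
# time series (date 0 taken as the zero-displacement reference).
import numpy as np

def invert_time_series(pairs, dphi, n_dates):
    A = np.zeros((len(pairs), n_dates - 1))
    for i, (early, late) in enumerate(pairs):
        if late > 0:
            A[i, late - 1] = 1.0
        if early > 0:
            A[i, early - 1] = -1.0
    d, *_ = np.linalg.lstsq(A, dphi, rcond=None)
    return np.concatenate(([0.0], d))

pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]        # interferogram date pairs
truth = np.array([0.0, 2.0, 3.5, 5.0])                  # mm, cumulative displacement
dphi = np.array([truth[late] - truth[early] for early, late in pairs])
print(invert_time_series(pairs, dphi, n_dates=4))       # recovers the series
```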
Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.
Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong
2006-04-01
This paper presents an image-based method for virtual bronchoscope with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions are restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis was used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring that involves both real and rendered bronchoscope images is conducted.
Active Learning to Understand Infectious Disease Models and Improve Policy Making
Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-01-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387
Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis
NASA Technical Reports Server (NTRS)
Cox, C. F.; Cinnella, P.; Westmoreland, S.
1996-01-01
The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'Black Box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generalization of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures and plasmas. As demonstration of the potential of the methodologies, several solutions, involving reacting and perfect gas flows, will be presented. Included is a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques will be discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide for an efficient and general tool for the design and analysis of propulsion systems.
Active learning to understand infectious disease models and improve policy making.
Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-04-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.
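A very small stand-in for the iterative surrogate-modeling loop described in these two records is sketched below: an expensive model is sampled, two cheap surrogates are fitted (plain polynomials rather than symbolic regression), and the next sample is placed where the surrogates disagree most, a query-by-committee heuristic. The one-dimensional test function and all parameters are illustrative assumptions.

```python
# Toy active-learning loop: sample the expensive model where two cheap
# surrogate fits disagree most, then refit and repeat.
import numpy as np

def expensive_model(x):                  # placeholder for the simulator
    return np.sin(3 * x) + 0.3 * x

def active_learning(n_rounds=5, n_init=6):
    x = np.linspace(0, 2, n_init)
    y = expensive_model(x)
    grid = np.linspace(0, 2, 201)
    for _ in range(n_rounds):
        lo = np.polyval(np.polyfit(x, y, 3), grid)   # committee of two
        hi = np.polyval(np.polyfit(x, y, 5), grid)   # cheap surrogates
        x_new = grid[np.argmax(np.abs(hi - lo))]     # query where they disagree most
        x = np.append(x, x_new)
        y = np.append(y, expensive_model(x_new))
    return x, y

xs, ys = active_learning()
print(len(xs), "model evaluations")
```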
Environmental Contaminants in Hospital Settings and Progress in Disinfecting Techniques
Ceriale, Emma; Lenzi, Daniele; Burgassi, Sandra; Azzolini, Elena; Manzi, Pietro
2013-01-01
Medical devices, such as stethoscopes, and other objects found in hospital, such as computer keyboards and telephone handsets, may be reservoirs of bacteria for healthcare-associated infections. In this cross-over study involving an Italian teaching hospital we evaluated microbial contamination (total bacterial count (TBC) at 36°C/22°C, Staphylococcus spp., moulds, Enterococcus spp., Pseudomonas spp., E. coli, total coliform bacteria, Acinetobacter spp., and Clostridium difficile) of these devices before and after cleaning and differences in contamination between hospital units and between stethoscopes and keyboards plus handsets. We analysed 37 telephone handsets, 27 computer keyboards, and 35 stethoscopes, comparing their contamination in four hospital units. Wilcoxon signed-rank and Mann-Whitney tests were used. Before cleaning, many samples were positive for Staphylococcus spp. and coliforms. After cleaning, CFUs decreased to zero in most comparisons. The first aid unit had the highest and intensive care the lowest contamination (P < 0.01). Keyboards and handsets had higher TBC at 22°C (P = 0.046) and mould contamination (P = 0.002) than stethoscopes. Healthcare professionals should disinfect stethoscopes and other possible sources of bacterial healthcare-associated infections. The cleaning technique used was effective in reducing bacterial contamination. Units with high patient turnover, such as first aid, should practise stricter hygiene. PMID:24286078
Axisymmetric bluff-body flow: A vortex solver for thin shells
NASA Astrophysics Data System (ADS)
Strickland, J. H.
1992-05-01
A method which is capable of solving the axisymmetric flow field over bluff bodies consisting of thin shells such as disks, partial spheres, rings, and other such shapes is presented in this report. The body may be made up of several shells whose edges are separated by gaps. The body may be moved axially according to arbitrary velocity time histories. In addition, the surfaces may possess axial and radial degrees of flexibility such that points on the surfaces may be allowed to move relative to each other according to some specified function of time. The surfaces may be either porous or impervious. The present solution technique is based on the axisymmetric vorticity transport equation. Physically, this technique simulates the generation of vorticity at body surfaces in the form of discrete ring vortices which are subsequently diffused and convected into the boundary layers and wake of the body. Relatively large numbers of vortices (1000 or more) are required to obtain good simulations. Since the direct calculation of perturbations from large numbers of ring vortices is computationally intensive, a fast multipole method was used to greatly reduce computer processing time. Several example calculations are presented for disks, disks with holes, hemispheres, and vented hemispheres. These results are compared with steady and unsteady experimental data.
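The pairwise interaction that the fast multipole method accelerates can be illustrated with a direct O(N^2) sum. The sketch below uses two-dimensional point vortices as a planar stand-in for the axisymmetric ring vortices of the report, with a small core radius to regularize close encounters; the strengths and positions are random placeholders.

```python
# Direct O(N^2) induced-velocity sum for 2-D point vortices.
import numpy as np

def induced_velocity(pos, gamma, core=1e-3):
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx ** 2 + dy ** 2 + core ** 2
    u = -(gamma[None, :] * dy / (2 * np.pi * r2)).sum(axis=1)
    v = (gamma[None, :] * dx / (2 * np.pi * r2)).sum(axis=1)
    return np.column_stack([u, v])

rng = np.random.default_rng(4)
positions = rng.random((1000, 2))          # 1000 vortices, comparable to the report
strengths = rng.normal(0, 1e-3, 1000)
vel = induced_velocity(positions, strengths)
print(vel.shape)
```

Every evaluation touches all vortex pairs, which is exactly the cost that a fast multipole expansion reduces for large vortex counts.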
Direct Numerical Simulation of Automobile Cavity Tones
NASA Technical Reports Server (NTRS)
Kurbatskii, Konstantin; Tam, Christopher K. W.
2000-01-01
The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.
Bowtie filters for dedicated breast CT: Theory and computational implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontson, Kimberly, E-mail: Kimberly.Kontson@fda.hhs.gov; Jennings, Robert J.
Purpose: To design bowtie filters with improved properties for dedicated breast CT to improve image quality and reduce dose to the patient. Methods: The authors present three different bowtie filters designed for a cylindrical 14-cm diameter phantom with a uniform composition of 40/60 breast tissue, which vary in their design objectives and performance improvements. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. Bowtie design #3 eliminates the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. All three designs are obtained using analytical computational methods and linear attenuation coefficients. Thus, the designs do not take into account the effects of scatter. The authors considered this to be a reasonable approach to the filter design problem since the use of Monte Carlo methods would have been computationally intensive. The filter profiles for a cone-angle of 0° were used for the entire length of each filter because the differences between those profiles and the correct cone-beam profiles for the cone angles in our system are very small, and the constant profiles allowed construction of the filters with the facilities available to us. For evaluation of the filters, we used Monte Carlo simulation techniques and the full cone-beam geometry. Images were generated with and without each bowtie filter to analyze the effect on dose distribution, noise uniformity, and contrast-to-noise ratio (CNR) homogeneity. Line profiles through the reconstructed images generated from the simulated projection images were also used as validation for the filter designs. Results: Examples of the three designs are presented. Initial verification of performance of the designs was done using analytical computations of HVL, intensity, and effective attenuation coefficient behind the phantom as a function of fan-angle with a cone-angle of 0°. The performance of the designs depends only weakly on incident spectrum and tissue composition. For all designs, the dynamic range requirement on the detector was reduced compared to the no-bowtie-filter case. Further verification of the filter designs was achieved through analysis of reconstructed images from simulations. Simulation data also showed that the use of our bowtie filters can reduce peripheral dose to the breast by 61% and provide uniform noise and CNR distributions. The bowtie filter design concepts validated in this work were then used to create a computational realization of a 3D anthropomorphic bowtie filter capable of achieving a constant effective attenuation coefficient behind the entire field-of-view of an anthropomorphic breast phantom. Conclusions: Three different bowtie filter designs that vary in performance improvements were described and evaluated using computational and simulation techniques. Results indicate that the designs are robust against variations in breast diameter, breast composition, and tube voltage, and that the use of these filters can reduce patient dose and improve image quality compared to the no-bowtie-filter case.
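The third design idea, equalizing the effective attenuation of every ray, can be sketched in a monoenergetic, single-material form: the filter thickness at each fan angle is chosen so that filter-plus-phantom attenuation matches that of the central ray through a cylindrical phantom. The geometry and attenuation coefficients below are placeholders, and the polyenergetic spectral treatment of the paper is not reproduced.

```python
# Monoenergetic bowtie-thickness profile: equalize total attenuation
# (filter + cylindrical phantom) across the fan of rays.
import numpy as np

def bowtie_thickness(fan_angles, src_to_iso=65.0, r_phantom=7.0,
                     mu_phantom=0.2, mu_filter=0.5):
    # Units: lengths in cm, attenuation coefficients in 1/cm (nominal values).
    s = src_to_iso * np.sin(fan_angles)                  # ray offset at isocenter
    chord = 2.0 * np.sqrt(np.clip(r_phantom ** 2 - s ** 2, 0.0, None))
    deficit = mu_phantom * (chord.max() - chord)         # missing phantom attenuation
    return deficit / mu_filter                           # filter thickness (cm)

angles = np.deg2rad(np.linspace(-7, 7, 101))
print(bowtie_thickness(angles).round(2))
```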
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert P. Lucht
Laser-induced polarization spectroscopy (LIPS), degenerate four-wave mixing (DFWM), and electronic-resonance-enhanced (ERE) coherent anti-Stokes Raman scattering (CARS) are techniques that show great promise for sensitive measurements of transient gas-phase species, and diagnostic applications of these techniques are being pursued actively at laboratories throughout the world. However, significant questions remain regarding strategies for quantitative concentration measurements using these techniques. The primary objective of this research program is to develop and test strategies for quantitative concentration measurements in flames and plasmas using these nonlinear optical techniques. Theoretically, we are investigating the physics of these processes by direct numerical integration (DNI) of the time-dependent density matrix equations that describe the wave-mixing interaction. Significantly fewer restrictive assumptions are required when the density matrix equations are solved using this DNI approach compared with the assumptions required to obtain analytical solutions. For example, for LIPS calculations, the Zeeman state structure and hyperfine structure of the resonance and effects such as Doppler broadening can be included. There is no restriction on the intensity of the pump and probe beams in these nonperturbative calculations, and both the pump and probe beam intensities can be high enough to saturate the resonance. As computer processing speeds have increased, we have incorporated more complicated physical models into our DNI codes. During the last project period we developed numerical methods for nonperturbative calculations of the two-photon absorption process. Experimentally, diagnostic techniques are developed and demonstrated in gas cells and/or well-characterized flames for ease of comparison with model results. The techniques of two-photon, two-color H-atom LIPS and three-laser ERE CARS for NO and C₂H₂ were demonstrated during the project period, and nonperturbative numerical models of both of these techniques were developed. In addition, we developed new single-mode, injection-seeded optical parametric laser sources (OPLSs) that will be used to replace multi-mode commercial dye lasers in our experimental measurements. The use of single-mode laser radiation in our experiments will increase significantly the rigor with which theory and experiment are compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, J; Braunstein, S; McDermott, M
Purpose: Sharp dose fall-off is the hallmark of brain radiosurgery to deliver a high dose of radiation to the target while minimizing dose to normal brain tissue. In this study, we developed a technique to enhance the peripheral dose gradient by increasing the total number of beams focused toward each isocenter via patient head tilt and simultaneous beam intensity modulation. Methods: Computer scripting for the proposed beam number enhancement (BNE) technique was developed. The technique was tested and then implemented on a clinical treatment planning system for a dedicated brain radiosurgical system (GK Perfexion, Elekta Oncology). To study the technical feasibility and dosimetric advantages of the technique, we compared treatment planning quality and delivery efficiency for 20 radiosurgical cases previously treated at our institution. These cases included relatively complex treatments such as acoustic schwannoma, meningioma, brain metastasis and mesial temporal lobe epilepsy. Results: The BNE treatment plans were found to produce nearly identical target volume coverage (absolute value < 0.5%, P > 0.2) and dose conformity (BNE CI = 1.41±0.15 versus 1.41±0.20, P > 0.9) as the original treatment plans. The total beam-on times for the BNE treatment plans were comparable (within 1.0 min or 1.8%) with those of the original treatment plans for all the cases. However, BNE treatment plans significantly improved the mean gradient index (BNE GI = 2.9±0.3 versus original GI = 3.0±0.3, p < 0.0001) and low-level isodose volumes, e.g. 20-50% prescribed isodose volumes, by 2.0% to 5.0% (p < 0.02). Furthermore, with a 4 to 5-fold increase in the total number of beams, the GI decreased by as much as 20%, or 0.5 in absolute value. Conclusion: BNE via head tilt and simultaneous beam intensity modulation is an effective and efficient technique that physically sharpens the peripheral dose gradient for brain radiosurgery.
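For readers unfamiliar with the gradient index (GI) quoted above, a minimal sketch of its computation from a dose grid is given below (GI taken as the volume receiving at least half the prescription dose divided by the prescription isodose volume); the dose array, prescription and voxel size are placeholders, not data from this study.

```python
import numpy as np

def gradient_index(dose, prescription, voxel_volume_cc):
    """GI = V(50% of prescription isodose) / V(100% of prescription isodose)."""
    v_rx = np.count_nonzero(dose >= prescription) * voxel_volume_cc
    v_half = np.count_nonzero(dose >= 0.5 * prescription) * voxel_volume_cc
    return v_half / v_rx

# Toy example: a spherically decaying dose distribution (placeholder data, 1-mm voxels)
x, y, z = np.mgrid[-40:41, -40:41, -40:41] * 1.0
r = np.sqrt(x**2 + y**2 + z**2) + 1e-6
dose = 20.0 * np.exp(-r / 8.0)            # Gy, arbitrary fall-off
print("GI =", round(gradient_index(dose, prescription=12.0, voxel_volume_cc=0.001), 2))
```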
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
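A toy, pure-Python illustration of the multi-key idea described above (tagging intermediate records with an algorithm identifier so several related algorithms can share a single map/shuffle/reduce pass); this is a sketch of the concept only, not MRPack's or Hadoop's actual API.

```python
from collections import defaultdict

# Two related "algorithms" run over the same input split in a single map pass.
def word_count_map(record):
    for w in record.split():
        yield ("wordcount", w), 1

def char_count_map(record):
    yield ("charcount", "total_chars"), len(record)

ALGORITHMS = (word_count_map, char_count_map)

def map_phase(records):
    """Emit composite (algorithm_id, key) pairs so one shuffle serves all algorithms."""
    for record in records:
        for algo in ALGORITHMS:
            yield from algo(record)

def reduce_phase(pairs):
    grouped = defaultdict(int)
    for key, value in pairs:
        grouped[key] += value          # both toy algorithms use an additive reducer
    return dict(grouped)

data = ["big data needs big compute", "map reduce map reduce"]
print(reduce_phase(map_phase(data)))
```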
Ali, Syed Mashhood; Shamim, Shazia
2015-07-01
Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine an atom-accurate structure of the inclusion complexes. 1H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but it was not clear whether one or both rings were included. Molecular mechanics and molecular dynamics calculations showed the entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum-energy computational models were analyzed for their accuracy in atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found to be in agreement. Several least-energy computational models were refined and analyzed until calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy and that quantitative ROESY analysis is a promising method. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Simulating Quantile Models with Applications to Economics and Management
NASA Astrophysics Data System (ADS)
Machado, José A. F.
2010-05-01
The massive increase in the speed of computers over the past forty years has changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they can feasibly tackle. The new methods that make intensive use of computing power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
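A compressed sketch of the computer-intensive idea behind inferring a marginal distribution from an estimated conditional quantile process, in the spirit of the Machado-Mata approach cited above, using synthetic data; the variable names and data-generating process are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic "wage" data: log-wage depends on schooling with heteroskedastic noise (illustrative).
n = 2000
school = rng.integers(8, 18, size=n)
logwage = 0.5 + 0.08 * school + (0.1 + 0.01 * school) * rng.standard_normal(n)
df = pd.DataFrame({"logwage": logwage, "school": school})

# Step 1: estimate the conditional quantile process on a grid of quantiles.
taus = np.linspace(0.05, 0.95, 19)
betas = [smf.quantreg("logwage ~ school", df).fit(q=t).params for t in taus]

# Step 2 (simulation): draw quantiles and covariate values at random and evaluate
# x'beta(tau) to build a sample from the implied marginal distribution.
m = 5000
tau_idx = rng.integers(0, len(taus), size=m)
x_draw = rng.choice(school, size=m, replace=True)
simulated = np.array([betas[i]["Intercept"] + betas[i]["school"] * x
                      for i, x in zip(tau_idx, x_draw)])

# Compare simulated and observed marginal quantiles.
for q in (0.1, 0.5, 0.9):
    print(q, round(np.quantile(simulated, q), 3), round(np.quantile(logwage, q), 3))
```

Replacing the drawn covariates with a counterfactual covariate distribution yields the counterfactual marginal described in the cited paper.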
Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars
2013-08-01
Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis of short- and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire was designed, including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time. Six hundred and ninety office workers completed the questionnaire in response to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer than men on work ability and the ability to fulfil productivity requirements (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations, as well as the poorer work ability reported by women workers, relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.
Combined mine tremors source location and error evaluation in the Lubin Copper Mine (Poland)
NASA Astrophysics Data System (ADS)
Leśniak, Andrzej; Pszczoła, Grzegorz
2008-08-01
A modified method of mine tremor location used in the Lubin Copper Mine is presented in the paper. In mines where intensive exploitation is carried out, a high-accuracy source location technique is usually required. The flatness of the geophone array, the complex geological structure of the rock mass and the intense exploitation make the location results ambiguous in such mines. In the present paper an effective method of source location and location error evaluation is presented, combining data from two different arrays of geophones. The first consists of uniaxial geophones spaced over the whole mine area. The second is installed in one of the mining panels and consists of triaxial geophones. The use of the data obtained from the triaxial geophones increases the precision of the vertical coordinate of the hypocenter. The presented two-step location procedure combines standard location methods: P-wave directions and P-wave arrival times. The efficiency of the created algorithm was tested using computer simulations. The designed algorithm is fully non-linear and was tested on a multilayered rock mass model of the Lubin Copper Mine, showing better computational efficiency than the traditional P-wave arrival-time location algorithm. In this paper we present the complete procedure that effectively solves the non-linear location problems, i.e. the mine tremor location and the measurement of the error propagation.
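A heavily simplified sketch of arrival-time-based location (only the P-wave arrival-time part of the combined procedure, with a homogeneous velocity model and a brute-force grid search); the station geometry, velocity and noise level below are placeholders, not Lubin Copper Mine data.

```python
import numpy as np

rng = np.random.default_rng(1)
vp = 5.8                                     # km/s, assumed homogeneous P velocity

# Geophone positions (x, y, z) in km: a nearly flat array (placeholder geometry)
stations = np.array([[0.0, 0.0, 0.60], [1.2, 0.3, 0.60], [0.4, 1.5, 0.65],
                     [1.6, 1.4, 0.62], [0.9, 0.8, 0.58]])
true_src = np.array([0.8, 0.9, 1.1])         # true hypocenter, km
arrivals = np.linalg.norm(stations - true_src, axis=1) / vp
arrivals += 0.005 * rng.standard_normal(len(stations))   # picking noise, s

# Brute-force grid search minimizing arrival-time misfit; the unknown origin time
# is eliminated by demeaning observed and predicted times.
xs = ys = np.linspace(0.0, 2.0, 41)
zs = np.linspace(0.5, 1.5, 21)
best, best_misfit = None, np.inf
for x in xs:
    for y in ys:
        for z in zs:
            pred = np.linalg.norm(stations - np.array([x, y, z]), axis=1) / vp
            res = (arrivals - arrivals.mean()) - (pred - pred.mean())
            misfit = np.sum(res**2)
            if misfit < best_misfit:
                best, best_misfit = (x, y, z), misfit
print("estimated hypocenter:", np.round(best, 2), "true:", true_src)
```

The flat-array geometry makes the vertical coordinate the least constrained parameter, which is why the triaxial (direction-sensitive) data are combined with arrival times in the paper's two-step procedure.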
Effects of unilateral laser-assisted ventriculocordectomy in horses with laryngeal hemiplegia.
Robinson, P; Derksen, F J; Stick, J A; Sullins, K E; DeTolve, P G; Robinson, N E
2006-11-01
Recent studies have evaluated surgical techniques aimed at reducing noise and improving airway function in horses with recurrent laryngeal neuropathy (RLN). These techniques require general anaesthesia and are invasive. A minimally invasive transnasal surgical technique for treatment of RLN that may be employed in the standing, sedated horse would be advantageous. To determine whether unilateral laser-assisted ventriculocordectomy (LVC) improves upper airway function and reduces noise during inhalation in exercising horses with laryngeal hemiplegia (LH), six Standardbred horses were used; respiratory sound and inspiratory trans-upper airway pressure (Pui) were measured before and after induction of LH, and 60, 90 and 120 days after LVC. Inspiratory sound level (SL) and the sound intensities of formants 1, 2 and 3 (F1, F2 and F3, respectively) were measured using computer-based sound analysis programmes. In addition, upper airway endoscopy was performed at each time interval, at rest and during treadmill exercise. In LH-affected horses, Pui, SL and the sound intensities of F2 and F3 were increased significantly from baseline values. At 60 days after LVC, Pui and SL had returned to baseline, and F2 and F3 values had improved partially compared to LH values. At 90 and 120 days, however, SL increased again to LH levels. LVC decreases LH-associated airway obstruction by 60 days after surgery, and reduces inspiratory noise, but not as effectively as bilateral ventriculocordectomy. LVC may be recommended as a treatment of LH where reduction of upper airway obstruction and respiratory noise is desired and the owner wishes to avoid the risks associated with a laryngotomy incision or general anaesthesia.
Song, Sanghyuk; Chang, Ji Hyun; Kim, Hak Jae; Kim, Yeon Sil; Kim, Jin Hee; Ahn, Yong Chan; Kim, Jae-Sung; Song, Si Yeol; Moon, Sung Ho; Cho, Moon June; Youn, Seon Min
2017-07-01
Stereotactic ablative radiotherapy (SABR) is an effective emerging technique for early-stage non-small cell lung cancer (NSCLC). We investigated the current practice of SABR for early-stage NSCLC in Korea. We conducted a nationwide survey of SABR for NSCLC by sending e-mails to all board-certified members of the Korean Society for Radiation Oncology. The survey included 23 questions focusing on the technical aspects of SABR and 18 questions seeking the participants' opinions on specific clinical scenarios in the use of SABR for early-stage NSCLC. Overall, 79 radiation oncologists at 61/85 specialist hospitals in Korea (71.8%) responded to the survey. SABR was used at 33 institutions (54%) to treat NSCLC. Regarding technical aspects, the most common planning methods were the rotational intensity-modulated technique (59%) and the static intensity-modulated technique (49%). Respiratory motion was managed by gating (54%) or abdominal compression (51%), and 86% of the planning scans were obtained using 4-dimensional computed tomography. In the clinical scenarios, the most commonly chosen fractionation schedule for peripherally located T1 NSCLC was 60 Gy in four fractions. For centrally located tumors and T2 NSCLC, the oncologists tended to avoid SABR for radiotherapy, and extended the fractionation schedule. The results of our survey indicated that SABR is increasingly being used to treat NSCLC in Korea. However, there were wide variations in the technical protocols and fractionation schedules of SABR for early-stage NSCLC among institutions. Standardization of SABR is necessary before implementing nationwide, multicenter, randomized studies.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently used as the initial value for the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address the difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
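A schematic sketch of the hybrid idea (a derivative-free global search stopped early, followed by a derivative-free local refinement) on a toy 2-D Poisson source-localization problem; SciPy's dual annealing and Nelder-Mead are used here only as stand-ins for the paper's global optimizers and implicit filtering, and all geometry, response model and count rates are invented.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

rng = np.random.default_rng(2)

detectors = np.array([[20.0, 20.0], [230.0, 30.0], [40.0, 160.0], [200.0, 150.0]])  # m
true_xy, true_I = np.array([120.0, 90.0]), 5.0e4      # toy source position and intensity
background = 5.0                                       # assumed background counts per detector

def expected_counts(x, y, intensity):
    r2 = np.sum((detectors - [x, y])**2, axis=1)
    return background + intensity / r2                 # inverse-square response, no shielding

counts = rng.poisson(expected_counts(*true_xy, true_I))

def neg_log_likelihood(theta):
    lam = expected_counts(*theta)
    return np.sum(lam - counts * np.log(lam))          # Poisson NLL up to a constant

bounds = [(0.0, 250.0), (0.0, 180.0), (1.0, 1.0e6)]

# Stage 1: global, derivative-free search with an early-stopping budget.
coarse = dual_annealing(neg_log_likelihood, bounds, maxiter=50, seed=3)
# Stage 2: local, derivative-free refinement started from the pseudo-optimum.
refined = minimize(neg_log_likelihood, coarse.x, method="Nelder-Mead")
print("estimate (x, y, I):", np.round(refined.x, 1))
```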
Fundamental limits to superresolution fluorescence microscopy
NASA Astrophysics Data System (ADS)
Small, Alex
2013-02-01
Superresolution fluorescence microscopy techniques such as PALM, STORM, STED, and Structured Illumination Microscopy (SIM) enable imaging of live cells at nanometer resolution. The common theme in all of these techniques is that the diffraction limit is circumvented by controlling the states of fluorescent molecules. Although the samples are labeled very densely (i.e. with spacing much smaller than the Airy distance), not all of the molecules are emitting at the same time. Consequently, one does not encounter overlapping blurs. In the deterministic techniques (STED, SIM) the achievable resolution scales as the wavelength of light divided by the square root of the intensity of a beam used to control the fluorescent state. In the stochastic techniques (PALM, STORM), the achievable resolution scales as the wavelength of light divided by the square root of the number of photons collected. Although these limits arise from very different mechanisms (parabolic beam profiles for STED and SIM, statistics for PALM and STORM), in all cases the resolution scales inversely with the square root of a measure of the number of photons used in the experiment. We have developed a proof that this relationship between resolution and photon count is universal to techniques that control the states of fluorophores using classical light. Our proof encompasses linear and nonlinear optics, as well as computational post-processing techniques for extracting information beyond the diffraction limit. If there are techniques that can achieve a more efficient relationship between resolution and photon count, those techniques will require light exhibiting non-classical correlations.
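The scaling relations summarized above can be written compactly; these are the standard textbook forms, quoted here for orientation rather than taken verbatim from the abstract:

```latex
% Deterministic techniques (STED/SIM-like): resolution improves with control-beam intensity I
\Delta x_{\mathrm{det}} \;\approx\; \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_{\mathrm{sat}}}}
% Stochastic techniques (PALM/STORM-like): localization precision improves with photon count N
\Delta x_{\mathrm{stoch}} \;\approx\; \frac{\sigma_{\mathrm{PSF}}}{\sqrt{N}},
\qquad \sigma_{\mathrm{PSF}} \sim \frac{\lambda}{2\,\mathrm{NA}}
```

In both cases the resolution improves only as the square root of a photon budget, which is the common limit the abstract argues is universal for classical-light techniques.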
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
Techniques to derive geometries for image-based Eulerian computations
Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.
2014-01-01
Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470
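A minimal sketch of the "convert a (possibly pixelated) segmentation into a smoothed level set" step described above, assuming a binary mask as input; Gaussian smoothing is used here as a simple stand-in for SRAD-style speckle-reducing diffusion.

```python
import numpy as np
from scipy import ndimage

def mask_to_level_set(mask, smooth_sigma=1.5):
    """Signed distance level set from a binary segmentation (negative inside, positive outside)."""
    mask = mask.astype(bool)
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(~mask)
    phi = outside - inside
    return ndimage.gaussian_filter(phi, smooth_sigma)    # mild smoothing of the interface

# Toy example: a noisy disc, as a pixelated clustering result might look
yy, xx = np.mgrid[0:128, 0:128]
mask = (xx - 64)**2 + (yy - 64)**2 < 30**2
mask ^= np.random.default_rng(4).random(mask.shape) < 0.02   # salt-and-pepper "segmentation noise"
phi = mask_to_level_set(mask)
print("zero level set near the nominal radius:", bool(np.abs(phi[64, 94]) < 3))
```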
NASA Astrophysics Data System (ADS)
Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.
2013-03-01
3D segmentation of the prostate in medical images is useful to prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean±std MAD of 2.5±0.7 mm, DSC of 80±4%, and ΔV of 1.1±8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
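A 1-D sketch of the radial search step described above: a learned mean intensity patch is translated along a ray from the gland centre and the boundary candidate is taken where the normalized cross-correlation peaks; the intensity profile and patch here are synthetic, not MRI data.

```python
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def best_boundary_index(ray_profile, mean_patch):
    """Slide the mean patch along the ray; return the window-centre index with maximal NCC."""
    w = len(mean_patch)
    scores = [ncc(ray_profile[i:i + w], mean_patch)
              for i in range(len(ray_profile) - w + 1)]
    return int(np.argmax(scores)) + w // 2

# Synthetic ray: darker gland interior, brighter outside, boundary at index 60 (toy numbers)
rng = np.random.default_rng(5)
ray = np.concatenate([np.full(60, 0.3), np.full(40, 0.8)]) + 0.05 * rng.standard_normal(100)
patch = np.concatenate([np.full(5, 0.3), np.full(5, 0.8)])   # assumed step-like mean appearance
print("boundary candidate at index:", best_boundary_index(ray, patch))
```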
SURROGATE MODEL DEVELOPMENT AND VALIDATION FOR RELIABILITY ANALYSIS OF REACTOR PRESSURE VESSELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, William M.; Riley, Matthew E.; Spencer, Benjamin W.
In nuclear light water reactors (LWRs), the reactor coolant, core and shroud are contained within a massive, thick-walled steel vessel known as a reactor pressure vessel (RPV). Given the tremendous size of these structures, RPVs typically contain a large population of pre-existing flaws introduced in the manufacturing process. After many years of operation, irradiation-induced embrittlement makes these vessels increasingly susceptible to fracture initiation at the locations of the pre-existing flaws. Because of the uncertainty in the loading conditions, flaw characteristics and material properties, probabilistic methods are widely accepted and used in assessing RPV integrity. The Fracture Analysis of Vessels – Oak Ridge (FAVOR) computer program developed by researchers at Oak Ridge National Laboratory is widely used for this purpose. This program can be used in order to perform deterministic and probabilistic risk-informed analyses of the structural integrity of an RPV subjected to a range of thermal-hydraulic events. FAVOR uses a one-dimensional representation of the global response of the RPV, which is appropriate for the beltline region, which experiences the most embrittlement, and employs an influence coefficient technique to rapidly compute stress intensity factors for axis-aligned surface-breaking flaws. The Grizzly code is currently under development at Idaho National Laboratory (INL) to be used as a general multiphysics simulation tool to study a variety of degradation mechanisms in nuclear power plant components. The first application of Grizzly has been to study fracture in embrittled RPVs. Grizzly can be used to model the thermo-mechanical response of an RPV under transient conditions observed in a pressurized thermal shock (PTS) scenario. The global response of the vessel provides boundary conditions for local 3D models of the material in the vicinity of a flaw. Fracture domain integrals are computed to obtain stress intensity factors, which can in turn be used to assess whether a fracture would initiate at a pre-existing flaw. To use Grizzly for probabilistic analysis, it is necessary to have a way to rapidly evaluate stress intensity factors. To accomplish this goal, a reduced order model (ROM) has been developed to efficiently represent the behavior of a detailed 3D Grizzly model used to calculate fracture parameters. This approach uses the stress intensity factor influence coefficient method that has been used with great success in FAVOR. Instead of interpolating between tabulated solutions, as FAVOR does, the ROM approach uses a response surface methodology to compute fracture solutions based on a sampled set of results used to train the ROM. The main advantages of this approach are that the process of generating the training data can be fully automated, and the procedure can be readily used to consider more general flaw configurations. This paper demonstrates the procedure used to generate a ROM to rapidly compute stress intensity factors for axis-aligned flaws. The results from this procedure are in good agreement with those produced using the traditional influence coefficient interpolation procedure, which gives confidence in this method. This paves the way for applying this procedure for more general flaw configurations.
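A bare-bones illustration of the response-surface idea described above (fit a polynomial surrogate to a sampled set of fracture solutions, then evaluate it cheaply in place of the detailed 3D model); the training data below come from an invented analytic stand-in, not from Grizzly or FAVOR.

```python
import numpy as np

rng = np.random.default_rng(6)

# "Training" samples: stress intensity factor K_I vs normalized flaw depth a/t and stress sigma.
# The generating function is an invented stand-in for detailed 3D finite-element results.
a_over_t = rng.uniform(0.05, 0.5, 200)
sigma = rng.uniform(50.0, 300.0, 200)              # MPa
k1 = 1.1 * sigma * np.sqrt(np.pi * a_over_t) * (1.0 + 0.4 * a_over_t)   # toy "truth"
k1 += 0.5 * rng.standard_normal(k1.shape)          # sampling noise

# Quadratic response surface in (a/t, sigma), fit by least squares.
def design(a, s):
    return np.column_stack([np.ones_like(a), a, s, a * s, a**2, s**2])

coef, *_ = np.linalg.lstsq(design(a_over_t, sigma), k1, rcond=None)

def rom_k1(a, s):
    """Fast surrogate evaluation of the stress intensity factor."""
    return design(np.atleast_1d(float(a)), np.atleast_1d(float(s))) @ coef

print("ROM K_I at a/t=0.3, sigma=200 MPa:", rom_k1(0.3, 200.0)[0])
print("toy 'truth' at the same point:   ", 1.1 * 200 * np.sqrt(np.pi * 0.3) * (1 + 0.4 * 0.3))
```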
Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa
2015-04-13
Current methods for the development of pelvic finite element (FE) models generally are based upon specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity-based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
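A minimal sketch of a landmark-based morphing step, here reduced to a least-squares affine transform from source landmarks to target landmarks applied to all source-mesh nodes; the non-rigid refinement and the boundary mapping used in the study are omitted, and all coordinates are invented.

```python
import numpy as np

def fit_affine(src_landmarks, tgt_landmarks):
    """Least-squares affine map (A, b) such that A @ x + b ~= y for landmark pairs (x, y)."""
    n = src_landmarks.shape[0]
    X = np.hstack([src_landmarks, np.ones((n, 1))])          # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, tgt_landmarks, rcond=None)    # shape (4, 3) for 3-D points
    return M[:3, :].T, M[3, :]

def morph(nodes, A, b):
    return nodes @ A.T + b

# Toy data: 6 "anatomical landmarks" on a source pelvis and their target positions (invented).
src = np.array([[0, 0, 0], [10, 0, 0], [0, 12, 0], [0, 0, 8], [10, 12, 0], [5, 6, 4]], float)
true_A = np.array([[1.05, 0.02, 0.0], [0.0, 0.95, 0.03], [0.01, 0.0, 1.1]])
true_b = np.array([2.0, -1.0, 0.5])
tgt = src @ true_A.T + true_b

A, b = fit_affine(src, tgt)
source_mesh_nodes = np.random.default_rng(7).uniform(0, 12, size=(1000, 3))
morphed_nodes = morph(source_mesh_nodes, A, b)                # mesh carried onto the target anatomy
print("recovered translation:", np.round(b, 2))
```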
A Eulerian-Lagrangian Model to Simulate Two-Phase/Particulate Flows
NASA Technical Reports Server (NTRS)
Apte, S. V.; Mahesh, K.; Lundgren, T.
2003-01-01
Figure 1 shows a snapshot of liquid fuel spray coming out of an injector nozzle in a realistic gas-turbine combustor. Here the spray atomization was simulated using a stochastic secondary breakup model (Apte et al. 2003a) with point-particle approximation for the droplets. Very close to the injector, it is observed that the spray density is large and the droplets cannot be treated as point-particles. The volume displaced by the liquid in this region is significant and can alter the gas-phase flow and spray evolution. In order to address this issue, one can compute the dense spray regime by an Eulerian-Lagrangian technique using advanced interface tracking/level-set methods (Sussman et al. 1994; Tryggvason et al. 2001; Herrmann 2003). This, however, is computationally intensive and may not be viable in realistic complex configurations. We therefore plan to develop a methodology based on an Eulerian-Lagrangian technique which will allow us to capture the essential features of primary atomization using models to capture interactions between the fluid and droplets and which can be directly applied to the standard atomization models used in practice. The numerical scheme for unstructured grids developed by Mahesh et al. (2003) for incompressible flows is modified to take into account the droplet volume fraction. The numerical framework is directly applicable to realistic combustor geometries. Our main objectives in this work are: (1) to develop a numerical formulation based on Eulerian-Lagrangian techniques, with models for the interaction terms between the fluid and particles, to capture the Kelvin-Helmholtz type instabilities observed during primary atomization; (2) to validate this technique for various two-phase and particulate flows; and (3) to assess its applicability to capturing primary atomization of liquid jets in conjunction with secondary atomization models.
Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception
2017-12-01
computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650's Intel... Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead
PNNL Data-Intensive Computing for a Smarter Energy Grid
Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria
2017-12-09
The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling the development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.
NASA Astrophysics Data System (ADS)
Brodyn, M. S.; Starkov, V. N.
2007-07-01
It is shown that in laser experiments performed with an 'imperfect' setup, when instrumental distortions are considerable, sufficiently accurate results can be obtained by modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a 'sister' of the Gaussian curve, is precisely the one demanded in laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured intensity distribution.
Application of contrast media in post-mortem imaging (CT and MRI).
Grabherr, Silke; Grimm, Jochen; Baumann, Pia; Mangin, Patrice
2015-09-01
The application of contrast media in post-mortem radiology differs from clinical approaches in living patients. Post-mortem changes in the vascular system and the absence of blood flow lead to specific problems that have to be considered for the performance of post-mortem angiography. In addition, interpreting the images is challenging due to technique-related and post-mortem artefacts that have to be known and that are specific for each applied technique. Although the idea of injecting contrast media is old, classic methods are not simply transferable to modern radiological techniques in forensic medicine, as they are mostly dedicated to single-organ studies or applicable only shortly after death. With the introduction of modern imaging techniques, such as post-mortem computed tomography (PMCT) and post-mortem magnetic resonance (PMMR), to forensic death investigations, intensive research started to explore their advantages and limitations compared to conventional autopsy. PMCT has already become a routine investigation in several centres, and different techniques have been developed to better visualise the vascular system and organ parenchyma in PMCT. In contrast, the use of PMMR is still limited due to practical issues, and research is now starting in the field of PMMR angiography. This article gives an overview of the problems in post-mortem contrast media application, the various classic and modern techniques, and the issues to consider by using different media.
Digital computer technique for setup and checkout of an analog computer
NASA Technical Reports Server (NTRS)
Ambaruch, R.
1968-01-01
Computer program technique, called Analog Computer Check-Out Routine Digitally /ACCORD/, generates complete setup and checkout data for an analog computer. In addition, the correctness of the analog program implementation is validated.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
Automated ambiguity estimation for VLBI Intensive sessions using L1-norm
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a space-geodetic technique that is uniquely capable of direct observation of the angle of the Earth's rotation about the Celestial Intermediate Pole (CIP) axis, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) provided by the 1-h long VLBI Intensive sessions are essential in providing timely UT1 estimates for satellite navigation systems and orbit determination. In order to produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This involves the automatic processing of X- and S-band group delays. These data contain an unknown number of integer ambiguities in the observed group delays. They are introduced as a side-effect of the bandwidth synthesis technique, which is used to combine correlator results from the narrow channels that span the individual bands. In an automated analysis with the c5++ software, the standard approach to resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimisation). We implement the L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions on the Kokee-Wettzell baseline. The results are compared to an analysis set-up where the ambiguity estimation is computed using the L2-norm. For both methods three different weighting strategies for the ambiguity estimation are assessed. The results show that the L1-norm is better at automatically resolving the ambiguities than the L2-norm. The use of the L1-norm leads to a significantly higher number of good-quality UT1-UTC estimates with each of the three weighting strategies. The increase in the number of sessions is approximately 5% for each weighting strategy. This is accompanied by smaller post-fit residuals in the final UT1-UTC estimation step.
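A self-contained sketch of why an L1-norm fit can be more robust than an L2-norm fit in this setting: least absolute deviations (here via iteratively reweighted least squares) is far less sensitive to the large, fixed-size outliers that unresolved ambiguities produce. The toy problem below is a straight-line fit with ambiguity-like jumps, not actual VLBI group-delay data or the c5++ implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "observations": a linear trend plus noise, with a few ambiguity-like jumps of fixed size.
n, jump = 60, 8.0
x = np.linspace(0.0, 1.0, n)
y = 1.5 + 3.0 * x + 0.05 * rng.standard_normal(n)
y[rng.choice(n, 6, replace=False)] += jump            # unresolved "ambiguities"

A = np.column_stack([np.ones(n), x])

def l2_fit(A, y):
    return np.linalg.lstsq(A, y, rcond=None)[0]

def l1_fit(A, y, iters=50, eps=1e-6):
    """Least absolute deviations via iteratively reweighted least squares."""
    beta = l2_fit(A, y)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - A @ beta), eps)
        Aw = A * w[:, None]
        beta = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)[0]
    return beta

print("true params:      [1.5, 3.0]")
print("L2-norm estimate:", np.round(l2_fit(A, y), 2))
print("L1-norm estimate:", np.round(l1_fit(A, y), 2))
```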
MR-guided adaptive focusing of therapeutic ultrasound beams in the human head
Marsac, Laurent; Chauvet, Dorian; Larrat, Benoît; Pernot, Mathieu; Robert, B.; Fink, Mathias; Boch, Anne-Laure; Aubry, Jean-François; Tanter, Mickaël
2012-01-01
Purpose: This study aims to demonstrate, using human cadavers, the feasibility of energy-based adaptive focusing of ultrasonic waves using Magnetic Resonance Acoustic Radiation Force Imaging (MR-ARFI) in the framework of non-invasive transcranial High Intensity Focused Ultrasound (HIFU) therapy. Methods: Energy-based adaptive focusing techniques were recently proposed in order to achieve aberration correction. We evaluate this method on a clinical brain HIFU system composed of 512 ultrasonic elements positioned inside a full-body 1.5 T clinical Magnetic Resonance (MR) imaging system. Cadaver heads were mounted onto a clinical Leksell stereotactic frame. The ultrasonic wave intensity at the chosen location was indirectly estimated by the MR system measuring the local tissue displacement induced by the acoustic radiation force of the ultrasound (US) beams. For aberration correction, a set of spatially encoded ultrasonic waves was transmitted from the ultrasonic array and the resulting local displacements were estimated with the MR-ARFI sequence for each emitted beam. A non-iterative inversion process was then performed in order to estimate the spatial phase aberrations induced by the cadaver skull. The procedure was first evaluated and optimized in a calf brain using a numerical aberrator mimicking human skull aberrations. The full method was then demonstrated using a fresh human cadaver head. Results: The corrected beam resulting from the direct inversion process was found to focus at the targeted location with an acoustic intensity 2.2 times higher than the conventional non-corrected beam. In addition, this corrected beam was found to give an acoustic intensity 1.5 times higher than that of the focusing pattern obtained with an aberration correction using transcranial acoustic simulation based on X-ray computed tomography (CT) scans. Conclusion: The proposed technique achieved near-optimal focusing in an intact human head for the first time. These findings confirm the strong potential of energy-based adaptive focusing of transcranial ultrasonic beams for clinical applications. PMID:22320825