Sample records for computationally intensive functions

  1. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

    It is shown that in laser experiments performed with an 'imperfect' setup, where instrumental distortions are considerable, sufficiently accurate results can still be obtained by modern methods of computational physics. It is found for the first time that a new instrumental function — the 'cap' function, a 'sister' of the Gaussian curve — is particularly well suited to laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice that of the measured one.

  2. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  3. Decision making and preferences for acoustic signals in choice situations by female crickets.

    PubMed

    Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias

    2015-08-01

    Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.
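
    The abstract reports that a simple computational model combining pattern and intensity cues reproduced the females' decisions, but does not spell the model out. The toy sketch below (Python) only illustrates how such cue weighting might be wired up: the logistic intensity weight, the drive-ratio choice rule, and every parameter value are hypothetical and are not taken from the paper.

      import numpy as np

      def choice_probability(pref_a, pref_b, level_a_db, level_b_db, k=0.3):
          # Hypothetical illustration: each song's "drive" is its pattern-preference
          # value scaled by a nonlinear (logistic) weight for its relative level;
          # the probability of approaching song A follows from the two drives.
          w_a = 1.0 / (1.0 + np.exp(-k * (level_a_db - level_b_db)))
          drive_a = pref_a * w_a
          drive_b = pref_b * (1.0 - w_a)
          return drive_a / (drive_a + drive_b)

      # Example: an attractive pattern (0.9) played 6 dB softer than a poor one (0.4)
      print(choice_probability(0.9, 0.4, level_a_db=60.0, level_b_db=66.0))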

  4. Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.

    PubMed

    Das, Biman; Drake, Eli; Jack, John

    2004-02-01

    Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum-entropy distribution under the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier, where under heavy gain saturation the total output approaches a constant intensity, although the intensity of any single mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments needed for this computation were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.

  5. Stress Intensity Factors for Cracking Metal Structures under Rapid Thermal Loading. Volume 2. Theoretical Background

    DTIC Science & Technology

    1989-08-01

    thermal pulse loadings. The work couples a Green's function integration technique for transient thermal stresses with the well-known influence function approach for calculating stress intensity factors. A total of seven of the most commonly used crack models were investigated in this study. A computer

  6. Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula

    PubMed Central

    Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian

    2017-01-01

    The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions either reflect stimulus intensity or additive effects of intensity and expectations. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas the posterior insula encoded stimulus intensity. This functional dissociation of pain processing within the insula, together with previously observed alterations in chronic pain, offers a new interpretation of aberrant pain processing as disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001 PMID:28524817

  7. A robust and fast active contour model for image segmentation with intensity inhomogeneity

    NASA Astrophysics Data System (ADS)

    Ding, Keyan; Weng, Guirong

    2018-04-01

    In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing local image intensity fitting functions before the curve evolution, the proposed model can effectively segment images with intensity inhomogeneity. The computational cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has higher segmentation efficiency than some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.

  8. Accuracy of RGD approximation for computing light scattering properties of diffusing and motile bacteria. [Rayleigh-Gans-Debye

    NASA Technical Reports Server (NTRS)

    Kottarchyk, M.; Chen, S.-H.; Asano, S.

    1979-01-01

    The study tests the accuracy of the Rayleigh-Gans-Debye (RGD) approximation against a rigorous scattering theory calculation for a simplified model of E. coli (about 1 micron in size) - a solid spheroid. A general procedure is formulated whereby the scattered field amplitude correlation function, for both polarized and depolarized contributions, can be computed for a collection of particles. An explicit formula is presented for the scattered intensity, both polarized and depolarized, for a collection of randomly diffusing or moving particles. Two specific cases for the intermediate scattering functions are considered: diffusing particles and freely moving particles with a Maxwellian speed distribution. The formalism is applied to microorganisms suspended in a liquid medium. Sensitivity studies revealed that for values of the relative index of refraction greater than 1.03, RGD could be in serious error in computing the intensity as well as correlation functions.

  9. Experimental Determination of the 1 Sigma(+) State Electric-Dipole-Moment Function of Carbon Monoxide up to a Large Internuclear Separation

    NASA Technical Reports Server (NTRS)

    Chackerian, C., Jr.; Farreng, R.; Guelachvili, G.; Rossetti, C.; Urban, W.

    1984-01-01

    Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least-squares fitting procedure to obtain the ground electronic state electric-dipole-moment function of carbon monoxide, valid over the range of nuclear oscillation (0.87 to 1.01 A) of about the v = 38th vibrational level. Mechanical anharmonicity intensity factors, H, are computed from this function for Delta v = 1, 2, 3, with v less than or equal to 38.

  10. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
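
    The abstract describes an iterative loop in which cell temperature and efficiency are solved together because each depends on the other. The sketch below is a minimal fixed-point version of that idea, assuming a simple linear efficiency derating with temperature and a lumped heat balance; the coefficients, the 1353 W/m^2 one-sun scaling, and the function names are illustrative placeholders rather than values from the study.

      def cell_operating_point(intensity_suns, t_env_k=300.0, eta_ref=0.14,
                               beta=0.0005, k_thermal=30.0, max_iter=50, tol=1e-6):
          # Fixed-point iteration: efficiency falls with temperature, temperature
          # rises with the absorbed (non-converted) intensity. Illustrative only.
          g = intensity_suns * 1353.0                            # W/m^2, one-sun scaling
          t_cell = t_env_k
          for _ in range(max_iter):
              eta = eta_ref * (1.0 - beta * (t_cell - 298.0))    # linear derating
              t_new = t_env_k + g * (1.0 - eta) / k_thermal      # simple heat balance
              if abs(t_new - t_cell) < tol:
                  break
              t_cell = t_new
          return t_cell, eta, eta * g   # temperature (K), efficiency, power density (W/m^2)

      print(cell_operating_point(3.0))  # e.g. a point under 3x concentration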

  11. Ab initio calculation of resonant Raman intensities of transition metal dichalcogenides

    NASA Astrophysics Data System (ADS)

    Miranda, Henrique; Reichardt, Sven; Molina-Sanchez, Alejandro; Wirtz, Ludger

    Raman spectroscopy is used to characterize optical and vibrational properties of materials. Its computational simulation is important for the interpretation of experimental results. Two common approaches are the bond polarizability model and density functional perturbation theory; however, both are known not to capture resonance effects. These resonances and quantum interference effects are important for correctly reproducing the intensities as a function of laser energy, as reported, e.g., for the case of multi-layer MoTe2 [1]. We present two fully ab initio approaches that overcome this limitation. In the first, we calculate finite-difference derivatives of the dielectric susceptibility with respect to the phonon displacements [2]. In the second, we calculate electron-light and electron-phonon matrix elements from density functional theory and use them to evaluate expressions for the Raman intensity derived from time-dependent perturbation theory. These expressions are implemented in a computer code that performs the calculations as a post-processing step. We compare both methods and study the case of triple-layer MoTe2. Luxembourg National Research Fund (FNR).
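
    The first approach mentioned, finite-difference derivatives of the dielectric susceptibility with respect to the phonon displacements, can be illustrated with a short sketch. It assumes that the susceptibility tensors at positive and negative displacements along one normal mode have already been computed by some ab initio code, and it assembles the central-difference derivative and the textbook orientation-averaged activity 45*a'^2 + 7*gamma'^2 from them; this is a generic construction, not the authors' implementation.

      import numpy as np

      def raman_activity(chi_plus, chi_minus, dq):
          # Central-difference derivative of the susceptibility tensor with respect
          # to a normal-mode displacement dq, then the standard powder activity.
          dchi = (np.asarray(chi_plus) - np.asarray(chi_minus)) / (2.0 * dq)
          a = np.trace(dchi) / 3.0                       # isotropic part
          g2 = 0.5 * ((dchi[0, 0] - dchi[1, 1]) ** 2 +
                      (dchi[1, 1] - dchi[2, 2]) ** 2 +
                      (dchi[2, 2] - dchi[0, 0]) ** 2 +
                      6.0 * (dchi[0, 1] ** 2 + dchi[1, 2] ** 2 + dchi[0, 2] ** 2))
          return 45.0 * a ** 2 + 7.0 * g2

      # Usage with placeholder symmetric tensors standing in for ab initio output
      rng = np.random.default_rng(0)
      chi0 = rng.normal(size=(3, 3)); chi0 = 0.5 * (chi0 + chi0.T)
      d = 0.01 * rng.normal(size=(3, 3)); d = 0.5 * (d + d.T)
      print(raman_activity(chi0 + d, chi0 - d, dq=0.01))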

  12. Differences in muscle load between computer and non-computer work among office workers.

    PubMed

    Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A

    2009-12-01

    Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.

  13. SU-F-J-94: Development of a Plug-in Based Image Analysis Tool for Integration Into Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, D; Anderson, C; Mayo, C

    Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response models. Supported by NIH - P01 - CA059827.

  14. Crosstalk Cancellation for a Simultaneous Phase Shifting Interferometer

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene (Inventor)

    2014-01-01

    A method of minimizing fringe print-through in a phase-shifting interferometer, includes the steps of: (a) determining multiple transfer functions of pixels in the phase-shifting interferometer; (b) computing a crosstalk term for each transfer function; and (c) displaying, to a user, a phase-difference map using the crosstalk terms computed in step (b). Determining a transfer function in step (a) includes measuring intensities of a reference beam and a test beam at the pixels, and measuring an optical path difference between the reference beam and the test beam at the pixels. Computing crosstalk terms in step (b) includes computing an N-dimensional vector, where N corresponds to the number of transfer functions, and the N-dimensional vector is obtained by minimizing a variance of a modulation function in phase shifted images.

  15. Variational Calculations of Ro-Vibrational Energy Levels and Transition Intensities for Tetratomic Molecules

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    A description is given of an algorithm for computing ro-vibrational energy levels for tetratomic molecules. The expressions required for evaluating transition intensities are also given. The variational principle is used to determine the energy levels and the kinetic energy operator is simple and evaluated exactly. The computational procedure is split up into the determination of one dimensional radial basis functions, the computation of a contracted rotational-bending basis, followed by a final variational step coupling all degrees of freedom. An angular basis is proposed whereby the rotational-bending contraction takes place in three steps. Angular matrix elements of the potential are evaluated by expansion in terms of a suitable basis and the angular integrals are given in a factorized form which simplifies their evaluation. The basis functions in the final variational step have the full permutation symmetries of the identical particles. Sample results are given for HCCH and BH3.

  16. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl component of the magnetic field intensity is computed from a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched between conductors within partitioned current-carrying subregions. The method is most suited for large-scale, global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.

  17. An independent software system for the analysis of dynamic MR images.

    PubMed

    Torheim, G; Lombardi, M; Rinck, P A

    1997-01-01

    A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis, and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
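
    Several of the curve parameters listed above (area under the curve, peak intensity, time to peak, wash-in slope) are straightforward to derive from a sampled time-intensity curve. The sketch below shows one plausible way to compute them; the baseline handling, parameter names, and synthetic example are assumptions for illustration, and a real tool would also offer, e.g., the gamma-variate fit mentioned in the abstract.

      import numpy as np

      def time_intensity_parameters(t, s, n_baseline=3):
          # Minimal sketch: baseline-subtract the curve, then read off a few of the
          # parameters named in the abstract. n_baseline pre-contrast samples are an
          # illustrative assumption, not the system's actual default.
          t = np.asarray(t, dtype=float)
          s = np.asarray(s, dtype=float)
          enh = s - s[:n_baseline].mean()
          i_peak = int(np.argmax(enh))
          auc = float(np.sum(0.5 * (enh[1:] + enh[:-1]) * np.diff(t)))   # trapezoid rule
          wash_in = (enh[i_peak] - enh[0]) / (t[i_peak] - t[0]) if i_peak > 0 else 0.0
          return {"auc": auc, "peak": float(enh[i_peak]),
                  "time_to_peak": float(t[i_peak] - t[0]), "wash_in_slope": float(wash_in)}

      # Example on a synthetic enhancement curve
      t = np.arange(0, 60, 2.0)
      s = 100 + 80 * (t / 20) ** 2 * np.exp(-t / 10)    # gamma-variate-like shape
      print(time_intensity_parameters(t, s))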

  18. Point spread function based classification of regions for linear digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Israni, Kenny; Avinash, Gopal; Li, Baojun

    2007-03-01

    In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited-angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered a linear tomosynthesis geometry and a simple back-projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human-observer-classified regions as the gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77 ± 8.44% and 91 ± 4.13%, respectively (t=-0.64, p=0.56, DF=4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.
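
    The four-way classification described above reduces to a simple rule on each pixel's intensity profile. The sketch below is a toy rendering of those rules; the flatness and background thresholds are arbitrary illustrative values, not the ones used in the paper.

      def classify_pixel(profile, pixel_value, low_max=0.1, flatness=0.05, tol=1e-3):
          # profile: PSF-weighted intensity profile across slices for one pixel;
          # pixel_value: its PSF-weighted intensity in the current slice.
          p_max, p_min = max(profile), min(profile)
          if p_max - p_min < flatness:                  # flat profile
              return "background" if p_max < low_max else "low_frequency"
          if abs(pixel_value - p_max) < tol:            # profile peaks at this pixel
              return "in_focus"
          return "out_of_focus"                         # peak lies in another slice

      print(classify_pixel([0.2, 0.5, 0.9, 0.5, 0.2], pixel_value=0.9))  # -> in_focus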

  19. Intensive training of motor function and functional skills among young children with cerebral palsy: a systematic review and meta-analysis.

    PubMed

    Tinderholt Myrhaug, Hilde; Østensjø, Sigrid; Larun, Lillebeth; Odgaard-Jensen, Jan; Jahnsen, Reidun

    2014-12-05

    Young children with cerebral palsy (CP) receive a variety of interventions to prevent and/or reduce activity limitations and participation restrictions. Some of these interventions are intensive, and it is a challenge to identify the optimal intensity. Therefore, the objective of this systematic review was to describe and categorise intensive motor function and functional skills training among young children with CP, to summarise the effects of these interventions, and to examine characteristics that may contribute to explaining the variations in these effects. Ten databases were searched for controlled studies that included young children (mean age less than seven years old) with CP and assessments of the effects of intensive motor function and functional skills training. The studies were critically assessed by the Risk of bias tool (RoB) and categorised for intensity and contexts of interventions. Standardised mean differences were computed for outcomes, and summarised descriptively or in meta-analyses. Thirty-eight studies were included. Studies that targeted gross motor function were fewer and older, with a lower frequency of training sessions over longer training periods, than studies that targeted hand function. Home training was most common in studies on hand function and functional skills, and often increased the amount of training. The effects of constraint induced movement therapy (CIMT) on hand function and functional skills were summarised in six meta-analyses, which supported the existing evidence of CIMT. In a majority of the included studies, equal improvements were identified between intensive intervention and conventional therapy or between two different intensive interventions. Different types of training, different intensities and different contexts between studies that targeted gross and fine motor function might explain some of the observed effect variations. Home training may increase the amount of training, but is less controllable. These factors may have contributed to the observed variations in the effectiveness of CIMT. Rigorous research on intensive gross motor training is needed. CRD42013004023.

  20. On the Fast Evaluation Method of Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Planetary Atmospheres in Thermal IR and Microwave

    NASA Technical Reports Server (NTRS)

    Ustinov, E. A.

    1999-01-01

    Evaluation of weighting functions in atmospheric remote sensing is usually the most computer-intensive part of the inversion algorithms. We present an analytic approach to the computation of temperature and mixing ratio weighting functions that is based on our previous results, but in which the resulting expressions use intermediate variables generated in the computation of the observable radiances themselves. Upwelling radiances at a given level in the atmosphere and atmospheric transmittances from space to that level are combined with local values of the total absorption coefficient and its components due to absorption by the atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with the evaluation of radiances, which substantially decreases the computer time required for evaluating the weighting functions. Implications for the nadir and limb viewing geometries are discussed.

  1. Anisotropic scattering of discrete particle arrays.

    PubMed

    Paul, Joseph S; Fu, Wai Chong; Dokos, Socrates; Box, Michael

    2010-05-01

    Far-field intensities of light scattered from a linear centro-symmetric array illuminated by a plane wave of incident light are estimated at a series of detector angles. The intensities are computed from the superposition of E-fields scattered by the individual array elements. An average scattering phase function is used to model the scattered fields of the individual array elements. The nature of scattering from the array is investigated using an image (theta-phi plot) of the far-field intensities computed at a series of locations obtained by rotating the detector angle from 0 degrees to 360 degrees, corresponding to each angle of incidence in the interval [0 degrees, 360 degrees]. The diffraction patterns observed in the theta-phi plot are compared with those for isotropic scattering. In the absence of prior information on the array geometry, the intensities corresponding to theta-phi pairs satisfying the Bragg condition are used to estimate the phase function. An algorithmic procedure is presented for this purpose and tested using synthetic data. The relative error between estimated and theoretical values of the phase function is shown to be determined by the mean spacing factor, the number of elements, and the far-field distance. An empirical relationship is presented to calculate the optimal far-field distance for a given specification of the percentage error.
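
    A minimal sketch of the construction described above: the far-field intensity is the coherent superposition of the element fields, each carrying a phase set by its position and the incidence/detection geometry, with an average scattering phase function supplying the per-element amplitude. The geometry convention, the Henyey-Greenstein function used as a stand-in phase function, and all numerical values are assumptions for illustration.

      import numpy as np

      def far_field_intensity(x_positions, theta_inc, theta_det, k, phase_func):
          # Coherent sum of fields scattered by the elements of a linear array.
          x = np.asarray(x_positions, dtype=float)
          amp = np.sqrt(phase_func(theta_det - theta_inc))           # per-element amplitude
          phases = k * x * (np.cos(theta_inc) - np.cos(theta_det))   # path differences
          return float(np.abs(np.sum(amp * np.exp(1j * phases))) ** 2)

      hg = lambda a, g=0.7: (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * np.cos(a)) ** 1.5)
      k = 2 * np.pi / 0.5                      # wavenumber for a 0.5 um wavelength
      x = np.arange(8) * 1.5                   # 8 elements spaced 1.5 um apart
      angles = np.linspace(0.0, 2 * np.pi, 181)
      theta_phi = np.array([[far_field_intensity(x, ti, td, k, hg) for td in angles]
                            for ti in angles]) # the theta-phi image described above
      print(theta_phi.shape)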

  2. Intensive Versus Distributed Aphasia Therapy: A Nonrandomized, Parallel-Group, Dosage-Controlled Study.

    PubMed

    Dignam, Jade; Copland, David; McKinnon, Eril; Burfein, Penni; O'Brien, Kate; Farrell, Anna; Rodriguez, Amy D

    2015-08-01

    Most studies comparing different levels of aphasia treatment intensity have not controlled the dosage of therapy provided. Consequently, the true effect of treatment intensity in aphasia rehabilitation remains unknown. Aphasia Language Impairment and Functioning Therapy is an intensive, comprehensive aphasia program. We investigated the efficacy of a dosage-controlled trial of Aphasia Language Impairment and Functioning Therapy, when delivered in an intensive versus distributed therapy schedule, on communication outcomes in participants with chronic aphasia. Thirty-four adults with chronic, poststroke aphasia were recruited to participate in an intensive (n=16; 16 hours per week; 3 weeks) versus distributed (n=18; 6 hours per week; 8 weeks) therapy program. Treatment included 48 hours of impairment, functional, computer, and group-based aphasia therapy. Distributed therapy resulted in significantly greater improvements on the Boston Naming Test when compared with intensive therapy immediately post therapy (P=0.04) and at 1-month follow-up (P=0.002). We found comparable gains on measures of participants' communicative effectiveness, communication confidence, and communication-related quality of life for the intensive and distributed treatment conditions at post-therapy and 1-month follow-up. Aphasia Language Impairment and Functioning Therapy resulted in superior clinical outcomes on measures of language impairment when delivered in a distributed versus intensive schedule. The therapy program had a positive effect on participants' functional communication and communication-related quality of life, regardless of treatment intensity. These findings contribute to our understanding of the effect of treatment intensity in aphasia rehabilitation and have important clinical implications for service delivery models. © 2015 American Heart Association, Inc.

  3. Analytic computation of energy derivatives - Relationships among partial derivatives of a variationally determined function

    NASA Technical Reports Server (NTRS)

    King, H. F.; Komornicki, A.

    1986-01-01

    Formulas are presented relating Taylor series expansion coefficients of three functions of several variables, the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.

  4. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparison with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a0 + a1*t + a2*t^2). The thinning programs are competitive in both execution
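
    The basic thinning algorithm referenced here (commonly attributed to Lewis and Shedler) is short enough to sketch: candidate event times are drawn from a homogeneous Poisson process whose rate dominates the target intensity, and each candidate is kept with probability equal to the ratio of the two rates. The code below is a generic illustration rather than the report's own programs; the quadratic-exponential intensity is the form cited in the abstract.

      import math
      import random

      def thin_nhpp(rate, rate_max, t_end, seed=None):
          # Basic thinning: homogeneous candidates at rate_max >= rate(t) on
          # [0, t_end]; keep each candidate with probability rate(t) / rate_max.
          rng = random.Random(seed)
          t, events = 0.0, []
          while True:
              t += rng.expovariate(rate_max)          # next candidate arrival
              if t > t_end:
                  return events
              if rng.random() <= rate(t) / rate_max:
                  events.append(t)

      # Intensity of the form exp(a0 + a1*t + a2*t^2), as in the abstract
      a0, a1, a2 = 0.0, 0.5, -0.05
      rate = lambda t: math.exp(a0 + a1 * t + a2 * t * t)
      rate_max = rate(-a1 / (2 * a2))                 # maximum lies at the vertex of the exponent
      print(len(thin_nhpp(rate, rate_max, t_end=10.0, seed=1)))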

  5. Characterization of solar cells for space applications. Volume 5: Electrical characteristics of OCLI 225-micron MLAR wraparound cells as a function of intensity, temperature, and irradiation

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Miyahira, T. F.; Weiss, R. S.

    1979-01-01

    Computed statistical averages and standard deviations with respect to the measured cells for each intensity-temperature measurement condition are presented. Averages and standard deviations of the cell characteristics are displayed in a two-dimensional array format: one dimension representing incoming light intensity and the other the cell temperature. Programs for calculating the temperature coefficients of the pertinent cell electrical parameters are presented, and post-irradiation data are summarized.

  6. Paranoia.Ada: A diagnostic program to evaluate Ada floating-point arithmetic

    NASA Technical Reports Server (NTRS)

    Hjermstad, Chris

    1986-01-01

    Many essential software functions in the mission critical computer resource application domain depend on floating point arithmetic. Numerically intensive functions associated with the Space Station project, such as ephemeris generation or the implementation of Kalman filters, are likely to employ the floating point facilities of Ada. Paranoia.Ada appears to be a valuable program to ensure that Ada environments and their underlying hardware exhibit the precision and correctness required to satisfy mission computational requirements. As a diagnostic tool, Paranoia.Ada reveals many essential characteristics of an Ada floating point implementation. Equipped with such knowledge, programmers need not tremble before the complex task of floating point computation.

  7. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups in either PPTs or pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  8. Structural study, NCA, FT-IR, FT-Raman spectral investigations, NBO analysis, thermodynamic functions of N-acetyl-l-phenylalanine.

    PubMed

    Raja, B; Balachandran, V; Revathi, B

    2015-03-05

    The FT-IR and FT-Raman spectra of N-acetyl-l-phenylalanine were recorded and analyzed. Natural bond orbital analysis has been carried out for the various intramolecular interactions that are responsible for the stabilization of the molecule. The HOMO-LUMO energy gap has been computed with the help of density functional theory. The statistical thermodynamic functions (heat capacity, entropy, vibrational partition function and Gibbs energy) were obtained over the temperature range 100-1000 K. The polarizability, first hyperpolarizability, and anisotropy polarizability invariant have been computed using quantum chemical calculations. The infrared and Raman spectra were also predicted from the calculated intensities. Comparison of the experimental and theoretical spectral values provides important information about the ability of the computational method to describe the vibrational modes. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
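
    The conventional recipe this abstract improves on, fitting a decreasing parametric function to per-section mean intensities and turning the fit into per-section correction factors, can itself be sketched in a few lines. The version below uses an exponential decay and an iteratively reweighted least-squares loop that downweights sections deviating from the model, loosely in the spirit of the robust estimation the authors advocate; the weight function and all constants are illustrative assumptions, not the paper's estimator.

      import numpy as np

      def depth_correction_factors(section_means, n_iter=10):
          # Fit I(z) = I0 * exp(-k * z) to section mean intensities (log-linear fit)
          # with robust reweighting, then return multiplicative correction factors.
          z = np.arange(len(section_means), dtype=float)
          y = np.log(np.asarray(section_means, dtype=float))
          w = np.ones_like(y)
          A = np.vstack([np.ones_like(z), -z]).T
          for _ in range(n_iter):
              coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
              log_i0, k = coef
              resid = y - (log_i0 - k * z)
              s = 1.4826 * np.median(np.abs(resid)) + 1e-12    # robust scale (MAD)
              w = 1.0 / (1.0 + (resid / (2.0 * s)) ** 2)       # Cauchy-type downweighting
          fitted = np.exp(log_i0 - k * z)
          return fitted[0] / fitted                            # factor per section

      # Example with one deviating ("bleached" or bright) section at index 3
      print(depth_correction_factors([100.0, 82.0, 66.0, 95.0, 44.0, 36.0]))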

  10. Development of computer games for assessment and training in post-stroke arm telerehabilitation.

    PubMed

    Rodriguez-de-Pablo, Cristina; Perry, Joel C; Cavallaro, Francesca I; Zabaleta, Haritz; Keller, Thierry

    2012-01-01

    Stroke is the leading cause of long term disability among adults in industrialized nations. The majority of these disabilities include deficiencies in arm function, which can make independent living very difficult. Research shows that better results in rehabilitation are obtained when patients receive more intensive therapy. However this intensive therapy is currently too expensive to be provided by the public health system, and at home few patients perform the repetitive exercises recommended by their therapists. Computer games can provide an affordable, enjoyable, and effective way to intensify treatment, while keeping the patient as well as their therapists informed about their progress. This paper presents the study, design, implementation and user-testing of a set of computer games for at-home assessment and training of upper-limb motor impairment after stroke.

  11. Statistics of some atmospheric turbulence records relevant to aircraft response calculations

    NASA Technical Reports Server (NTRS)

    Mark, W. D.; Fischer, R. W.

    1981-01-01

    Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectra density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.

  12. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  13. Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.

    PubMed

    Banakh, Victor A; Marakasov, Dimitrii A

    2007-10-01

    Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. The equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. The algorithms of wind profile retrieval from the spatiotemporal intensity spectrum are described and the results of end-to-end computer experiments on wind profiling based on the developed algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from the turbulent optical beam intensity fluctuations with acceptable accuracy in many practically feasible laser measurements set up in the atmosphere.

  14. Robust active contour via additive local and global intensity information based on local entropy

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Monkam, Patrice; Zhang, Feng; Luan, Fangjun; Koomson, Ben Alfred

    2018-01-01

    Active contour-based image segmentation can be a very challenging task due to many factors such as high intensity inhomogeneity, presence of noise, complex shapes, objects with weak boundaries, and dependence on the position of the initial contour. We propose a level set-based active contour method to segment complex shape objects from images corrupted by noise and high intensity inhomogeneity. The energy function of the proposed method results from combining global intensity information and local intensity information with some regularization factors. First, the global intensity term is proposed based on a formulation that considers two intensity values for each region instead of one, which outperforms the well-known Chan-Vese model in delineating the image information. Second, the local intensity term is formulated based on local entropy computed from the distribution of the image brightness, using the generalized Gaussian distribution as the kernel function. Therefore, it can accurately handle high intensity inhomogeneity and noise. Moreover, our model does not depend on the position occupied by the initial curve. Finally, extensive experiments using various images have been carried out to illustrate the performance of the proposed method.

  15. Computer Simulation for Calculating the Second-Order Correlation Function of Classical and Quantum Light

    ERIC Educational Resources Information Center

    Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.

    2011-01-01

    We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
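
    The simulated quantity, the normalized second-order correlation g2(tau) = <I(t)I(t+tau)> / <I>^2, can be estimated directly from a sampled intensity record. The sketch below is a generic estimator together with a quick check on bunched (chaotic) light built from a complex Gaussian field, for which g2(0) should come out close to 2; it is not the article's own algorithm.

      import numpy as np

      def g2(intensity, max_lag):
          # <I(t) I(t+tau)> / <I>^2 for integer lags 0..max_lag
          i = np.asarray(intensity, dtype=float)
          mean_sq = i.mean() ** 2
          return np.array([np.mean(i[:len(i) - k] * i[k:]) / mean_sq
                           for k in range(max_lag + 1)])

      # Bunched light: intensity as the squared modulus of a complex Gaussian field
      rng = np.random.default_rng(0)
      field = rng.normal(size=100_000) + 1j * rng.normal(size=100_000)
      print(g2(np.abs(field) ** 2, 5)[0])   # approximately 2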

  16. QUEST - A Bayesian adaptive psychometric method

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Pelli, D. G.

    1983-01-01

    An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
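
    A bare-bones version of the procedure is easy to write down: keep a posterior over the threshold on a log-intensity grid, place each trial at the current posterior mode, and update the posterior with a fixed-shape psychometric function. The sketch below uses a Weibull function of log intensity, but the grid, prior, parameter values, and stopping rule are illustrative choices rather than the original QUEST implementation.

      import numpy as np

      def run_quest(true_threshold, n_trials=40, beta=3.5, gamma=0.5, delta=0.01, seed=0):
          rng = np.random.default_rng(seed)
          grid = np.linspace(-2.0, 2.0, 401)             # candidate thresholds (log units)
          log_post = -0.5 * grid ** 2                    # weak Gaussian prior

          def p_correct(x, thresh):                      # Weibull in log intensity
              return delta * gamma + (1 - delta) * (1 - (1 - gamma) * np.exp(-10 ** (beta * (x - thresh))))

          for _ in range(n_trials):
              x = grid[np.argmax(log_post)]              # test at the posterior mode
              correct = rng.random() < p_correct(x, true_threshold)
              p = p_correct(x, grid)
              log_post += np.log(p if correct else 1 - p)
          return grid[np.argmax(log_post)]               # final threshold estimate

      print(run_quest(true_threshold=0.3))               # simulated observer, 40 trials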

  17. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA method.

  18. Exact analytical formulae for linearly distributed vortex and source sheets influence computation in 2D vortex methods

    NASA Astrophysics Data System (ADS)

    Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.

    2017-11-01

    We consider the methodology of numerical scheme development for the two-dimensional vortex method. We describe two different approaches to deriving the integral equation for the unknown vortex sheet intensity. We simulate the velocity on the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume the intensity distributions of the free and attached vortex sheets and the attached source sheet to be approximated with piecewise-constant or piecewise-linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have different computational cost. The study shows that a Galerkin-type approach to solving the boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to significantly raise the accuracy of the vortex sheet intensity computation and improve the quality of the velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written in invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.

  19. Intensity-based masking: A tool to improve functional connectivity results of resting-state fMRI.

    PubMed

    Peer, Michael; Abboud, Sami; Hertz, Uri; Amedi, Amir; Arzy, Shahar

    2016-07-01

    Seed-based functional connectivity (FC) of resting-state functional MRI data is a widely used methodology, enabling the identification of functional brain networks in health and disease. Based on signal correlations across the brain, FC measures are highly sensitive to noise. A somewhat neglected source of noise is the fMRI signal attenuation found in cortical regions in close vicinity to sinuses and air cavities, mainly in the orbitofrontal, anterior frontal and inferior temporal cortices. BOLD signal recorded at these regions suffers from dropout due to susceptibility artifacts, resulting in an attenuated signal with reduced signal-to-noise ratio in as many as 10% of cortical voxels. Nevertheless, signal attenuation is largely overlooked during FC analysis. Here we first demonstrate that signal attenuation can significantly influence FC measures by introducing false functional correlations and diminishing existing correlations between brain regions. We then propose a method for the detection and removal of the attenuated signal ("intensity-based masking") by fitting a Gaussian-based model to the signal intensity distribution and calculating an intensity threshold tailored per subject. Finally, we apply our method on real-world data, showing that it diminishes false correlations caused by signal dropout, and significantly improves the ability to detect functional networks in single subjects. Furthermore, we show that our method increases inter-subject similarity in FC, enabling reliable distinction of different functional networks. We propose to include the intensity-based masking method as a common practice in the pre-processing of seed-based functional connectivity analysis, and provide software tools for the computation of intensity-based masks on fMRI data. Hum Brain Mapp 37:2407-2418, 2016. © 2016 Wiley Periodicals, Inc.
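
    The core of the masking idea, model the bulk of the voxel intensity distribution, derive a per-subject cutoff, and exclude low-intensity (dropout) voxels before seed-based FC, can be sketched very simply. The version below substitutes a robust median/MAD Gaussian approximation for the paper's explicit Gaussian-model fit, so the threshold rule and its parameter are assumptions for illustration only.

      import numpy as np

      def intensity_mask(mean_epi, n_sigma=2.0):
          # Flag voxels whose mean EPI intensity falls more than n_sigma robust
          # standard deviations below the centre of the distribution as dropout.
          vals = np.asarray(mean_epi, dtype=float).ravel()
          vals = vals[vals > 0]                           # ignore out-of-brain zeros
          centre = np.median(vals)
          sigma = 1.4826 * np.median(np.abs(vals - centre))
          return np.asarray(mean_epi) > centre - n_sigma * sigma   # True = keep voxel

      # Toy check: 10% of voxels with attenuated signal are flagged
      rng = np.random.default_rng(0)
      epi = np.concatenate([rng.normal(1000, 50, 9000), rng.normal(300, 50, 1000)])
      print(intensity_mask(epi).mean())                   # fraction of voxels kept, ~0.9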

  20. The effect of El Chichon on the intensity and polarization of skylight at Mauna Loa

    NASA Technical Reports Server (NTRS)

    King, M. D.; Fraser, R. S.

    1983-01-01

    An empirical model of the stratospheric aerosol over Mauna Loa Observatory (MLO) has been developed, in order to study the effect of aerosol particles from the eruption of the El Chichon volcano on the intensity and degree of skylight polarization. The modeling computations were based on measurements of monthly mean optical thickness for July 1982, together with lidar measurements of the vertical distribution of aerosols. On the basis of the theoretical computations, it is shown that the number and location of polarization neutral points, and the location and magnitude of the peak polarization were both functions of the relatively narrow distribution of aerosol particle size near 0.4 microns.

  1. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  2. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  3. Data communication network at the ASRM facility

    NASA Astrophysics Data System (ADS)

    Moorhead, Robert J., II; Smith, Wayne D.

    1993-08-01

    This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi as of today. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, and the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.

  4. Data communication network at the ASRM facility

    NASA Technical Reports Server (NTRS)

    Moorhead, Robert J., II; Smith, Wayne D.

    1993-01-01

    This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi as of today. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, and the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.

  5. Probability density function of the intensity of a laser beam propagating in the maritime environment.

    PubMed

    Korotkova, Olga; Avramov-Zamurovic, Svetlana; Malek-Madani, Reza; Nelson, Charles

    2011-10-10

    A number of field experiments measuring the fluctuating intensity of a laser beam propagating along horizontal paths in the maritime environment are performed over sub-kilometer distances at the United States Naval Academy. Both above-ground and over-water links are explored. Two different detection schemes, one photographing the beam on a white board and the other capturing the beam directly using a CCD sensor, gave consistent results. The probability density function (pdf) of the fluctuating intensity is reconstructed with the help of two theoretical models, the Gamma-Gamma and the Gamma-Laguerre, and compared with the intensity's histograms. It is found that the on-ground experimental results are in good agreement with theoretical predictions. The results obtained above the water paths lead to appreciable discrepancies, especially in the case of the Gamma-Gamma model. These discrepancies are attributed to the presence of various scatterers along the path of the beam, such as water droplets, aerosols and other airborne particles. Our paper's main contribution is providing a methodology for computing the pdf of the laser beam intensity in the maritime environment using field measurements.
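
    Of the two theoretical models used to reconstruct the pdf, the Gamma-Gamma distribution has a standard closed form for the normalized (unit-mean) intensity, p(I) = 2 (alpha*beta)^((alpha+beta)/2) / (Gamma(alpha) Gamma(beta)) * I^((alpha+beta)/2 - 1) * K_(alpha-beta)(2*sqrt(alpha*beta*I)). The sketch below simply evaluates that textbook expression with SciPy; the alpha and beta values in the example are arbitrary, and the Gamma-Laguerre reconstruction is not reproduced here.

      import numpy as np
      from scipy.special import gamma as G, kv

      def gamma_gamma_pdf(i, alpha, beta):
          # Gamma-Gamma density for unit-mean intensity; alpha and beta are the
          # effective large- and small-scale scintillation parameters.
          i = np.asarray(i, dtype=float)
          coef = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (G(alpha) * G(beta))
          return coef * i ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * i))

      # Sanity check: the density should integrate to approximately 1
      x = np.linspace(1e-4, 10.0, 20000)
      print(np.sum(gamma_gamma_pdf(x, alpha=4.0, beta=2.0)) * (x[1] - x[0]))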

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    De La Pierre, Marco; Maschio, Lorenzo; Orlando, Roberto

    Powder and single crystal Raman spectra of the two most common phases of calcium carbonate are calculated with ab initio techniques (using a “hybrid” functional and a Gaussian-type basis set) and measured both at 80 K and room temperature. Frequencies of the Raman modes are in very good agreement between calculations and experiments: the mean absolute deviation at 80 K is 4 and 8 cm⁻¹ for calcite and aragonite, respectively. As regards intensities, the agreement is in general good, although the computed values overestimate the measured ones in many cases. The combined analysis permits the identification of almost all the fundamental experimental Raman peaks of the two compounds, with the exception of modes with zero computed intensity or modes overlapping with more intense peaks. Additional peaks have been identified in both calcite and aragonite, which have been assigned to ¹⁸O satellite modes or overtones. The agreement between the computed and measured spectra is quite satisfactory; in particular, the simulations make it possible to clearly distinguish between calcite and aragonite in the case of powder spectra, and among different polarization directions of each compound in the case of single crystal spectra.

  7. Algal culture studies for CELSS

    NASA Technical Reports Server (NTRS)

    Radmer, R.; Behrens, P.; Arnett, K.; Gladue, R.; Cox, J.; Lieberman, D.

    1987-01-01

    Microalgae are well-suited as a component of a Closed Environmental Life Support System (CELSS), since they can couple the closely related functions of food production and atmospheric regeneration. The objective was to provide a basis for predicting the response of CELSS algal cultures, and thus the food supply and air regeneration system, to changes in the culture parameters. Scenedesmus growth was measured as a function of light intensity, and the spectral dependence of light absorption by the algae as well as algal respiration in the light were determined as a function of cell concentration. These results were used to test and confirm a mathematical model that describes the productivity of an algal culture in terms of the competing processes of photosynthesis and respiration. The relationship of algal productivity to cell concentration was determined at different carbon dioxide concentrations, temperatures, and light intensities. The maximum productivity achieved by an air-grown culture was found to be within 10% of the computed maximum productivity, indicating that CO2 was very efficiently removed from the gas stream by the algal culture. Measurements of biomass productivity as a function of cell concentration at different light intensities indicated that both the productivity and efficiency of light utilization were greater at higher light intensities.

  8. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.
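    A minimal sketch of the projection-based idea described above, assuming a generic discretized per-ordinate transport system; the helper names assemble_A and assemble_b are hypothetical placeholders for that assembly, and the snapshot/Galerkin steps shown are a textbook reduced-order-model construction rather than the authors' specific implementation.

      import numpy as np

      # Hypothetical helpers (assumptions, not from the paper): assemble the discretized
      # transport operator A(s_hat) and source b(s_hat) for one ordinate direction s_hat.

      def pod_basis(snapshots, r):
          """Reduced basis from high-fidelity intensities; one column per sampled ordinate."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          return U[:, :r]

      def reduced_intensity(assemble_A, assemble_b, basis, s_hat):
          """Galerkin reduced-order solve for one remaining quadrature direction."""
          A = assemble_A(s_hat)
          b = assemble_b(s_hat)
          Ar = basis.T @ (A @ basis)               # r x r reduced operator
          br = basis.T @ b
          return basis @ np.linalg.solve(Ar, br)   # approximate full-order intensity

      # Angular integration with high-order quadrature weights w_q over directions s_q:
      # G = sum(w * reduced_intensity(assemble_A, assemble_b, basis, s) for w, s in zip(w_q, s_q))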

  9. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin; Larsen, Marvin

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  10. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  11. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  12. Computational physics of the mind

    NASA Astrophysics Data System (ADS)

    Duch, Włodzisław

    1996-08-01

    In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance not only to answer the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited by the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neuroscience, to the mind, or cognitive science, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

  13. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for estimating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
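    The record describes a SAS routine; the sketch below illustrates the same resampling idea in Python rather than SAS, assuming a generic one-dimensional sample and an arbitrary statistic.

      import numpy as np

      def bootstrap_se(data, stat, n_boot=2000, seed=0):
          """Bootstrap standard error of an arbitrary statistic (illustrative sketch)."""
          rng = np.random.default_rng(seed)
          data = np.asarray(data)
          reps = np.empty(n_boot)
          for b in range(n_boot):
              resample = rng.choice(data, size=len(data), replace=True)  # sample with replacement
              reps[b] = stat(resample)
          return reps.std(ddof=1)

      # Example: standard error of the median of a small sample
      # se_median = bootstrap_se(np.array([3.1, 2.4, 5.0, 4.2, 3.8]), np.median)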

  14. Experimental determination of the 1 Sigma(+) state electric dipole moment function of carbon monoxide up to a large internuclear separation

    NASA Technical Reports Server (NTRS)

    Chackerian, C., Jr.; Farrenq, R.; Guelachvili, G.; Rossetti, C.; Urban, W.

    1984-01-01

    Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least-squares fitting procedure to obtain the ground electronic state electric dipole moment function of carbon monoxide, valid over the range of nuclear oscillation (0.87-1.91 A) corresponding to roughly the V = 38 vibrational level. Vibrational transition matrix elements are computed from this function for Delta V = 1, 2, 3 with V not more than 38.
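    A sketch of the kind of nonlinear least-squares fit described above, assuming vibrational wavefunctions tabulated on a radial grid and measured transition matrix element magnitudes; the polynomial form of the dipole function, the variable names, and the equilibrium bond length used for centering are illustrative assumptions, not the paper's actual parameterization.

      import numpy as np
      from scipy.optimize import least_squares

      r_e = 1.128   # approximate CO equilibrium bond length in angstroms (assumption)

      # Assumed inputs (illustrative): `r` is the radial grid, `psi[v]` holds normalized
      # vibrational wavefunctions on that grid, and `targets` maps (v_upper, v_lower)
      # to measured |matrix element| magnitudes derived from band intensities.

      def matrix_element(coeffs, psi_upper, psi_lower, r):
          """<v'| M(r) |v> with M(r) modeled as a polynomial in (r - r_e)."""
          M = np.polyval(coeffs, r - r_e)
          return np.trapz(psi_upper * M * psi_lower, r)

      def residuals(coeffs, psi, targets, r):
          return [abs(matrix_element(coeffs, psi[vu], psi[vl], r)) - val
                  for (vu, vl), val in targets.items()]

      # fit = least_squares(residuals, x0=np.zeros(4), args=(psi, targets, r))
      # dipole_coeffs = fit.x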

  15. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    PubMed

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a wide-spread tool for representing and simulating biological models. Due to their potential of providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.

  16. nMoldyn: a program package for a neutron scattering oriented analysis of molecular dynamics simulations.

    PubMed

    Róg, T; Murzyn, K; Hinsen, K; Kneller, G R

    2003-04-15

    We present a new implementation of the program nMoldyn, which has been developed for the computation and decomposition of neutron scattering intensities from Molecular Dynamics trajectories (Comp. Phys. Commun 1995, 91, 191-214). The new implementation extends the functionality of the original version, provides a much more convenient user interface (both graphical/interactive and batch), and can be used as a tool set for implementing new analysis modules. This was made possible by the use of a high-level language, Python, and of modern object-oriented programming techniques. The quantities that can be calculated by nMoldyn are the mean-square displacement, the velocity autocorrelation function as well as its Fourier transform (the density of states) and its memory function, the angular velocity autocorrelation function and its Fourier transform, the reorientational correlation function, and several functions specific to neutron scattering: the coherent and incoherent intermediate scattering functions with their Fourier transforms, the memory function of the coherent scattering function, and the elastic incoherent structure factor. The possibility of computing memory functions is a new and powerful feature that makes it possible to relate simulation results to theoretical studies. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 657-667, 2003
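    A minimal sketch of two of the quantities listed above, the velocity autocorrelation function and its Fourier transform (a simple density-of-states estimate), assuming a velocity trajectory array; this is a generic FFT-based estimate, not nMoldyn's implementation.

      import numpy as np

      def velocity_autocorrelation(vel):
          """Normalized VACF from velocities shaped (n_frames, n_atoms, 3)."""
          n = vel.shape[0]
          v = vel.reshape(n, -1)
          f = np.fft.rfft(v, n=2 * n, axis=0)                    # zero-padded FFT per component
          acf = np.fft.irfft(f * np.conj(f), axis=0)[:n].sum(axis=1)
          acf /= (n - np.arange(n))                              # unbiased normalization by overlap
          return acf / acf[0]

      def density_of_states(vacf, dt):
          """Fourier transform of the VACF (simple vibrational DOS estimate)."""
          freqs = np.fft.rfftfreq(len(vacf), dt)
          dos = np.abs(np.fft.rfft(vacf))
          return freqs, dos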

  17. Early intensive hand rehabilitation after spinal cord injury ("Hands On"): a protocol for a randomised controlled trial.

    PubMed

    Harvey, Lisa A; Dunlop, Sarah A; Churilov, Leonid; Hsueh, Ya-Seng Arthur; Galea, Mary P

    2011-01-17

    Loss of hand function is one of the most devastating consequences of spinal cord injury. Intensive hand training provided on an instrumented exercise workstation in conjunction with functional electrical stimulation may enhance neural recovery and hand function. The aim of this trial is to compare usual care with an 8-week program of intensive hand training and functional electrical stimulation. A multicentre randomised controlled trial will be undertaken. Seventy-eight participants with recent tetraplegia (C2 to T1 motor complete or incomplete) undergoing inpatient rehabilitation will be recruited from seven spinal cord injury units in Australia and New Zealand and will be randomised to a control or experimental group. Control participants will receive usual care. Experimental participants will receive usual care and an 8-week program of intensive unilateral hand training using an instrumented exercise workstation and functional electrical stimulation. Participants will drive the functional electrical stimulation of their target hands via a behind-the-ear bluetooth device, which is sensitive to tooth clicks. The bluetooth device will enable the use of various manipulanda to practice functional activities embedded within computer-based games and activities. Training will be provided for one hour, 5 days per week, during the 8-week intervention period. The primary outcome is the Action Research Arm Test. Secondary outcomes include measurements of strength, sensation, function, quality of life and cost effectiveness. All outcomes will be taken at baseline, 8 weeks, 6 months and 12 months by assessors blinded to group allocation. Recruitment commenced in December 2009. The results of this trial will determine the effectiveness of an 8-week program of intensive hand training with functional electrical stimulation. NCT01086930 (12th March 2010); ACTRN12609000695202 (12th August 2009).

  18. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled the creation of unprecedented applications on the web. The low performance of the web browser, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulation and image processing, when compared with native applications. The newly proposed language is built upon web workers for multithreaded programming in HTML5. The language provides the fundamental functionality of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.

  19. Computing moment to moment BOLD activation for real-time neurofeedback

    PubMed Central

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
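    A simplified sketch of the feedback statistic described above, assuming a single-voxel (or ROI-averaged) time series and a nuisance design matrix; for clarity it refits the GLM at every acquisition rather than using the incremental update of the published method, and it scales the residual by the running residual standard deviation as a stand-in for the estimator-variance scaling.

      import numpy as np

      def neurofeedback_stat(y, X):
          """For each new acquisition, fit a GLM to the data seen so far (nuisance
          regressors in X, e.g. constant + linear drift), predict the new sample,
          and convert the residual into a z-like feedback statistic."""
          stats = np.zeros(len(y))
          for t in range(X.shape[1] + 2, len(y)):        # need enough samples to fit
              beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
              expected = X[t] @ beta
              resid_sd = np.std(y[:t] - X[:t] @ beta, ddof=X.shape[1])
              stats[t] = (y[t] - expected) / resid_sd
          return stats

      # X could be np.column_stack([np.ones(n), np.arange(n)]) for mean + linear drift.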

  20. Effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders among computer workers: a randomized controlled trial.

    PubMed

    Esmaeilzadeh, Sina; Ozcan, Emel; Capan, Nalan

    2014-01-01

    The aim of the study was to determine effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders (WUEMSDs) among computer workers. Four hundred computer workers answered a questionnaire on work-related upper extremity musculoskeletal symptoms (WUEMSS). Ninety-four subjects with WUEMSS using computers at least 3 h a day participated in a prospective, randomized controlled 6-month intervention. Body posture and workstation layouts were assessed by the Ergonomic Questionnaire. We used the Visual Analogue Scale to assess the intensity of WUEMSS. The Upper Extremity Function Scale was used to evaluate functional limitations at the neck and upper extremities. Health-related quality of life was assessed with the Short Form-36. After baseline assessment, those in the intervention group participated in a multicomponent ergonomic intervention program including a comprehensive ergonomic training consisting of two interactive sessions, an ergonomic training brochure, and workplace visits with workstation adjustments. Follow-up assessment was conducted after 6 months. In the intervention group, body posture (p < 0.001) and workstation layout (p = 0.002) improved over 6 months; furthermore, intensity (p < 0.001), duration (p < 0.001), and frequency (p = 0.009) of WUEMSS decreased significantly in the intervention group compared with the control group. Additionally, the functional status (p = 0.001), and physical (p < 0.001), and mental (p = 0.035) health-related quality of life improved significantly compared with the controls. There was no improvement of work day loss due to WUEMSS (p > 0.05). Ergonomic intervention programs may be effective in reducing ergonomic risk factors among computer workers and consequently in the secondary prevention of WUEMSDs.

  1. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  2. Safety Metrics for Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  3. The estimation of pointing angle and normalized surface scattering cross section from GEOS-3 radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Brown, G. S.; Curry, W. J.

    1977-01-01

    The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal to noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar measurements and on the waveform attitude-induced altitude bias is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive-mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.

  4. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.

  5. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometrically regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we study the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
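    As a concrete illustration of an estimating-function fit (and only of the simplest, first-order case, not the pairwise or quasi-likelihood machinery discussed above), the sketch below solves the Poisson-score estimating equation for a log-linear intensity, assuming covariates evaluated at the observed points and on a regular quadrature grid; all variable names are illustrative.

      import numpy as np
      from scipy.optimize import fsolve

      # First-order estimating function for a log-linear intensity
      #   lambda(u; theta) = exp(theta^T z(u))
      # z_points: covariates at observed event locations, shape (n_events, p)
      # z_grid:   covariates at quadrature points covering the window, shape (m, p)
      # cell_area: area represented by each quadrature point (assumed regular grid)

      def estimating_function(theta, z_points, z_grid, cell_area):
          lam = np.exp(z_grid @ theta)
          return z_points.sum(axis=0) - cell_area * (z_grid * lam[:, None]).sum(axis=0)

      # theta_hat = fsolve(estimating_function, x0=np.zeros(z_points.shape[1]),
      #                    args=(z_points, z_grid, cell_area))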

  6. Force fields and scoring functions for carbohydrate simulation.

    PubMed

    Xiong, Xiuming; Chen, Zhaoqiang; Cossins, Benjamin P; Xu, Zhijian; Shao, Qiang; Ding, Kai; Zhu, Weiliang; Shi, Jiye

    2015-01-12

    Carbohydrate dynamics plays a vital role in many biological processes, but we are not currently able to probe this with experimental approaches. The highly flexible nature of carbohydrate structures differs in many aspects from other biomolecules, posing significant challenges for studies employing computational simulation. Over the past decades, computational study of carbohydrates has been focused on the development of structure prediction methods, force field optimization, molecular dynamics simulation, and scoring functions for carbohydrate-protein interactions. Advances in carbohydrate force fields and scoring functions can be largely attributed to enhanced computational algorithms, application of quantum mechanics, and the increasing number of experimental structures determined by X-ray and NMR techniques. The conformational analysis of carbohydrates is challenging and has been studied intensively to elucidate the anomeric, exo-anomeric, and gauche effects. Here, we review the issues associated with carbohydrate force fields and scoring functions, which will have a broad application in the field of carbohydrate-based drug design. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. The effect of basis set and exchange-correlation functional on time-dependent density functional theory calculations within the Tamm-Dancoff approximation of the x-ray emission spectroscopy of transition metal complexes.

    PubMed

    Roper, Ian P E; Besley, Nicholas A

    2016-03-21

    The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.

  8. Lasers Induced Damage in Optical Materials: 1985. Proceedings of the Symposium on Optical Materials for High-Power Lasers (17th) Held in Boulder, Colorado on October 28-30, 1985

    DTIC Science & Technology

    1988-07-01

    optical coatings. In single and multilayer anatase TiO2 coatings, sufficiently intense pulsed laser irradiation at 532 nm led to observation of... temperatures of pulsed-laser-irradiated anatase coatings have been computed from Stokes/anti-Stokes band intensity ratios at zero time delay as a function of... Adar, Time-Resolved Temperature Determinations from Raman Scattering of TiO2 Coatings During Pulsed Laser Irradiation

  9. Conformal correlation functions in the Brownian loop soup

    NASA Astrophysics Data System (ADS)

    Camia, Federico; Gandolfi, Alberto; Kleban, Matthew

    2016-01-01

    We define and study a set of operators that compute statistical properties of the Brownian loop soup, a conformally invariant gas of random Brownian loops (Brownian paths constrained to begin and end at the same point) in two dimensions. We prove that the correlation functions of these operators have many of the properties of conformal primaries in a conformal field theory, and compute their conformal dimension. The dimensions are real and positive, but have the novel feature that they vary continuously as a periodic function of a real parameter. We comment on the relation of the Brownian loop soup to the free field, and use this relation to establish that the central charge of the loop soup is twice its intensity.

  10. Characterization of amine-functionalized electrode for aqueous carbon dioxide (CO2) direct detection

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi

    2017-03-01

    In this study, the fabrication of a sensor electrode co-modified with amino groups and ferrocenes, and the electrochemical detection of carbon dioxide (CO2) in saline solution, are reported. Electrochemical detection of CO2 was carried out using cyclic voltammetry in saline solution containing sodium bicarbonate as the CO2 source. Oxidation and reduction peak current intensities computed from the cyclic voltammograms varied as a function of the CO2 concentration. The calibration curve was obtained by plotting oxidation peak current intensities as a function of CO2 concentration. The sensor electrode prepared in this study can resolve CO2 concentrations ranging from that of normal seawater up to about 10 times higher. Furthermore, surface analysis was performed to clarify the CO2 detection mechanism.
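    A sketch of how such a calibration curve could be used, assuming a linear relation between oxidation peak current and CO2 concentration; the numbers below are invented for illustration and are not the paper's measurements.

      import numpy as np

      # Hypothetical calibration data: relative CO2 concentration of the spiked saline
      # samples and the oxidation peak currents read from the cyclic voltammograms.
      conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])      # relative CO2 concentration
      i_peak = np.array([2.1, 2.9, 4.6, 6.2, 7.9, 9.5])     # peak current, arbitrary units

      slope, intercept = np.polyfit(conc, i_peak, deg=1)    # linear calibration curve

      def concentration_from_current(i):
          """Invert the calibration curve to estimate CO2 concentration."""
          return (i - intercept) / slope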

  11. Electron beam-generated Ar/N{sub 2} plasmas: The effect of nitrogen addition on the brightest argon emission lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lock, E. H., E-mail: evgeniya.lock@nrl.navy.mil, E-mail: scott.walton@nrl.navy.mil; Petrova, Tz. B.; Petrov, G. M.

    2016-04-15

    The effect of nitrogen addition on the emission intensities of the brightest argon lines produced in low pressure argon/nitrogen electron beam-generated plasmas is characterized using optical emission spectroscopy. In particular, a decrease in the intensities of the 811.5 nm and 763.5 nm lines is observed, while the intensity of the 750.4 nm line remains unchanged as nitrogen is added. To explain this phenomenon, a non-equilibrium collisional-radiative model is developed and used to compute the populations of argon excited states and line intensities as a function of gas composition. The results show that the addition of nitrogen to argon modifies the electron energy distribution function, reduces the electron temperature, and depopulates Ar metastables in exchange reactions with electrons and N₂ molecules, all of which lead to changes in the argon excited-state populations and thus the emission originating from the Ar 4p levels.

  12. Binary-selectable detector holdoff circuit

    NASA Technical Reports Server (NTRS)

    Kadrmas, K. A.

    1974-01-01

    A high-speed switching circuit protects detectors from sudden, extremely intense backscattered radiation that results from short-range atmospheric dust layers, or low-level clouds, entering the laser/radar field of view. The function of the circuit is to provide computer-controlled switching of the photodiode detector and preamplifier power-supply voltages in approximately 10 nanoseconds.

  13. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency and that the proposed algorithm addresses efficiently depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
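    A minimal sketch of the underlying PCA decomposition of a depth-variant PSF stack via the SVD, assuming the PSFs are supplied as an array indexed by depth; the paper's actual contribution, reducing the number of DV PSFs needed to make this PCA tractable, is not reproduced here.

      import numpy as np

      def psf_principal_components(psf_stack, n_components):
          """PCA of depth-variant PSFs via SVD.  psf_stack: (n_depths, nz, ny, nx)."""
          n_depths = psf_stack.shape[0]
          flat = psf_stack.reshape(n_depths, -1)
          mean = flat.mean(axis=0)
          U, s, Vt = np.linalg.svd(flat - mean, full_matrices=False)
          components = Vt[:n_components]              # orthonormal PSF "modes"
          coeffs = (flat - mean) @ components.T       # depth-dependent weights
          return mean, components, coeffs

      def reconstruct_psf(mean, components, coeffs, depth_index, shape):
          """Approximate the PSF at one depth from the leading principal components."""
          return (mean + coeffs[depth_index] @ components).reshape(shape)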

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D. P.

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  15. GPU and APU computations of Finite Time Lyapunov Exponent fields

    NASA Astrophysics Data System (ADS)

    Conti, Christian; Rossinelli, Diego; Koumoutsakos, Petros

    2012-03-01

    We present GPU and APU accelerated computations of Finite-Time Lyapunov Exponent (FTLE) fields. The calculation of FTLEs is a computationally intensive process, as in order to obtain the sharp ridges associated with the Lagrangian Coherent Structures an extensive resampling of the flow field is required. The computational performance of this resampling is limited by the memory bandwidth of the underlying computer architecture. The present technique harnesses data-parallel execution of many-core architectures and relies on fast and accurate evaluations of moment conserving functions for the mesh to particle interpolations. We demonstrate how the computation of FTLEs can be efficiently performed on a GPU and on an APU through OpenCL and we report over one order of magnitude improvements over multi-threaded executions in FTLE computations of bluff body flows.
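    A plain NumPy reference sketch of the FTLE definition used above, assuming a 2-D flow map (final particle positions as a function of their initial grid position) has already been computed; the GPU/APU acceleration and the moment-conserving mesh-particle interpolations that are the point of the paper are not shown.

      import numpy as np

      def ftle_field(phi_x, phi_y, dx, dy, T):
          """FTLE from a 2-D flow map.  phi_x, phi_y: final particle positions on the
          initial grid (shape ny, nx); dx, dy: grid spacing; T: integration time."""
          dphix_dy, dphix_dx = np.gradient(phi_x, dy, dx)
          dphiy_dy, dphiy_dx = np.gradient(phi_y, dy, dx)
          ftle = np.zeros_like(phi_x)
          for i in range(phi_x.shape[0]):
              for j in range(phi_x.shape[1]):
                  F = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                                [dphiy_dx[i, j], dphiy_dy[i, j]]])
                  C = F.T @ F                           # Cauchy-Green deformation tensor
                  lam_max = np.linalg.eigvalsh(C)[-1]   # largest eigenvalue
                  ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
          return ftle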

  16. Tensor voting for image correction by global and local intensity alignment.

    PubMed

    Jia, Jiaya; Tang, Chi-Keung

    2005-01-01

    This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.

  17. The Chinese Facial Emotion Recognition Database (CFERD): a computer-generated 3-D paradigm to measure the recognition of facial emotional expressions at different intensities.

    PubMed

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2012-12-30

    The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral faces of Chinese individuals. The purpose of the present study is to describe the development and validation of CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate a normative data set. The results showed that the sensitivity index d' [d' = Z(hit rate) - Z(false alarm rate), where Z(p), p∈[0,1], is the inverse of the cumulative Gaussian distribution function]…

  18. Discovering and understanding oncogenic gene fusions through data intensive computational approaches

    PubMed Central

    Latysheva, Natasha S.; Babu, M. Madan

    2016-01-01

    Although gene fusions have been recognized as important drivers of cancer for decades, our understanding of the prevalence and function of gene fusions has been revolutionized by the rise of next-generation sequencing, advances in bioinformatics theory and an increasing capacity for large-scale computational biology. The computational work on gene fusions has been vastly diverse, and the present state of the literature is fragmented. It will be fruitful to merge three camps of gene fusion bioinformatics that appear to rarely cross over: (i) data-intensive computational work characterizing the molecular biology of gene fusions; (ii) development research on fusion detection tools, candidate fusion prioritization algorithms and dedicated fusion databases and (iii) clinical research that seeks to either therapeutically target fusion transcripts and proteins or leverages advances in detection tools to perform large-scale surveys of gene fusion landscapes in specific cancer types. In this review, we unify these different—yet highly complementary and symbiotic—approaches with the view that increased synergy will catalyze advancements in gene fusion identification, characterization and significance evaluation. PMID:27105842

  19. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.

  20. Linear Array Ambient Noise Adjoint Tomography Reveals Intense Crust-Mantle Interactions in North China Craton

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Yao, Huajian; Liu, Qinya; Zhang, Ping; Yuan, Yanhua O.; Feng, Jikun; Fang, Lihua

    2018-01-01

    We present a 2-D ambient noise adjoint tomography technique for a linear array with a significant reduction in computational cost and show its application to an array in North China. We first convert the observed data for 3-D media, i.e., surface-wave empirical Green's functions (EGFs), to the reconstructed EGFs (REGFs) for 2-D media using a 3-D/2-D transformation scheme. Different from the conventional steps of measuring phase dispersion, this technique refines 2-D shear wave speeds along the profile directly from REGFs. With an initial model based on traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime delays between the REGFs and synthetic Green's functions calculated by the spectral-element method. The multitaper traveltime difference measurement is applied in four period bands: 20-35 s, 15-30 s, 10-20 s, and 6-15 s. The recovered model shows detailed crustal structures including pronounced low-velocity anomalies in the lower crust and a gradual crust-mantle transition zone beneath the northern Trans-North China Orogen, which suggest possible intense thermo-chemical interactions between mantle-derived upwelling melts and the lower crust, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of this region. To our knowledge, it is the first time that ambient noise adjoint tomography is implemented for a 2-D medium. Compared with the intensive computational cost and storage requirement of 3-D adjoint tomography, this method offers a computationally efficient and inexpensive alternative for imaging fine-scale crustal structures beneath linear arrays.

  1. Far field and wavefront characterization of a high-power semiconductor laser for free space optical communications

    NASA Technical Reports Server (NTRS)

    Cornwell, Donald M., Jr.; Saif, Babak N.

    1991-01-01

    The spatial pointing angle and far field beamwidth of a high-power semiconductor laser are characterized as a function of CW power and also as a function of temperature. The time-averaged spatial pointing angle and spatial lobe width were measured under intensity-modulated conditions. The measured pointing deviations are determined to be well within the pointing requirements of the NASA Laser Communications Transceiver (LCT) program. A computer-controlled Mach-Zehnder phase-shifter interferometer is used to characterize the wavefront quality of the laser. The rms phase error over the entire pupil was measured as a function of CW output power. Time-averaged measurements of the wavefront quality are also made under intensity-modulated conditions. The measured rms phase errors are determined to be well within the wavefront quality requirements of the LCT program.

  2. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    The probabilistic analysis method for structural reliability assessment presented here combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, it establishes a quadratic performance function, transforms that quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.
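    A sketch of the fast-convolution step only, assuming the performance function has already been linearized into a sum of independent terms with known densities; the distributions and constant below are illustrative, and the most-probable-point search and quadratic-to-linear transformation of the method are not shown.

      import numpy as np
      from scipy.stats import norm

      # Linearized performance function g = a0 + Y1 + Y2 with independent terms.
      # The pdf of a sum of independent random variables is the convolution of their pdfs.
      a0 = 3.0
      y = np.linspace(-10.0, 10.0, 4001)            # common grid for each term
      dx = y[1] - y[0]
      pdf1 = norm.pdf(y, loc=0.0, scale=1.0)        # pdf of Y1 (illustrative)
      pdf2 = norm.pdf(y, loc=0.0, scale=2.0)        # pdf of Y2 (illustrative)

      pdf_sum = np.convolve(pdf1, pdf2) * dx        # pdf of Y1 + Y2
      y_sum = np.arange(len(pdf_sum)) * dx + 2 * y[0]   # grid of the convolved pdf

      g = a0 + y_sum
      p_failure = np.trapz(pdf_sum[g < 0.0], g[g < 0.0])   # P(g < 0)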

  3. EON: software for long time simulations of atomic scale systems

    NASA Astrophysics Data System (ADS)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.

  4. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computational requirement is reduced. Screening identifies those contingencies that are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  5. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  6. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. Multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerated the DFC analyses significantly. The developed algorithms make DFC analyses more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
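    A serial NumPy reference for the sliding-window correlation computation that the GPU implementations above accelerate, assuming ROI or network time courses arranged as a time-by-region array; window length and step are free parameters.

      import numpy as np

      def sliding_window_dfc(timecourses, window, step=1):
          """Dynamic functional connectivity: one correlation matrix of the ROI/network
          time courses (shape n_timepoints x n_regions) per sliding window."""
          n_t, n_r = timecourses.shape
          starts = list(range(0, n_t - window + 1, step))
          dfc = np.empty((len(starts), n_r, n_r))
          for k, s in enumerate(starts):
              dfc[k] = np.corrcoef(timecourses[s:s + window].T)   # regions as variables
          return dfc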

  7. Neutron scattering, solid state NMR and quantum chemistry studies of 11-keto-progesterone

    NASA Astrophysics Data System (ADS)

    Szyczewski, A.; Hołderna-Natkaniec, K.; Natkaniec, I.

    2004-07-01

    The molecular geometry and the frequencies and intensities of the IINS and IR vibrational bands of 11-ketoprogesterone have been obtained by the HF, PM3 and density functional theory (DFT) methods with the B3LYP functional and the 6-31G(d,p) basis set. The optimised bond lengths and bond angles of the steroid skeleton are in good agreement with the X-ray data. The IR and IINS spectra of ketoprogesterone, computed at the DFT level, reproduce the vibrational wavenumbers and intensities well, to an accuracy allowing reliable vibrational assignments. The molecular dynamics study by 1H NMR has confirmed the sequence of onset of reorientations of the successive methyl groups indicated by the results of quantum chemistry calculations and the INS spectra.

  8. Feasibility and tolerability of whole‐body, low‐intensity vibration and its effects on muscle function and bone in patients with dystrophinopathies: a pilot study

    PubMed Central

    Polgreen, Lynda E.; Grames, Molly; Lowe, Dawn A.; Hodges, James S.; Karachunski, Peter

    2017-01-01

    ABSTRACT Introduction Dystrophinopathies are X‐linked muscle degenerative disorders that result in progressive muscle weakness complicated by bone loss. This study's goal was to evaluate feasibility and tolerability of whole‐body, low‐intensity vibration (WBLIV) and its potential effects on muscle and bone in patients with Duchenne or Becker muscular dystrophy. Methods This 12‐month pilot study included 5 patients (age 5.9–21.7 years) who used a low‐intensity Marodyne LivMD plate vibrating at 30–90 Hz for 10 min/day for the first 6 months. Timed motor function tests, myometry, and peripheral quantitative computed tomography were performed at baseline and at 6 and 12 months. Results Motor function and lower extremity muscle strength remained either unchanged or improved during the intervention phase, followed by deterioration after WBLIV discontinuation. Indices of bone density and geometry remained stable in the tibia. Conclusions WBLIV was well tolerated and appeared to have a stabilizing effect on lower extremity muscle function and bone measures. Muscle Nerve 55: 875–883, 2017 PMID:27718512

  9. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer-intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation, unlike the stochastic method, where the number of realizations required for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with spatial discretization approximately equal to the integral length scale. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.

  10. Computer-assisted learning in critical care: from ENIAC to HAL.

    PubMed

    Tegtmeyer, K; Ibsen, L; Goldstein, B

    2001-08-01

    Computers are commonly used to serve many functions in today's modern intensive care unit. One of the most intriguing and perhaps most challenging applications of computers has been to attempt to improve medical education. With the introduction of the first computer, medical educators began looking for ways to incorporate their use into the modern curriculum. Prior limitations of cost and complexity of computers have consistently decreased since their introduction, making it increasingly feasible to incorporate computers into medical education. Simultaneously, the capabilities and capacities of computers have increased. Combining the computer with other modern digital technology has allowed the development of more intricate and realistic educational tools. The purpose of this article is to briefly describe the history and use of computers in medical education with special reference to critical care medicine. In addition, we will examine the role of computers in teaching and learning and discuss the types of interaction between the computer user and the computer.

  11. Dose-response relationships using brain–computer interface technology impact stroke rehabilitation

    PubMed Central

    Young, Brittany M.; Nigogosyan, Zack; Walton, Léo M.; Remsik, Alexander; Song, Jie; Nair, Veena A.; Tyler, Mitchell E.; Edwards, Dorothy F.; Caldera, Kristin; Sattin, Justin A.; Williams, Justin C.; Prabhakaran, Vivek

    2015-01-01

    Brain–computer interfaces (BCIs) are an emerging novel technology for stroke rehabilitation. Little is known about how dose-response relationships for BCI therapies affect brain and behavior changes. We report preliminary results on stroke patients (n = 16, 11 M) with persistent upper extremity motor impairment who received therapy using a BCI system with functional electrical stimulation of the hand and tongue stimulation. We collected MRI scans and behavioral data using the Action Research Arm Test (ARAT), 9-Hole Peg Test (9-HPT), and Stroke Impact Scale (SIS) before, during, and after the therapy period. Using anatomical and functional MRI, we computed Laterality Index (LI) for brain activity in the motor network during impaired hand finger tapping. Changes from baseline LI and behavioral scores were assessed for relationships with dose, intensity, and frequency of BCI therapy. We found that gains in SIS Strength were directly responsive to BCI therapy: therapy dose and intensity correlated positively with increased SIS Strength (p ≤ 0.05), although no direct relationships were identified with ARAT or 9-HPT scores. We found behavioral measures that were not directly sensitive to differences in BCI therapy administration but were associated with concurrent brain changes correlated with BCI therapy administration parameters: therapy dose and intensity showed significant (p ≤ 0.05) or trending (0.05 < p < 0.1) negative correlations with LI changes, while therapy frequency did not affect LI. Reductions in LI were then correlated (p ≤ 0.05) with increased SIS Activities of Daily Living scores and improved 9-HPT performance. Therefore, some behavioral changes may be reflected by brain changes sensitive to differences in BCI therapy administration, while others such as SIS Strength may be directly responsive to BCI therapy administration. Data preliminarily suggest that when using BCI in stroke rehabilitation, therapy frequency may be less important than dose and intensity. PMID:26157378

  12. Rotor Noise due to Blade-Turbulence Interaction.

    NASA Astrophysics Data System (ADS)

    Ishimaru, Kiyoto

    The time-averaged intensity density function of the acoustic radiation from rotating blades is derived by replacing blades with rotating dipoles. This derivation is done under the following turbulent inflow conditions: turbulent ingestion with no inlet strut wakes, inflow turbulence elongation and contraction with no inlet strut wakes, and inlet strut wakes. Dimensional analysis reveals two non-dimensional parameters which play important roles in generating the blade-passing frequency tone and its multiples. The elongation and contraction of inflow turbulence has a strong effect on the generation of the blade-passing frequency tone and its multiples. Increasing the number of rotor blades widens the peak at the blade-passing frequency and its multiples. Increasing the rotational speed widens the peak under the condition that the non-dimensional parameter involving the rotational speed is fixed. The number of struts and blades should be chosen so that (the least common multiple of the two) · (rotational speed) is in the cutoff range of Sears' function, in order to minimize the effect of the mean flow deficit on the time-averaged intensity density function. The acoustic intensity density function is not necessarily stationary even if the inflow turbulence is homogeneous and isotropic. The time variation of the propagation path due to the rotation should be considered in the computation of the intensity density function; for instance, in the present rotor specification, the rotor radius is about 0.3 m and the rotational speed Mach number is about 0.2.

  13. Computational code in atomic and nuclear quantum optics: Advanced computing multiphoton resonance parameters for atoms in a strong laser field

    NASA Astrophysics Data System (ADS)

    Glushkov, A. V.; Gurskaya, M. Yu; Ignatenko, A. V.; Smirnov, A. V.; Serga, I. N.; Svinarenko, A. A.; Ternovsky, E. V.

    2017-10-01

    The consistent relativistic energy approach to finite Fermi systems (atoms and nuclei) in a strong realistic laser field is presented and applied to computing multiphoton resonance parameters in some atoms and nuclei. The approach is based on the Gell-Mann and Low S-matrix formalism, the multiphoton resonance line moments technique and the advanced Ivanov-Ivanova algorithm for calculating the Green’s function of the Dirac equation. Data for the multiphoton resonance width and shift of the Cs atom and the 57Fe nucleus as functions of the laser intensity are listed.

  14. The Dynamo package for tomography and subtomogram averaging: components for MATLAB, GPU computing and EC2 Amazon Web Services

    PubMed Central

    Castaño-Díez, Daniel

    2017-01-01

    Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance. PMID:28580909

  15. The Dynamo package for tomography and subtomogram averaging: components for MATLAB, GPU computing and EC2 Amazon Web Services.

    PubMed

    Castaño-Díez, Daniel

    2017-06-01

    Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance.

  16. Neural mechanisms of planning: A computational analysis using event-related fMRI

    PubMed Central

    Fincham, Jon M.; Carter, Cameron S.; van Veen, Vincent; Stenger, V. Andrew; Anderson, John R.

    2002-01-01

    To investigate the neural mechanisms of planning, we used a novel adaptation of the Tower of Hanoi (TOH) task and event-related functional MRI. Participants were trained in applying a specific strategy to an isomorph of the five-disk TOH task. After training, participants solved novel problems during event-related functional MRI. A computational cognitive model of the task was used to generate a reference time series representing the expected blood oxygen level-dependent response in brain areas involved in the manipulation and planning of goals. This time series was used as one term within a general linear modeling framework to identify brain areas in which the time course of activity varied as a function of goal-processing events. Two distinct time courses of activation were identified, one in which activation varied parametrically with goal-processing operations, and the other in which activation became pronounced only during goal-processing intensive trials. Regions showing the parametric relationship comprised a frontoparietal system and include right dorsolateral prefrontal cortex [Brodmann's area (BA 9)], bilateral parietal (BA 40/7), and bilateral premotor (BA 6) areas. Regions preferentially engaged only during goal-intensive processing include left inferior frontal gyrus (BA 44). The implications of these results for the current model, as well as for our understanding of the neural mechanisms of planning and functional specialization of the prefrontal cortex, are discussed. PMID:11880658

  17. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour or microscopic fluctuations of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they cannot inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining its low computational cost and ease of implementation compared to other conductance and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
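
    To make the idea of a voltage-dependent current noise term concrete, the toy sketch below adds Gaussian current noise, whose standard deviation grows exponentially with membrane depolarization, to a passive leaky membrane integrated with the Euler-Maruyama scheme; all parameter values and the exponential form are illustrative assumptions, and the code is not the authors' calibrated ANF algorithm.

    ```python
    import numpy as np

    def simulate_membrane(i_stim, dt=1e-5, c_m=1e-9, g_leak=5e-8,
                          v_rest=-70e-3, sigma0=5e-12, v_scale=10e-3):
        """Euler-Maruyama integration of a leaky membrane with voltage-dependent
        Gaussian current noise: sigma(V) = sigma0 * exp((V - v_rest) / v_scale)."""
        rng = np.random.default_rng(1)
        v = np.empty(len(i_stim))
        v[0] = v_rest
        for n in range(1, len(i_stim)):
            sigma = sigma0 * np.exp((v[n - 1] - v_rest) / v_scale)
            i_noise = rng.normal(0.0, sigma) / np.sqrt(dt)   # white-noise scaling
            dv = (-g_leak * (v[n - 1] - v_rest) + i_stim[n - 1] + i_noise) / c_m
            v[n] = v[n - 1] + dt * dv
        return v

    # 10 ms of simulation with a 0.5 nA current step applied after 2 ms (illustrative values only).
    i_stim = np.where(np.arange(1000) >= 200, 0.5e-9, 0.0)
    print(simulate_membrane(i_stim)[-5:])
    ```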

  18. Research on fatigue cracking growth parameters in asphaltic mixtures using computed tomography

    NASA Astrophysics Data System (ADS)

    Braz, D.; Lopes, R. T.; Motta, L. M. G.

    2004-01-01

    Distress of asphalt concrete pavement due to repeated bending from traffic loads has been a well-recognized problem in Brazil. If it is assumed that fatigue cracking growth is governed by the conditions at the crack tip, and that the crack tip conditions can be characterized by the stress intensity factor, then fatigue crack growth as a function of the stress intensity range ΔK can be determined. The computed tomography technique is used to detect crack evolution in asphaltic mixtures submitted to fatigue tests. Fatigue tests under controlled-stress conditions were carried out using diametral compression equipment and repeated loading. The aim of this work is to image several specimens at different stages of the fatigue tests. In preliminary studies it was noted that the trajectory of a crack was influenced by the existence of voids in the originally unloaded specimens. Cracks would first be observed in the central region of a specimen, propagating in the direction of the extremities. Analysis of the plots of fatigue crack growth rate (dc/dN) as a function of the stress intensity factor range (ΔK) shows that the curves behave practically identically for all specimens at the same level of the static rupture stress. The experimental values obtained for the constants A and n (of the Paris-Erdogan Law) present good agreement with the results obtained by Liang and Zhou.
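
    The Paris-Erdogan law referred to above, dc/dN = A(ΔK)^n, is linear on a log-log scale, so the constants A and n can be recovered from crack-growth measurements by a straight-line fit. The sketch below does this with NumPy on made-up data; the numbers are purely illustrative and are not the measurements reported in the paper.

    ```python
    import numpy as np

    # Hypothetical measurements: stress intensity factor range (MPa*sqrt(m))
    # and crack growth rate dc/dN (m/cycle).
    delta_k = np.array([4.0, 6.0, 8.0, 12.0, 16.0])
    dcdn = np.array([2.1e-9, 9.8e-9, 3.2e-8, 1.5e-7, 4.9e-7])

    # Paris-Erdogan law: dc/dN = A * (delta_K)**n, i.e. log(dc/dN) = log(A) + n*log(delta_K).
    n_exp, log_a = np.polyfit(np.log(delta_k), np.log(dcdn), 1)
    print(f"A = {np.exp(log_a):.3e}, n = {n_exp:.2f}")
    ```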

  19. Reliable but Timesaving: In Search of an Efficient Quantum-chemical Method for the Description of Functional Fullerenes.

    PubMed

    Reis, H; Rasulev, B; Papadopoulos, M G; Leszczynski, J

    2015-01-01

    Fullerene and its derivatives are currently among the most intensively investigated species in the area of nanomedicine and nanochemistry. Various unique properties of fullerenes are responsible for their wide range of applications in industry, biology and medicine. A large pool of functionalized C60 and C70 fullerenes is investigated theoretically at different levels of quantum-mechanical theory. The semiempirical PM6 method, density functional theory with the B3LYP functional, and the correlated ab initio MP2 method are employed to compute the optimized structures and an array of properties for the considered species. In addition to the calculations for isolated molecules, the results of solution calculations are also reported at the DFT level, using the polarizable continuum model (PCM). Ionization potentials (IPs) and electron affinities (EAs) are computed by means of Koopmans' theorem as well as with the more accurate but computationally expensive ΔSCF method. Both procedures yield comparable values, while comparison of IPs and EAs computed with different quantum-mechanical methods shows surprisingly large differences. Harmonic vibrational frequencies are computed at the PM6 and B3LYP levels of theory and compared with each other. A possible application of the frequencies as 3D descriptors in the EVA (EigenVAlues) method is shown. All the computed data are made available, and may be used to replace experimental data in routine applications where large amounts of data are required, e.g. in structure-activity relationship studies of the toxicity of fullerene derivatives.

  20. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010 describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of the PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  1. Automating FEA programming

    NASA Technical Reports Server (NTRS)

    Sharma, Naveen

    1992-01-01

    In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
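
    As a small-scale illustration of deriving a domain-formulation quantity symbolically and then emitting numerical code, the sketch below uses SymPy to derive the stiffness matrix of a two-node linear bar element and to print Fortran-style expressions for its entries; the element and the workflow are generic textbook FEA chosen to convey the flavor of such synthesis, not the actual PIER implementation.

    ```python
    import sympy as sp

    x, L, E, A = sp.symbols('x L E A', positive=True)

    # Linear shape functions of a two-node bar element on [0, L].
    N = sp.Matrix([1 - x / L, x / L])
    B = N.diff(x)                      # strain-displacement vector dN/dx

    # Element stiffness matrix: K_ij = integral over [0, L] of E*A * B_i * B_j dx.
    K = (E * A * B * B.T).applyfunc(lambda f: sp.integrate(f, (x, 0, L)))
    sp.pprint(sp.simplify(K))          # expected: E*A/L * [[1, -1], [-1, 1]]

    # Emit Fortran expressions for each entry, mimicking automatic code generation.
    for i in range(2):
        for j in range(2):
            print(f"K({i + 1},{j + 1}) = {sp.fcode(sp.simplify(K[i, j]), source_format='free')}")
    ```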

  2. Magnetic skyrmion-based artificial neuron device

    NASA Astrophysics Data System (ADS)

    Li, Sai; Kang, Wang; Huang, Yangqi; Zhang, Xichao; Zhou, Yan; Zhao, Weisheng

    2017-08-01

    Neuromorphic computing, inspired by the biological nervous system, has attracted considerable attention. Intensive research has been conducted in this field for developing artificial synapses and neurons, attempting to mimic the behaviors of biological synapses and neurons, which are two basic elements of a human brain. Recently, magnetic skyrmions have been investigated as promising candidates in neuromorphic computing design owing to their topologically protected particle-like behaviors, nanoscale size and low driving current density. In one of our previous studies, a skyrmion-based artificial synapse was proposed, with which both short-term plasticity and long-term potentiation functions have been demonstrated. In this work, we further report on a skyrmion-based artificial neuron by exploiting the tunable current-driven skyrmion motion dynamics, mimicking the leaky-integrate-fire function of a biological neuron. With a simple single-device implementation, this proposed artificial neuron may enable us to build a dense and energy-efficient spiking neuromorphic computing system.
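
    For readers unfamiliar with the leaky-integrate-fire behavior that the proposed device mimics, the sketch below simulates a generic LIF neuron in a few lines; the parameters are arbitrary (dimensionless) and the code says nothing about the skyrmion dynamics themselves.

    ```python
    import numpy as np

    def lif(i_input, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0, r=1.0):
        """Generic leaky-integrate-and-fire neuron (arbitrary units).
        Returns the spike times produced by the input current sequence."""
        v, spikes = v_rest, []
        for n, i in enumerate(i_input):
            v += dt * (-(v - v_rest) + r * i) / tau   # leaky integration of the input
            if v >= v_th:                             # "fire" when the threshold is crossed ...
                spikes.append(round(n * dt, 3))
                v = v_reset                           # ... then reset
        return spikes

    # A constant drive above threshold produces a regular spike train.
    print(lif(np.full(1000, 1.5)))
    ```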

  3. [HYGIENIC REGULATION OF THE USE OF ELECTRONIC EDUCATIONAL RESOURCES IN THE MODERN SCHOOL].

    PubMed

    Stepanova, M I; Aleksandrova, I E; Sazanyuk, Z I; Voronova, B Z; Lashneva, L P; Shumkova, T V; Berezina, N O

    2015-01-01

    We studied the effect of lessons involving a notebook computer and an interactive whiteboard on the functional state of schoolchildren. Using a set of hygienic and physiological methods, we established that regulation of students' computer activity must take into account not only its duration but also its intensity. Design features of the notebook computer were shown both to impede maintenance of an optimal working posture in primary school children and to increase the risk of disorders of vision and the musculoskeletal system. The interactive whiteboard was found to have an activating influence on performance and to be associated with favorable dynamics of the functional-state indices of students, provided that the density of the lesson and the duration of whiteboard use were kept optimal. Safety regulations for schoolchildren's work with electronic resources in the educational process are determined.

  4. Realistic simulated MRI and SPECT databases. Application to SPECT/MRI registration evaluation.

    PubMed

    Aubert-Broche, Berengere; Grova, Christophe; Reilhac, Anthonin; Evans, Alan C; Collins, D Louis

    2006-01-01

    This paper describes the construction of simulated SPECT and MRI databases that account for realistic anatomical and functional variability. The data are used as a gold standard to evaluate four SPECT/MRI similarity-based registration methods. Simulation realism was accounted for using accurate physical models of data generation and acquisition. MRI and SPECT simulations were generated from three subjects to take into account inter-subject anatomical variability. Functional SPECT data were computed from six functional models of brain perfusion. Previous models of normal perfusion and ictal perfusion observed in Mesial Temporal Lobe Epilepsy (MTLE) were considered to generate functional variability. We studied the impact that noise and intensity non-uniformity in the MRI simulations, as well as SPECT scatter correction, may have on registration accuracy. We quantified the amount of registration error caused by anatomical and functional variability. Registration involving ictal data was less accurate than registration involving normal data. MR intensity non-uniformity was the main factor decreasing registration accuracy. The proposed simulated database is promising for evaluating many functional neuroimaging methods involving MRI and SPECT data.

  5. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  6. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  7. Franck-Condon Factors for Diatomics: Insights and Analysis Using the Fourier Grid Hamiltonian Method

    ERIC Educational Resources Information Center

    Ghosh, Supriya; Dixit, Mayank Kumar; Bhattacharyya, S. P.; Tembe, B. L.

    2013-01-01

    Franck-Condon factors (FCFs) play a crucial role in determining the intensities of the vibrational bands in electronic transitions. In this article, a relatively simple method to calculate the FCFs is illustrated. An algorithm for the Fourier Grid Hamiltonian (FGH) method for computing the vibrational wave functions and the corresponding energy…
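
    A minimal sketch of the Fourier Grid Hamiltonian idea — building the kinetic-energy operator in momentum space on a uniform grid and diagonalizing T + V — is given below for a harmonic oscillator with hbar = m = omega = 1; the grid and potential are assumptions chosen so that the lowest eigenvalues can be checked against (n + 1/2), and the code is not taken from the article.

    ```python
    import numpy as np

    # Uniform grid and a harmonic potential (hbar = m = omega = 1).
    n = 256
    x = np.linspace(-10.0, 10.0, n, endpoint=False)
    dx = x[1] - x[0]
    v = 0.5 * x**2

    # Kinetic energy in the Fourier grid representation: T = F^H diag(k^2 / 2) F.
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    f = np.fft.fft(np.eye(n)) / np.sqrt(n)            # unitary DFT matrix
    t = (f.conj().T @ np.diag(0.5 * k**2) @ f).real   # real symmetric kinetic matrix

    # Diagonalize the grid Hamiltonian; eigenvectors are the vibrational wave functions.
    energies, wavefunctions = np.linalg.eigh(t + np.diag(v))
    print(energies[:4])   # expected to be close to 0.5, 1.5, 2.5, 3.5
    ```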

  8. An Evolutionary Algorithm for Fast Intensity Based Image Matching Between Optical and SAR Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias

    2018-04-01

    This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are introduced using techniques such as hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
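
    To show the shape of such an evolutionary search in miniature, the sketch below runs a small (mu + lambda) evolution strategy over integer translation offsets, using normalized cross-correlation of a reference patch as the similarity measure; the synthetic images, the ES parameters and the restriction to pure translations are all assumptions for this example, and the hybrid local-search extensions of the paper are not included.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized patches."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def fitness(ref, mov, shift, size=64):
        """Similarity of a central reference patch and a shifted moving patch."""
        r0, c0 = ref.shape[0] // 2, ref.shape[1] // 2
        dr, dc = int(shift[0]), int(shift[1])
        return ncc(ref[r0:r0 + size, c0:c0 + size],
                   mov[r0 + dr:r0 + dr + size, c0 + dc:c0 + dc + size])

    # Smooth synthetic "optical" image and a translated copy standing in for the other sensor.
    yy, xx = np.mgrid[0:256, 0:256]
    ref = np.sin(xx / 11.0) + np.cos(yy / 17.0) + 0.05 * rng.standard_normal((256, 256))
    true_shift = (17, -9)
    mov = np.roll(ref, true_shift, axis=(0, 1))

    # (mu + lambda) evolutionary search over integer shifts in [-30, 30] x [-30, 30].
    pop = [tuple(rng.integers(-30, 31, 2)) for _ in range(10)]
    for _ in range(40):
        children = [tuple(np.clip(np.array(p) + rng.integers(-3, 4, 2), -30, 30))
                    for p in pop for _ in range(3)]
        pop = sorted(set(pop + children), key=lambda s: fitness(ref, mov, s))[-10:]
    print("best shift:", tuple(int(v) for v in pop[-1]), "true shift:", true_shift)
    ```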

  9. Theoretical study of sum-frequency vibrational spectroscopy on limonene surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ren-Hui, E-mail: zrh@iccas.ac.cn; Liu, Hao; Jing, Yuan-Yuan

    2014-03-14

    By combining molecular dynamics (MD) simulation and quantum chemistry computation, we calculate the surface sum-frequency vibrational spectroscopy (SFVS) of R-limonene molecules at the gas-liquid interface for SSP, PPP, and SPS polarization combinations. The distributions of the Euler angles are obtained using MD simulation; the ψ-distribution is between isotropic and Gaussian. Instead of the MD distributions, different analytical distributions such as the δ-function, Gaussian and isotropic distributions are applied to simulate surface SFVS. We find that different distributions significantly affect the absolute SFVS intensity and also influence the relative SFVS intensity, and that the δ-function distribution should be used with caution when the orientation distribution is broad. Furthermore, the reason that the SPS signal is weak in the reflected arrangement is discussed.

  10. Analysis of speckle and material properties in laider tracer

    NASA Astrophysics Data System (ADS)

    Ross, Jacob W.; Rigling, Brian D.; Watson, Edward A.

    2017-04-01

    The SAL simulation tool Laider Tracer models speckle: the random variation in intensity of an incident light beam across a rough surface. Within Laider Tracer, the speckle field is modeled as a 2-D array of jointly Gaussian random variables projected via ray tracing onto the scene of interest. Originally, all materials in Laider Tracer were treated as ideal diffuse scatterers, for which the computed far-field return uses the Lambertian Bidirectional Reflectance Distribution Function (BRDF). As presented here, we implement material properties into Laider Tracer via the Non-conventional Exploitation Factors Data System: a database of properties for thousands of different materials sampled at various wavelengths and incident angles. We verify the intensity behavior as a function of incident angle after material properties are added to the simulation.

  11. Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data

    NASA Technical Reports Server (NTRS)

    Kanekal, S. G.; Li, X.; Baker, D. N.; Selesnick, R. S.; Hoxie, V. C.

    2018-01-01

    An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 megaelectronvolts, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.

  12. Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data

    NASA Astrophysics Data System (ADS)

    Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.; Hoxie, V. C.; Li, X.

    2018-01-01

    An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 MeV, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.

  13. Thin film ferroelectric electro-optic memory

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita (Inventor); Thakoor, Anilkumar P. (Inventor)

    1993-01-01

    An electrically programmable, optically readable data or memory cell is configured from a thin film of ferroelectric material, such as PZT, sandwiched between a transparent top electrode and a bottom electrode. The output photoresponse, which may be a photocurrent or photo-emf, is a function of the product of the remanent polarization from a previously applied polarization voltage and the incident light intensity. The cell is useful for analog and digital data storage as well as opto-electric computing. The optical read operation is non-destructive of the remanent polarization. The cell provides a method for computing the product of stored data and incident optical data by applying an electrical signal to store data by polarizing the thin film ferroelectric material, and then applying an intensity modulated optical signal incident onto the thin film material to generate a photoresponse therein related to the product of the electrical and optical signals.

  14. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from large errors and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
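
    A compact way to see why the full-memory Grünwald-Letnikov sum is expensive is to write it down directly: every new time point sums over the entire stored history with binomial-type weights. The sketch below implements the plain full-memory form, with an optional fixed-length "short memory" truncation for comparison; the adaptive time step memory scheme of the paper is not reproduced here, and the test function is chosen only because its half-order derivative is known in closed form.

    ```python
    import numpy as np

    def gl_weights(alpha, n):
        """Grunwald-Letnikov weights w_j = (-1)^j * binom(alpha, j), built recursively."""
        w = np.empty(n)
        w[0] = 1.0
        for j in range(1, n):
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
        return w

    def gl_derivative(f, h, alpha, memory=None):
        """Fractional derivative of order alpha of a uniformly sampled function f.
        memory=None keeps the full history; an integer truncates it to that many points."""
        w = gl_weights(alpha, len(f))
        d = np.empty(len(f))
        for i in range(len(f)):
            m = i + 1 if memory is None else min(i + 1, memory)
            d[i] = np.dot(w[:m], f[i::-1][:m]) / h**alpha   # sum over the stored history
        return d

    # Check against a known result: the half-order derivative of f(t) = t is 2*sqrt(t/pi).
    h = 1e-3
    t = np.arange(0.0, 1.0, h)
    err = gl_derivative(t, h, 0.5) - 2.0 * np.sqrt(t / np.pi)
    print(np.max(np.abs(err[10:])))   # small discretization error away from t = 0
    ```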

  15. X-ray beam-shaping via deformable mirrors: surface profile and point spread function computation for Gaussian beams using physical optics.

    PubMed

    Spiga, D

    2018-01-01

    X-ray mirrors with high focusing performances are commonly used in different sectors of science, such as X-ray astronomy, medical imaging and synchrotron/free-electron laser beamlines. While deformations of the mirror profile may cause degradation of the focus sharpness, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators. The resulting profile can be characterized with suitable metrology tools and correlated with the expected optical quality via a wavefront propagation code or, sometimes, predicted using geometric optics. In the latter case and for the special class of profile deformations with monotonically increasing derivative, i.e. concave upwards, the point spread function (PSF) can even be predicted analytically. Moreover, under these assumptions, the relation can also be reversed: from the desired PSF the required profile deformation can be computed analytically, avoiding the use of trial-and-error search codes. However, the computation has been so far limited to geometric optics, which entailed some limitations: for example, mirror diffraction effects and the size of the coherent X-ray source were not considered. In this paper, the beam-shaping formalism in the framework of physical optics is reviewed, in the limit of small light wavelengths and in the case of Gaussian intensity wavefronts. Some examples of shaped profiles are also shown, aiming at turning a Gaussian intensity distribution into a top-hat one, and checks of the shaping performances computing the at-wavelength PSF by means of the WISE code are made.

  16. Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Dong, Bin; Wen, Zaiwen

    2017-02-01

    In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical practice for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated due to the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC) and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the aperture shapes and the beam intensities alternately. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.

  17. On the Value of Reptilian Brains to Map the Evolution of the Hippocampal Formation.

    PubMed

    Reiter, Sam; Liaw, Hua-Peng; Yamawaki, Tracy M; Naumann, Robert K; Laurent, Gilles

    2017-01-01

    Our ability to navigate through the world depends on the function of the hippocampus. This old cortical structure plays a critical role in spatial navigation in mammals and in a variety of processes, including declarative and episodic memory and social behavior. Intense research has revealed much about hippocampal anatomy, physiology, and computation; yet, even intensely studied phenomena such as the shaping of place cell activity or the function of hippocampal firing patterns during sleep remain incompletely understood. Interestingly, while the hippocampus may be a 'higher order' area linked to a complex cortical hierarchy in mammals, it is an old cortical structure in evolutionary terms. The reptilian cortex, structurally much simpler than the mammalian cortex and hippocampus, therefore presents a good alternative model for exploring hippocampal function. Here, we trace common patterns in the evolution of the hippocampus of reptiles and mammals and ask which parts can be profitably compared to understand functional principles. In addition, we describe a selection of the highly diverse repertoire of reptilian behaviors to illustrate the value of a comparative approach towards understanding hippocampal function. © 2017 S. Karger AG, Basel.

  18. Multiscale analysis of neural spike trains.

    PubMed

    Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin

    2014-01-30

    This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
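
    A bare-bones version of a multiscale intensity estimate — the same spike rate evaluated with windows of several different widths — can be written in a few lines of NumPy, as sketched below; the window widths and the synthetic inhomogeneous Poisson spike train are assumptions for illustration, and none of the paper's quasi-likelihood, bootstrap or periodic-component machinery is included.

    ```python
    import numpy as np

    def intensity_estimates(spike_times, t_grid, widths):
        """Sliding-window spike-rate (intensity) estimates at several time scales.
        Returns a dict mapping window width -> estimated rate on t_grid (spikes/s)."""
        spike_times = np.asarray(spike_times)
        est = {}
        for w in widths:
            counts = np.array([np.sum(np.abs(spike_times - t) <= w / 2) for t in t_grid])
            est[w] = counts / w
        return est

    # Synthetic inhomogeneous Poisson spike train with a slowly oscillating rate.
    rng = np.random.default_rng(7)
    dt, t_max = 1e-3, 10.0
    t = np.arange(0.0, t_max, dt)
    rate = 20.0 + 15.0 * np.sin(2 * np.pi * 0.5 * t)   # spikes per second
    spikes = t[rng.random(t.size) < rate * dt]         # Bernoulli approximation per bin

    grid = np.linspace(0.5, 9.5, 19)
    for w, lam in intensity_estimates(spikes, grid, widths=[0.1, 0.5, 2.0]).items():
        print(f"window {w:.1f} s:", np.round(lam[:5], 1))
    ```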

  19. Neuro-genetic non-invasive temperature estimation: intensity and spatial prediction.

    PubMed

    Teixeira, César A; Ruano, M Graça; Ruano, António E; Pereira, Wagner C A

    2008-06-01

    The existence of proper non-invasive temperature estimators is an essential aspect when thermal therapy applications are envisaged. These estimators must be good predictors to enable temperature estimation in different operational situations, providing better control of the therapeutic instrumentation. In this work, radial basis function artificial neural networks were constructed to assess temperature evolution in an ultrasound-insonated medium. The employed models were radial basis function neural networks with external dynamics induced by their inputs. Both the most suitable set of model inputs and the number of neurons in the network were found using a multi-objective genetic algorithm. The neural models were validated in two situations: the operating ones used in the construction of the network, and 11 unseen situations. The new data addressed two new spatial locations and a new intensity level, assessing the intensity and space prediction capacity of the proposed model. Good performance was obtained during the validation process both in terms of the spatial points considered and whenever the new intensity level was within the range of applied intensities. A maximum absolute error of 0.5 °C ± 10% (0.5 °C is the gold-standard threshold in hyperthermia/diathermia) was attained with models of low computational complexity. The results confirm that the proposed neuro-genetic approach enables temperature propagation to be foreseen in connection with intensity and space parameters, thus enabling the assessment of different operating situations with proper temperature resolution.

  20. The reduced basis method for the electric field integral equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics in which field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two-step procedure. The first step consists of a computationally intense assembly of the reduced basis, which needs to be performed only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.

  1. A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert

    1996-01-01

    The objective of this work was to enhance the predictive capability of widely used computational fluid dynamics (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensities, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach=2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.

  2. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    PubMed Central

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
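
    For readers who want to see what fitting a two-dimensional Gaussian to a spot involves, a minimal SciPy sketch is given below; it fits a single circular Gaussian plus a constant background to one synthetic spot and then evaluates the fitted "volume" analytically. It is not the compound fitting algorithm distributed with the paper, which handles overlapping spots.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, amp, x0, y0, sigma, offset):
        """Circular 2-D Gaussian plus constant background, flattened for curve_fit."""
        x, y = coords
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        return (amp * np.exp(-r2 / (2 * sigma ** 2)) + offset).ravel()

    # Synthetic spot on a noisy 32x32 patch.
    rng = np.random.default_rng(5)
    y, x = np.mgrid[0:32, 0:32]
    truth = (150.0, 14.3, 17.8, 2.5, 10.0)                 # amp, x0, y0, sigma, offset
    img = gauss2d((x, y), *truth).reshape(32, 32) + rng.normal(0, 3, (32, 32))

    p0 = (img.max() - img.min(), 16, 16, 3, img.min())     # crude starting guess
    popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
    amp, x0, y0, sigma, offset = popt
    volume = 2 * np.pi * amp * sigma ** 2                  # analytic volume of the fitted spot
    print(np.round(popt, 2), round(volume, 1))
    ```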

  3. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE PAGES

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  4. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu

    2015-12-28

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. In vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  5. Toward ab initio molecular dynamics modeling for sum-frequency generation spectra; an efficient algorithm based on surface-specific velocity-velocity correlation function.

    PubMed

    Ohto, Tatsuhiko; Usui, Kota; Hasegawa, Taisuke; Bonn, Mischa; Nagata, Yuki

    2015-09-28

    Interfacial water structures have been studied intensively by probing the O-H stretch mode of water molecules using sum-frequency generation (SFG) spectroscopy. This surface-specific technique is finding increasingly widespread use, and accordingly, computational approaches to calculate SFG spectra using molecular dynamics (MD) trajectories of interfacial water molecules have been developed and employed to correlate specific spectral signatures with distinct interfacial water structures. Such simulations typically require relatively long (several nanoseconds) MD trajectories to allow reliable calculation of the SFG response functions through the dipole moment-polarizability time correlation function. These long trajectories limit the use of computationally expensive MD techniques such as ab initio MD and centroid MD simulations. Here, we present an efficient algorithm for determining the SFG response from the surface-specific velocity-velocity correlation function (ssVVCF). This ssVVCF formalism allows us to calculate SFG spectra using an MD trajectory of only ∼100 ps, resulting in a substantial reduction of the computational cost by almost an order of magnitude. We demonstrate that the O-H stretch SFG spectra at the water-air interface calculated by using the ssVVCF formalism well reproduce those calculated by using the dipole moment-polarizability time correlation function. Furthermore, we applied this ssVVCF technique for computing the SFG spectra from the ab initio MD trajectories with various density functionals. We report that the SFG responses computed from both ab initio MD simulations and MD simulations with an ab initio based force field model do not show a positive feature in their imaginary component at 3100 cm(-1).

  6. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

    We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). This system keeps a single microorganism in the middle of the visual field under a microscope by visual servoing an automated stage. We propose a new energy function for the level set method. This function constrains the amount of light intensity inside the detected object contour in order to control the number of detected objects. The algorithm is implemented in the CPV system, and the computation time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be kept under tracking even if other paramecia appear in the visual field and contact the tracked paramecium.

  7. Accelerating NLTE radiative transfer by means of the Forth-and-Back Implicit Lambda Iteration: A two-level atom line formation in 2D Cartesian coordinates

    NASA Astrophysics Data System (ADS)

    Milić, Ivan; Atanacković, Olga

    2014-10-01

    State-of-the-art methods in multidimensional NLTE radiative transfer are based on the use of a local approximate lambda operator within either Jacobi or Gauss-Seidel iterative schemes. Here we propose another approach to the solution of 2D NLTE RT problems, Forth-and-Back Implicit Lambda Iteration (FBILI), developed earlier for 1D geometry. In order to present the method and examine its convergence properties we use the well-known instance of two-level atom line formation with complete frequency redistribution. In the formal solution of the RT equation we employ short characteristics with a two-point algorithm. Using an implicit representation of the source function in the computation of the specific intensities, we compute and store the coefficients of the linear relations J=a+bS between the mean intensity J and the corresponding source function S. The use of iteration factors in the 'local' coefficients of these implicit relations in two 'inward' sweeps of the 2D grid, along with the update of the source function in the other two 'outward' sweeps, leads to a solution four times faster than Jacobi's. Moreover, updating the source function in all four consecutive sweeps of the grid leads to an acceleration by a factor of 6-7 compared to the Jacobi iterative scheme.
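    As a sketch of the implicit coupling described above (standard two-level-atom notation, chosen here; not the authors' exact discretization), with complete frequency redistribution the line source function is

        S = (1 - \varepsilon)\,\bar{J} + \varepsilon B,

    so a stored linear relation \bar{J} = a + b\,S from the formal solution yields the immediate local update

        S = \frac{\varepsilon B + (1 - \varepsilon)\, a}{1 - (1 - \varepsilon)\, b},

    which can be evaluated on the fly during the inward and outward sweeps of the grid.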

  8. A study of model parameters associated with the urban climate using HCMM data. [analysis of St. Louis, Missouri infrared imagery

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Progress in the study of the intensity of the urban heat island is reported. The intensity of the heat island is commonly defined as the temperature difference between the center of the city and the surrounding suburban and rural regions. The intensity is considered as a function of changes in the season and changes in meteorological conditions in order to derive various parameters which may be used in numerical models for urban climate. Twelve case studies were selected and CCT's were ordered. In situ data was obtained from sixteen stations scattered about the city of St. Louis. Upper-air meteorological data were obtained and the water vapor and the temperature data were processed. Atmospheric transmissivities were computed for each of the case studies.

  9. Determining X-ray source intensity and confidence bounds in crowded fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu

    We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
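    A minimal numerical sketch of this kind of Poisson-likelihood aperture photometry (not the Chandra Source Catalog implementation; the aperture fractions f_s, g_s, f_b, g_b, the flat priors, and the grid ranges are assumptions chosen for illustration):

        # Counts in the source aperture ~ Poisson(f_s*s + g_s*b); counts in the
        # background aperture ~ Poisson(f_b*s + g_b*b), so both apertures may
        # contain source flux. The posterior for the source intensity s is
        # obtained on a grid by marginalizing over the background intensity b.
        import numpy as np
        from scipy.stats import poisson

        def source_posterior(n_src, n_bkg, f_s=0.9, g_s=1.0, f_b=0.1, g_b=4.0):
            s_grid = np.linspace(0.0, 3.0 * max(n_src, 1), 400)
            b_grid = np.linspace(0.0, 3.0 * max(n_bkg, 1), 400)
            S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
            # product of the two Poisson likelihoods (flat priors assumed)
            like = (poisson.pmf(n_src, f_s * S + g_s * B) *
                    poisson.pmf(n_bkg, f_b * S + g_b * B))
            post_s = like.sum(axis=1)              # marginalize over background
            ds = s_grid[1] - s_grid[0]
            post_s /= post_s.sum() * ds            # normalize
            return s_grid, post_s

        s, p = source_posterior(n_src=12, n_bkg=30)
        cdf = np.cumsum(p) * (s[1] - s[0])
        lo, hi = s[np.searchsorted(cdf, 0.16)], s[np.searchsorted(cdf, 0.84)]
        print(f"posterior mode ~ {s[p.argmax()]:.2f}, 68% interval ~ [{lo:.2f}, {hi:.2f}]")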

  10. Reinventing patient-centered computing for the twenty-first century.

    PubMed

    Goldberg, H S; Morales, A; Gottlieb, L; Meador, L; Safran, C

    2001-01-01

    Despite evidence over the past decade that patients like and will use patient-centered computing systems in managing their health, patients have remained forgotten stakeholders in advances in clinical computing systems. We present a framework for patient empowerment and the technical realization of that framework in an architecture called CareLink. In an evaluation of the initial deployment of CareLink in the support of neonatal intensive care, we have demonstrated a reduction in the length of stay for very-low birthweight infants, and an improvement in family satisfaction with care delivery. With the ubiquitous adoption of the Internet into the general culture, patient-centered computing provides the opportunity to mend broken health care relationships and reconnect patients to the care delivery process. CareLink itself provides functionality to support both clinical care and research, and provides a living laboratory for the further study of patient-centered computing.

  11. A Longitudinal Investigation of the Effects of Computer Anxiety on Performance in a Computing-Intensive Environment

    ERIC Educational Resources Information Center

    Buche, Mari W.; Davis, Larry R.; Vician, Chelley

    2007-01-01

    Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…

  12. Stress intensity factors of composite orthotropic plates containing periodic buffer strips

    NASA Technical Reports Server (NTRS)

    Delale, F.; Erdogan, F.

    1978-01-01

    The fracture problem of laminated plates which consist of bonded orthotropic layers is studied. The field equations for an elastic orthotropic body are transformed to give the displacement and stress expressions for each layer or strip. The unknown functions in these expressions are found by satisfying the remaining boundary and continuity conditions. A system of singular integral equations is obtained from the mixed boundary conditions. The singular behavior around the crack tip and at the bimaterial interface is studied. The stress intensity factors are computed for various material combinations and various crack geometries. The results are discussed and are compared with those for isotropic materials.

  13. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  14. Fine-grained parallelization of fitness functions in bioinformatics optimization problems: gene selection for cancer classification and biclustering of gene expression data.

    PubMed

    Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo

    2016-08-31

    Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution in a size-limited population, and that step involves higher latencies than other parts of the algorithms, which is why the execution time of these applications mainly depends on the execution time of the fitness function. In addition, fitness functions are usually formulated with floating-point arithmetic. A careful parallelization of these functions using reconfigurable hardware technology will therefore accelerate the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, yielded higher speedups and lower-power computation than conventional microprocessors. The results show better performance using reconfigurable hardware technology instead of conventional microprocessors, in terms of computing time and power consumption, not only because of the parallelization of the arithmetic operations but also thanks to the concurrent fitness evaluation of several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
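    The record above concerns FPGA (reconfigurable hardware) acceleration; the following sketch only illustrates the general idea of evaluating the fitness of many candidate solutions concurrently, here with CPU worker processes, and the fitness function is a hypothetical stand-in rather than the paper's gene-selection score:

        from concurrent.futures import ProcessPoolExecutor
        import random

        def fitness(candidate):
            # hypothetical floating-point fitness: reward selecting few "genes"
            # while staying close to a target selection fraction
            target = 0.5
            return -abs(sum(candidate) / len(candidate) - target) - 0.01 * sum(candidate)

        def evaluate_population(population, workers=4):
            # evaluate all individuals of the population concurrently
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(fitness, population))

        if __name__ == "__main__":
            population = [[random.randint(0, 1) for _ in range(200)] for _ in range(64)]
            scores = evaluate_population(population)
            print("best fitness:", max(scores))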

  15. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    Summary: We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
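    For reference, a sketch of the standard mixed-model representation of a penalized spline that this kind of test builds on (notation chosen here, not taken from the paper):

        y = X\beta + Z b + \varepsilon, \qquad b \sim N(0, \sigma_b^{2} I_K), \quad \varepsilon \sim N(0, \sigma^{2} I_n),

    where X holds the polynomial (e.g., linear) part of the spline and Z the spline basis functions treated as random effects. Testing whether the unspecified function reduces to the polynomial part then corresponds to H_0: \sigma_b^{2} = 0 together with any additional fixed-effect terms beyond the null model, i.e., the joint fixed-effect and variance-component hypothesis described in the abstract.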

  16. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10(8) simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  17. Introduction

    NASA Astrophysics Data System (ADS)

    Zhao, Ben; Garbacki, Paweł; Gkantsidis, Christos; Iamnitchi, Adriana; Voulgaris, Spyros

    After a decade of intensive investigation, peer-to-peer computing has established itself as an accepted research field in the general area of distributed systems. Peer-to-peer computing can be seen as the democratization of computing, overthrowing traditional hierarchical designs favored in client-server systems, largely brought about by last-mile network improvements which have made individual PCs first-class citizens in the network community. Much of the early focus in peer-to-peer systems was on best-effort file sharing applications. In recent years, however, research has focused on peer-to-peer systems that provide operational properties and functionality similar to those shown by more traditional distributed systems. These properties include stronger consistency, reliability, and security guarantees suitable to supporting traditional applications such as databases.

  18. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

    Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, which provides IT supporting infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce based parallel data mining platform developed by CMRI (China Mobile Research Institute) to fit the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with the experimental evaluation and case studies of its applications. The evaluation result demonstrates both the usability and the cost-effectiveness of a Cloud Computing based Business Intelligence system in telecommunication industry applications.

  19. Automating software design system DESTA

    NASA Technical Reports Server (NTRS)

    Lovitsky, Vladimir A.; Pearce, Patricia D.

    1992-01-01

    'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.

  20. Raman spectral evidence of methyl rotation in liquid toluene.

    PubMed

    Kapitán, Josef; Hecht, Lutz; Bour, Petr

    2008-02-21

    In order to rationalize subtle details in the liquid phase toluene Raman backscattering spectra, an analysis was performed based on a quantum-mechanical Hamiltonian operator comprising rotation of the methyl group and the angular dependence of vibrational frequencies and polarizability derivatives. The separation of the methyl torsion from the other vibrational motions appears to be necessary in order to explain relative intensity ratios of several bands and an anomalous broadening of spectral intensity observed at 1440 cm(-1). These results suggest that the CH3 group in the liquid phase rotates almost freely, similarly as in the gaseous phase, and that the molecule consequently exhibits effectively C(2v) point group symmetry. A classical description and an adiabatic separation of the methyl rotation from other molecular motion previously used in peptide models is not applicable to toluene because of a strong coupling with other vibrational motions. Density functional computations, particularly the BPW91 functional, provide reasonable estimates of harmonic frequencies and spectral intensities, as well as qualitatively correct fourth-order anharmonic corrections to the vibrational potential.

  1. Data intensive computing at Sandia.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew T.

    2010-09-01

    Data-intensive computing is parallel computing in which algorithms and software are designed around efficient access and traversal of a data set, where hardware requirements are dictated by data size as much as by desired run times, usually distilling compact results from massive data.

  2. CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction

    PubMed Central

    Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2012-01-01

    Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
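    A minimal sketch of the moment-matching idea described above (not the DISC GPU implementation; the patch half-width and the toy volumes are assumptions for illustration): a CBCT voxel is corrected so that the mean and standard deviation of the intensities in a patch around it match those of the corresponding CT patch.

        import numpy as np

        def correct_voxel(cbct, ct, idx, half=4, eps=1e-6):
            """Return the intensity-corrected value of cbct[idx] using local patches."""
            sl = tuple(slice(max(i - half, 0), i + half + 1) for i in idx)
            p_cbct, p_ct = cbct[sl], ct[sl]
            mu_c, sd_c = p_cbct.mean(), p_cbct.std()
            mu_t, sd_t = p_ct.mean(), p_ct.std()
            # match first and second moments: standardize in CBCT, rescale to CT
            return (cbct[idx] - mu_c) / (sd_c + eps) * sd_t + mu_t

        # toy example on random 3D volumes with a simulated intensity mismatch
        rng = np.random.default_rng(0)
        ct = rng.normal(0.0, 50.0, (32, 32, 32))
        cbct = 0.8 * ct + rng.normal(20.0, 10.0, ct.shape)
        print(correct_voxel(cbct, ct, (16, 16, 16)))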

  3. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    NASA Astrophysics Data System (ADS)

    Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.

    2012-02-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  4. Knowing when to give up: early-rejection stratagems in ligand docking

    NASA Astrophysics Data System (ADS)

    Skone, Gwyn; Voiculescu, Irina; Cameron, Stephen

    2009-10-01

    Virtual screening is an important resource in the drug discovery community, of which protein-ligand docking is a significant part. Much software has been developed for this purpose, largely by biochemists and those in related disciplines, who pursue ever more accurate representations of molecular interactions. The resulting tools, however, are very processor-intensive. This paper describes some initial results from a project to review computational chemistry techniques for docking from a non-chemistry standpoint. An abstract blueprint for protein-ligand docking using empirical scoring functions is suggested, and this is used to discuss potential improvements. By introducing computer science tactics such as lazy function evaluation, dramatic increases to throughput can and have been realized using a real-world docking program. Naturally, they can be extended to any system that approximately corresponds to the architecture outlined.
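    A sketch of the early-rejection idea under stated assumptions (the pairwise scoring term and the poses are hypothetical placeholders, not any particular docking program's scoring function): while accumulating an empirical score over ligand-protein atom pairs, a candidate pose is abandoned as soon as a provable lower bound on its final score can no longer beat the best score found so far.

        import math

        def pair_term(d):
            # hypothetical distance-dependent term: clash penalty plus weak attraction
            return 100.0 if d < 1.0 else -1.0 / d

        def score_pose(distances, best_so_far, term_min=-1.0):
            """Return the pose score, or None if it provably cannot beat best_so_far.
            term_min is a lower bound on any single pair term (here -1/d >= -1)."""
            total = 0.0
            for i, d in enumerate(distances):
                total += pair_term(d)
                remaining = len(distances) - i - 1
                if total + remaining * term_min > best_so_far:   # give up early
                    return None
            return total

        best = math.inf
        poses = [[2.5, 3.0, 0.8, 4.0], [2.0, 2.2, 3.1, 3.5], [0.5, 3.0, 3.2, 4.1]]
        for dists in poses:
            s = score_pose(dists, best)
            if s is not None and s < best:
                best = s
        print("best score:", best)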

  5. An efficient formulation and implementation of the analytic energy gradient method to the single and double excitation coupled-cluster wave function - Application to Cl2O2

    NASA Technical Reports Server (NTRS)

    Rendell, Alistair P.; Lee, Timothy J.

    1991-01-01

    The analytic energy gradient for the single and double excitation coupled-cluster (CCSD) wave function has been reformulated and implemented in a new set of programs. The reformulated set of gradient equations have a smaller computational cost than any previously published. The iterative solution of the linear equations and the construction of the effective density matrices are fully vectorized, being based on matrix multiplications. The new method has been used to investigate the Cl2O2 molecule, which has recently been postulated as an important intermediate in the destruction of ozone in the stratosphere. In addition to reporting computational timings, the CCSD equilibrium geometries, harmonic vibrational frequencies, infrared intensities, and relative energetics of three isomers of Cl2O2 are presented.

  6. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.

  7. Accuracy of intensity and inclinometer output of three activity monitors for identification of sedentary behavior and light-intensity activity.

    PubMed

    Carr, Lucas J; Mahar, Matthew T

    2012-01-01

    Purpose. To examine the accuracy of intensity and inclinometer output of three physical activity monitors during various sedentary and light-intensity activities. Methods. Thirty-six participants wore three physical activity monitors (ActiGraph GT1M, ActiGraph GT3X+, and StepWatch) while completing sedentary (lying, sitting watching television, sitting using a computer, and standing still) and light-intensity (walking 1.0 mph, pedaling 7.0 mph, pedaling 15.0 mph) activities under controlled settings. Accuracy for correctly categorizing intensity was assessed for each monitor and threshold. Accuracy of the GT3X+ inclinometer function (GT3X+Incl) for correctly identifying anatomical position was also assessed. Percentage agreement between direct observation and monitor-recorded time spent in sedentary behavior and light-intensity activity was examined. Results. All monitors using all thresholds accurately identified over 80% of sedentary behaviors and 60% of light-intensity walking time based on intensity output. The StepWatch was the most accurate in detecting pedaling time but unable to detect pedal workload. The GT3X+Incl accurately identified anatomical position during 70% of all activities but demonstrated limitations in discriminating between activities of differing intensity. Conclusions. Our findings suggest that all three monitors accurately measure most sedentary and light-intensity activities, although the choice of monitor should be based on study-specific needs.

  8. Exact Synthesis of Reversible Circuits Using A* Algorithm

    NASA Astrophysics Data System (ADS)

    Datta, K.; Rathi, G. K.; Sengupta, I.; Rahaman, H.

    2015-06-01

    With the growing emphasis on low-power design methodologies, and the result that theoretical zero power dissipation is possible only if computations are information lossless, design and synthesis of reversible logic circuits have become very important in recent years. Reversible logic circuits are also important in the context of quantum computing, where the basic operations are reversible in nature. Several synthesis methodologies for reversible circuits have been reported. Some of these methods are termed as exact, where the motivation is to get the minimum-gate realization for a given reversible function. These methods are computationally very intensive, and are able to synthesize only very small functions. There are other methods based on function transformations or higher-level representation of functions like binary decision diagrams or exclusive-or sum-of-products, that are able to handle much larger circuits without any guarantee of optimality or near-optimality. Design of exact synthesis algorithms is interesting in this context, because they set some kind of benchmarks against which other methods can be compared. This paper proposes an exact synthesis approach based on an iterative deepening version of the A* algorithm using the multiple-control Toffoli gate library. Experimental results are presented with comparisons with other exact and some heuristic based synthesis approaches.

  9. Differential network analysis reveals the genome-wide landscape of estrogen receptor modulation in hormonal cancers

    PubMed Central

    Hsiao, Tzu-Hung; Chiu, Yu-Chiao; Hsu, Pei-Yin; Lu, Tzu-Pin; Lai, Liang-Chuan; Tsai, Mong-Hsun; Huang, Tim H.-M.; Chuang, Eric Y.; Chen, Yidong

    2016-01-01

    Several mutual information (MI)-based algorithms have been developed to identify dynamic gene-gene and function-function interactions governed by key modulators (genes, proteins, etc.). Due to intensive computation, however, these methods rely heavily on prior knowledge and are limited in genome-wide analysis. We present the modulated gene/gene set interaction (MAGIC) analysis to systematically identify genome-wide modulation of interaction networks. Based on a novel statistical test employing conjugate Fisher transformations of correlation coefficients, MAGIC features fast computation and adaption to variations of clinical cohorts. In simulated datasets MAGIC achieved greatly improved computation efficiency and overall superior performance than the MI-based method. We applied MAGIC to construct the estrogen receptor (ER) modulated gene and gene set (representing biological function) interaction networks in breast cancer. Several novel interaction hubs and functional interactions were discovered. ER+ dependent interaction between TGFβ and NFκB was further shown to be associated with patient survival. The findings were verified in independent datasets. Using MAGIC, we also assessed the essential roles of ER modulation in another hormonal cancer, ovarian cancer. Overall, MAGIC is a systematic framework for comprehensively identifying and constructing the modulated interaction networks in a whole-genome landscape. MATLAB implementation of MAGIC is available for academic uses at https://github.com/chiuyc/MAGIC. PMID:26972162
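    As background, a generic sketch of the standard Fisher-transform comparison of correlation coefficients between two strata (e.g., modulator high vs. low), which is the building block that modulation tests of this kind refine; this is not the MAGIC statistic itself, and the simulated data are illustrative only:

        import numpy as np
        from scipy.stats import norm

        def delta_correlation_test(x1, y1, x2, y2):
            """Two-sided p-value for H0: corr(x, y) is equal in the two strata."""
            r1 = np.corrcoef(x1, y1)[0, 1]
            r2 = np.corrcoef(x2, y2)[0, 1]
            z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher z-transform
            se = np.sqrt(1.0 / (len(x1) - 3) + 1.0 / (len(x2) - 3))
            stat = (z1 - z2) / se
            return r1, r2, 2.0 * norm.sf(abs(stat))

        rng = np.random.default_rng(1)
        # stratum 1: correlated gene pair; stratum 2: uncorrelated gene pair
        x1 = rng.normal(size=100); y1 = 0.7 * x1 + rng.normal(scale=0.7, size=100)
        x2 = rng.normal(size=100); y2 = rng.normal(size=100)
        print(delta_correlation_test(x1, y1, x2, y2))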

  10. Fast data preprocessing with Graphics Processing Units for inverse problem solving in light-scattering measurements

    NASA Astrophysics Data System (ADS)

    Derkachov, G.; Jakubczyk, T.; Jakubczyk, D.; Archer, J.; Woźniak, M.

    2017-07-01

    Utilising Compute Unified Device Architecture (CUDA) platform for Graphics Processing Units (GPUs) enables significant reduction of computation time at a moderate cost, by means of parallel computing. In the paper [Jakubczyk et al., Opto-Electron. Rev., 2016] we reported using GPU for Mie scattering inverse problem solving (up to 800-fold speed-up). Here we report the development of two subroutines utilising GPU at data preprocessing stages for the inversion procedure: (i) A subroutine, based on ray tracing, for finding the spherical aberration correction function. (ii) A subroutine performing the conversion of an image to a 1D distribution of light intensity versus azimuth angle (i.e. scattering diagram), fed from a movie-reading CPU subroutine running in parallel. All subroutines are incorporated in PikeReader application, which we make available on GitHub repository. PikeReader returns a sequence of intensity distributions versus a common azimuth angle vector, corresponding to the recorded movie. We obtained an overall ∼400-fold speed-up of calculations at data preprocessing stages using CUDA codes running on a GPU in comparison to single-thread MATLAB-only code running on a CPU.
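    A CPU-side sketch of the image-to-scattering-diagram step described above (the actual PikeReader subroutine runs on the GPU; the beam center, bin count, and random test image here are assumptions): pixels are binned by azimuth angle about the beam center, and the mean intensity per angular bin gives the 1D scattering diagram.

        import numpy as np

        def scattering_diagram(image, center, n_bins=360):
            h, w = image.shape
            y, x = np.mgrid[0:h, 0:w]
            phi = np.arctan2(y - center[0], x - center[1])      # azimuth in (-pi, pi]
            bins = np.linspace(-np.pi, np.pi, n_bins + 1)
            idx = np.clip(np.digitize(phi.ravel(), bins) - 1, 0, n_bins - 1)
            sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins)
            counts = np.bincount(idx, minlength=n_bins)
            angles = 0.5 * (bins[:-1] + bins[1:])
            return angles, sums / np.maximum(counts, 1)         # mean intensity per bin

        img = np.random.default_rng(2).random((480, 640))
        angles, intensity = scattering_diagram(img, center=(240, 320))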

  11. On edge-aware path-based color spatial sampling for Retinex: from Termite Retinex to Light Energy-driven Termite Retinex

    NASA Astrophysics Data System (ADS)

    Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela

    2017-05-01

    Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We revise two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). As the original Retinex implementation, TR and ETR scan the neighborhood of any image pixel by paths and rescale their chromatic intensities by intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique, content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been showed to be particularly effective: its paths are local minima of an energy functional, designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR poorly practicable, here we present a light version of it, named Light Energy-driven TR, and obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.

  12. Evaluation of bed load transport subject to high shear stress fluctuations

    NASA Astrophysics Data System (ADS)

    Cheng, Nian-Sheng; Tang, Hongwu; Zhu, Lijun

    2004-05-01

    Many formulas available in the literature for computing sediment transport rates are often expressed in terms of time mean variables such as time mean bed shear stress or flow velocity, while effects of turbulence intensity, e.g., bed shear stress fluctuation, on sediment transport were seldom considered. This may be due to the fact that turbulence fluctuation is relatively limited in laboratory open-channel flows, which are often used for conducting sediment transport experiments. However, turbulence intensity could be markedly enhanced in practice. This note presents an analytical method to compute bed load transport by including effects of fluctuations in the bed shear stress. The analytical results obtained show that the transport rate enhanced by turbulence can be expressed as a simple function of the relative fluctuation of the bed shear stress. The results are also verified using data that were collected recently from specifically designed laboratory experiments. The present analysis is applicable largely for the condition of a flat bed that is comprised of uniform sand particles subject to unidirectional flows.
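    To illustrate how a correction depending only on the relative fluctuation can arise (using an assumed power-law transport formula for illustration, not necessarily the one used in the note): for q_b \propto \tau^{3/2} and \tau = \bar{\tau}(1+\delta) with E[\delta] = 0 and E[\delta^2] = (\sigma_\tau/\bar{\tau})^2, a second-order Taylor expansion gives

        E[q_b(\tau)] \approx q_b(\bar{\tau}) \left[ 1 + \tfrac{3}{8} \left( \frac{\sigma_\tau}{\bar{\tau}} \right)^{2} \right],

    so the turbulence-enhanced transport rate exceeds the value computed from the time-mean shear stress by a factor that depends only on the relative fluctuation \sigma_\tau/\bar{\tau}.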

  13. An associative capacitive network based on nanoscale complementary resistive switches for memory-intensive computing

    NASA Astrophysics Data System (ADS)

    Kavehei, Omid; Linn, Eike; Nielen, Lutz; Tappertzhofen, Stefan; Skafidas, Efstratios; Valov, Ilia; Waser, Rainer

    2013-05-01

    We report on the implementation of an Associative Capacitive Network (ACN) based on the nondestructive capacitive readout of two Complementary Resistive Switches (2-CRSs). ACNs are capable of performing a fully parallel search for Hamming distances (i.e. similarity) between input and stored templates. Unlike conventional associative memories where charge retention is a key function and hence, they require frequent refresh cycles, in ACNs, information is retained in a nonvolatile resistive state and normal tasks are carried out through capacitive coupling between input and output nodes. Each device consists of two CRS cells and no selective element is needed, therefore, CMOS circuitry is only required in the periphery, for addressing and read-out. Highly parallel processing, nonvolatility, wide interconnectivity and low-energy consumption are significant advantages of ACNs over conventional and emerging associative memories. These characteristics make ACNs one of the promising candidates for applications in memory-intensive and cognitive computing, switches and routers as binary and ternary Content Addressable Memories (CAMs) and intelligent data processing.
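    A software sketch of the associative search the ACN performs physically: a fully parallel comparison of an input word against stored templates by Hamming distance (here the "parallelism" is only a vectorized NumPy operation; the templates and query are illustrative).

        import numpy as np

        templates = np.array([[0, 1, 1, 0, 1, 0, 0, 1],
                              [1, 1, 0, 0, 1, 1, 0, 0],
                              [0, 0, 0, 1, 1, 0, 1, 1]], dtype=np.uint8)
        query = np.array([0, 1, 1, 0, 1, 1, 0, 1], dtype=np.uint8)

        # Hamming distance of the query to every stored template at once
        hamming = np.count_nonzero(templates ^ query, axis=1)
        print("distances:", hamming, "best match:", int(hamming.argmin()))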

  14. Supercomputer applications in molecular modeling.

    PubMed

    Gund, T M

    1988-01-01

    An overview of the functions performed by molecular modeling is given. Molecular modeling techniques benefiting from supercomputing are described, namely, conformation search, deriving bioactive conformations, pharmacophoric pattern searching, receptor mapping, and electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding is also examined. The current status of supercomputing and supercomputer resources are discussed.

  15. Displays. [three dimensional analog visual system for aiding pilot space perception

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An experimental investigation made to determine the depth cue of a head movement perspective and image intensity as a function of depth is summarized. The experiment was based on the use of a hybrid computer generated contact analog visual display in which various perceptual depth cues are included on a two dimensional CRT screen. The system's purpose was to impart information, in an integrated and visually compelling fashion, about the vehicle's position and orientation in space. Results show head movement gives a 40% improvement in depth discrimination when the display is between 40 and 100 cm from the subject; intensity variation resulted in as much improvement as head movement.

  16. The MIT/OSO 7 catalog of X-ray sources - Intensities, spectra, and long-term variability

    NASA Technical Reports Server (NTRS)

    Markert, T. H.; Laird, F. N.; Clark, G. W.; Hearn, D. R.; Sprott, G. F.; Li, F. K.; Bradt, H. V.; Lewin, W. H. G.; Schnopper, H. W.; Winkler, P. F.

    1979-01-01

    This paper is a summary of the observations of the cosmic X-ray sky performed by the MIT 1-40-keV X-ray detectors on OSO 7 between October 1971 and May 1973. Specifically, mean intensities or upper limits of all third Uhuru or OSO 7 cataloged sources (185 sources) in the 3-10-keV range are computed. For those sources for which a statistically significant (greater than 20) intensity was found in the 3-10-keV band (138 sources), further intensity determinations were made in the 1-15-keV, 1-6-keV, and 15-40-keV energy bands. Graphs and other simple techniques are provided to aid the user in converting the observed counting rates to convenient units and in determining spectral parameters. Long-term light curves (counting rates in one or more energy bands as a function of time) are plotted for 86 of the brighter sources.

  17. Localization of intense electromagnetic waves in plasmas.

    PubMed

    Shukla, Padma Kant; Eliasson, Bengt

    2008-05-28

    We present theoretical and numerical studies of the interaction between relativistically intense laser light and a two-temperature plasma consisting of one relativistically hot and one cold component of electrons. Such plasmas are frequently encountered in intense laser-plasma experiments where collisionless heating via Raman instabilities leads to a high-energetic tail in the electron distribution function. The electromagnetic waves (EMWs) are governed by the Maxwell equations, and the plasma is governed by the relativistic Vlasov and hydrodynamic equations. Owing to the interaction between the laser light and the plasma, we can have trapping of electrons in the intense wakefield of the laser pulse and the formation of relativistic electron holes (REHs) in which laser light is trapped. Such electron holes are characterized by a non-Maxwellian distribution of electrons where we have trapped and free electron populations. We present a model for the interaction between laser light and REHs, and computer simulations that show the stability and dynamics of the coupled electron hole and EMW envelopes.

  18. Is the perception of 3D shape from shading based on assumed reflectance and illumination?

    PubMed

    Todd, James T; Egan, Eric J L; Phillips, Flip

    2014-01-01

    The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination.
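    In symbols, the three image-shading types compared above can be summarized as follows (notation chosen here as a paraphrase of the stimulus construction):

        Lambertian BRDF, homogeneous illumination:   I(\mathbf{p}) = \rho \, \max\big(0, \mathbf{n}(\mathbf{p}) \cdot \mathbf{l}\big)              (orientation only),
        linear intensity gradient texture:           I(\mathbf{p}) = a + \mathbf{g} \cdot \mathbf{p}                                               (position only),
        Lambertian BRDF, inhomogeneous illumination: I(\mathbf{p}) = \rho \, \max\big(0, \mathbf{n}(\mathbf{p}) \cdot \mathbf{l}(\mathbf{p})\big)  (both),

    where \mathbf{n} is the surface normal, \mathbf{l} the illumination direction, \rho the albedo, and \mathbf{p} the surface position.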

  19. Is the perception of 3D shape from shading based on assumed reflectance and illumination?

    PubMed Central

    Todd, James T.; Egan, Eric J. L.; Phillips, Flip

    2014-01-01

    The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination. PMID:26034561

  20. Energy 101: Energy Efficient Data Centers

    ScienceCinema

    None

    2018-04-16

    Data centers provide mission-critical computing functions vital to the daily operation of top U.S. economic, scientific, and technological organizations. These data centers consume large amounts of energy to run and maintain their computer systems, servers, and associated high-performance components—up to 3% of all U.S. electricity powers data centers. And as more information comes online, data centers will consume even more energy. Data centers can become more energy efficient by incorporating features like power-saving "stand-by" modes, energy monitoring software, and efficient cooling systems instead of energy-intensive air conditioners. These and other efficiency improvements to data centers can produce significant energy savings, reduce the load on the electric grid, and help protect the nation by increasing the reliability of critical computer operations.

  1. The Unparalleled Systems Engineering of MSL's Backup Entry, Descent, and Landing System: Second Chance

    NASA Technical Reports Server (NTRS)

    Roumeliotis, Chris; Grinblat, Jonathan; Reeves, Glenn

    2013-01-01

    Second Chance (SECC) was a bare bones version of Mars Science Laboratory's (MSL) Entry Descent & Landing (EDL) flight software that ran on Curiosity's backup computer, which could have taken over swiftly in the event of a reset of Curiosity's prime computer, in order to land her safely on Mars. Without SECC, a reset of Curiosity's prime computer would have led to catastrophic mission failure. Even though a reset of the prime computer never occurred, SECC had the important responsibility as EDL's guardian angel, and this responsibility would not have seen such success without unparalleled systems engineering. This paper will focus on the systems engineering behind SECC: covering a brief overview of SECC's design, the intense schedule to use SECC as a backup system, the verification and validation of the system's "Do No Harm" mandate, the system's overall functional performance, and finally, its use on the fateful day of August 5th, 2012.

  2. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) identical MPI implementation is required in both systems, and; (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  3. Accelerated computer generated holography using sparse bases in the STFT domain.

    PubMed

    Blinder, David; Schelkens, Peter

    2018-01-22

    Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.

  4. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.

  5. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistics analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
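    A tiny example of the "sequence analysis" category mentioned above (illustrative only; real pipelines use dedicated alignment tools and libraries): percent identity of two pre-aligned DNA sequences and the GC content of one of them.

        def percent_identity(a, b):
            matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
            aligned = sum(1 for x, y in zip(a, b) if x != "-" and y != "-")
            return 100.0 * matches / aligned if aligned else 0.0

        def gc_content(seq):
            seq = seq.replace("-", "")
            return 100.0 * sum(seq.count(c) for c in "GC") / len(seq)

        s1 = "ATGCC-GTTAGC"
        s2 = "ATGCCAGTTAAC"
        print(percent_identity(s1, s2), gc_content(s1))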

  6. Fermilab computing at the Intensity Frontier

    DOE PAGES

    Group, Craig; Fuess, S.; Gutsche, O.; ...

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  7. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves large numbers of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for their invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be tested by solving QUBO instances defined on the Chimera graph of the quantum computer.
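    The target form the abstract maps to can be written compactly (a sketch; the construction of the coefficients from the assimilation cost function is only indicated):

        \min_{x \in \{0,1\}^{n}} \; x^{\mathsf{T}} Q\, x,

    where each continuous assimilation variable is first replaced by a finite binary expansion v \approx \sum_k c_k x_k with fixed coefficients c_k (the role played here by the binary wavelet transform), so that the quadratic analysis cost function becomes a quadratic polynomial in binary variables whose coefficients define the matrix Q.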

  8. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
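    A sketch of the non-uniform spot sampling idea (an illustration only, not the paper's APS library; the function name, the per-spot floor, and the example intensities are assumptions): the number of Monte Carlo histories allocated to each pencil-beam spot is made proportional to the spot's current intensity, so high-weight spots are simulated more accurately as the optimization proceeds.

        import numpy as np

        def allocate_histories(spot_intensities, total_histories, floor=100):
            w = np.maximum(np.asarray(spot_intensities, dtype=float), 0.0)
            if w.sum() == 0:
                return np.full(len(w), total_histories // len(w), dtype=int)
            n = np.floor(total_histories * w / w.sum()).astype(int)
            return np.maximum(n, floor)     # keep a minimum number of histories per spot

        intensities = np.array([0.1, 2.0, 5.0, 0.0, 1.5])
        print(allocate_histories(intensities, total_histories=1_000_000))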

  9. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  10. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  11. Intensity Conserving Spectral Fitting

    NASA Technical Reports Server (NTRS)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
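
    The following is a minimal sketch, in the spirit of the iterative correction described above, of a spline fit whose knot values are nudged until the bin-averaged model reproduces the observed bin-averaged intensities. The function name, iteration count, and oversampling factor are assumptions; this is not the published ICSI routine itself.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def intensity_conserving_spline(bin_centers, bin_means, bin_width,
                                    n_iter=10, oversample=20):
        """Iteratively adjust knot values so the bin-averaged spline matches the
        observed bin-averaged intensities (illustrative only)."""
        bin_means = np.asarray(bin_means, dtype=float)
        y = bin_means.copy()                       # current knot values
        fine = np.linspace(-0.5, 0.5, oversample) * bin_width
        for _ in range(n_iter):
            cs = CubicSpline(bin_centers, y)
            # average the spline over each wavelength bin
            model_means = np.array([cs(c + fine).mean() for c in bin_centers])
            # push the knots by the residual so the bin averages converge to the data
            y = y + (bin_means - model_means)
        return CubicSpline(bin_centers, y)

    # Example: a narrow Gaussian emission line sampled with coarse bins
    centers = np.linspace(-2, 2, 9)
    width = centers[1] - centers[0]
    true = lambda x: np.exp(-x**2 / (2 * 0.3**2))
    data = np.array([true(c + np.linspace(-0.5, 0.5, 200) * width).mean() for c in centers])
    profile = intensity_conserving_spline(centers, data, width)
    ```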

  12. Maximum a posteriori classification of multifrequency, multilook, synthetic aperture radar intensity data

    NASA Technical Reports Server (NTRS)

    Rignot, E.; Chellappa, R.

    1993-01-01

    We present a maximum a posteriori (MAP) classifier for classifying multifrequency, multilook, single polarization SAR intensity data into regions or ensembles of pixels of homogeneous and similar radar backscatter characteristics. A model for the prior joint distribution of the multifrequency SAR intensity data is combined with a Markov random field for representing the interactions between region labels to obtain an expression for the posterior distribution of the region labels given the multifrequency SAR observations. The maximization of the posterior distribution yields Bayes's optimum region labeling or classification of the SAR data or its MAP estimate. The performance of the MAP classifier is evaluated by using computer-simulated multilook SAR intensity data as a function of the parameters in the classification process. Multilook SAR intensity data are shown to yield higher classification accuracies than one-look SAR complex amplitude data. The MAP classifier is extended to the case in which the radar backscatter from the remotely sensed surface varies within the SAR image because of incidence angle effects. The results obtained illustrate the practicality of the method for combining SAR intensity observations acquired at two different frequencies and for improving classification accuracy of SAR data.
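
    As a rough illustration of combining a per-class likelihood with a Markov random field prior on the labels, the sketch below uses iterated conditional modes (a common surrogate for full posterior maximization) with Gaussian class likelihoods and a Potts smoothness term. The class statistics, the Potts weight, and the synthetic image are assumptions, not the paper's SAR model.

    ```python
    import numpy as np

    def icm_map_labels(img, class_means, class_vars, beta=1.5, n_iter=10):
        """Illustrative MAP labelling: Gaussian data term per class plus a Potts
        prior over 4-neighbours, maximised approximately by ICM sweeps."""
        H, W = img.shape
        K = len(class_means)
        mu = np.asarray(class_means)[None, None, :]
        var = np.asarray(class_vars)[None, None, :]
        # data term: negative log-likelihood for every pixel and class
        data = 0.5 * np.log(2 * np.pi * var) + (img[..., None] - mu) ** 2 / (2 * var)
        labels = data.argmin(axis=2)                  # maximum-likelihood start
        for _ in range(n_iter):
            for y in range(H):
                for x in range(W):
                    nbrs = [labels[yy, xx] for yy, xx in
                            ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= yy < H and 0 <= xx < W]
                    # Potts prior: penalise disagreement with neighbours
                    prior = beta * np.array([sum(k != n for n in nbrs) for k in range(K)])
                    labels[y, x] = (data[y, x] + prior).argmin()
        return labels

    # Example on a synthetic two-class image
    rng = np.random.default_rng(0)
    img = rng.normal(1.0, 0.3, (32, 32)); img[:, 16:] += 1.5
    seg = icm_map_labels(img, class_means=[1.0, 2.5], class_vars=[0.09, 0.09])
    ```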

  13. Do Hassles and Uplifts Change with Age? Longitudinal Findings from the VA Normative Aging Study

    PubMed Central

    Aldwin, Carolyn M.; Jeong, Yu-Jin; Igarashi, Heidi; Spiro, Avron

    2014-01-01

    To examine emotion regulation in later life, we contrasted the modified hedonic treadmill theory with developmental theories, using hassles and uplifts to assess emotion regulation in context. The sample was 1,315 men from the VA Normative Aging Study aged 53 to 85 years, who completed 3,894 observations between 1989 and 2004. We computed three scores for both hassles and uplifts: intensity (ratings reflecting appraisal processes), exposure (count), and summary (total) scores. Growth curves over age showed marked differences in trajectory patterns for intensity and exposure scores. Although exposure to hassles and uplifts decreased in later life, intensity scores increased. Growth based modelling showed individual differences in patterns of hassles and uplifts intensity and exposure, with relative stability in uplifts intensity, normative non-linear changes in hassles intensity, and complex patterns of individual differences in exposure for both hassles and uplifts. Analyses with the summary scores showed that emotion regulation in later life is a function of both developmental change and contextual exposure, with different patterns emerging for hassles and uplifts. Thus, support was found for both hedonic treadmill and developmental change theories, reflecting different aspects of emotion regulation in late life. PMID:24660796

  14. Do hassles and uplifts change with age? Longitudinal findings from the VA normative aging study.

    PubMed

    Aldwin, Carolyn M; Jeong, Yu-Jin; Igarashi, Heidi; Spiro, Avron

    2014-03-01

    To examine emotion regulation in later life, we contrasted the modified hedonic treadmill theory with developmental theories, using hassles and uplifts to assess emotion regulation in context. The sample was 1,315 men from the VA Normative Aging Study aged 53 to 85 years, who completed 3,894 observations between 1989 and 2004. We computed 3 scores for both hassles and uplifts: intensity (ratings reflecting appraisal processes), exposure (count), and summary (total) scores. Growth curves over age showed marked differences in trajectory patterns for intensity and exposure scores. Although exposure to hassles and uplifts decreased in later life, intensity scores increased. Group-based modeling showed individual differences in patterns of hassles and uplifts intensity and exposure, with relative stability in uplifts intensity, normative nonlinear changes in hassles intensity, and complex patterns of individual differences in exposure for both hassles and uplifts. Analyses with the summary scores showed that emotion regulation in later life is a function of both developmental change and contextual exposure, with different patterns emerging for hassles and uplifts. Thus, support was found for both hedonic treadmill and developmental change theories, reflecting different aspects of emotion regulation in late life. (c) 2014 APA, all rights reserved.

  15. Lithographic image simulation for the 21st century with 19th-century tools

    NASA Astrophysics Data System (ADS)

    Gordon, Ronald L.; Rosenbluth, Alan E.

    2004-01-01

    Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well-understood and need little further investigation. The imaging process in optical lithography is modeled as a partially-coherent, Kohler illumination system. As Hopkins has shown, we can separate the computation into 2 pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and the other that only needs the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of numbers called Transmission Cross-Coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The down side, however, is that the number of these expensive double integrals that must be performed increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work will review the current methodologies used to simulate the intensity distribution in air above the resist and address the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically, or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
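
    For readers unfamiliar with the Hopkins formulation, the brute-force form of the transmission cross-coefficient table that the paper seeks to avoid can be sketched as a weighted double sum over the effective source. The one-dimensional setup, the top-hat source, and the ideal pupil below are simplifying assumptions for illustration only.

    ```python
    import numpy as np

    def transmission_cross_coefficients(source, pupil, freqs):
        """Illustrative brute-force Hopkins TCC table:
        TCC(f1, f2) = sum_f J(f) * P(f + f1) * conj(P(f + f2)),
        with J the normalised effective source and P the pupil function.
        This is the 'expensive double integral' that analytic evaluation replaces."""
        n = len(freqs)
        tcc = np.zeros((n, n), dtype=complex)
        norm = source(freqs).sum()
        for i, f1 in enumerate(freqs):
            for j, f2 in enumerate(freqs):
                tcc[i, j] = np.sum(source(freqs) * pupil(freqs + f1)
                                   * np.conj(pupil(freqs + f2))) / norm
        return tcc

    # Example: 1-D top-hat source (partial coherence sigma = 0.5) and ideal pupil
    sigma, NA = 0.5, 1.0
    source = lambda f: (np.abs(f) <= sigma * NA).astype(float)
    pupil = lambda f: (np.abs(f) <= NA).astype(complex)
    freqs = np.linspace(-2, 2, 81)
    tcc = transmission_cross_coefficients(source, pupil, freqs)
    print(tcc.shape, tcc[40, 40])   # TCC(0, 0) is ~1 after normalisation
    ```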

  16. Utilizing lung sounds analysis for the evaluation of acute asthma in small children.

    PubMed

    Tinkelman, D G; Lutz, C; Conner, B

    1991-09-01

    One of the most difficult aspects of management of acute asthma in the small child is the clinician's inability to quantitate the response or lack of response to bronchodilator agents because of the inability of a child this age to perform objective lung measurements in the acute state. The present study was designed to evaluate bronchodilator responsiveness in children between 2 and 6 years of age with wheezing by means of a computerized lung sound analysis, computer digitized airway phonopneumonography. Children between ages 2 and 6 who were experiencing acute exacerbations of asthma were included in this study population. The 43 children were evaluated by physical examination, pulmonary function testing, if possible, by use of (spirometry or peak flow meter) and transmission of lung sounds to a computer using an electronic stethoscope to obtain a phonopneumograph with sound intensity level determinations during tidal breathing. A control group of 20 known asthmatic patients between the ages of 8 and 52 years who also presented to the office with acute asthma were evaluated similarly. In each of these individuals, a physical examination was followed by complete spirometry as well as computer digitized airway phonopneumonography recordings. Following initial measurements, all patients were treated with nebulized albuterol (0.25 mL in 2 mL of saline). Five minutes after completion of the nebulization all patients were reexamined and repeat pulmonary function tests were performed followed by CDAP recordings. In the study group of children, the mean pretreatment sound intensity level was 1,694 (range 557 to 4,950 SD +/- 745).(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian, three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  18. GeNemo: a search engine for web-based functional genomic data.

    PubMed

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    NASA Astrophysics Data System (ADS)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large scientific facilities are controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are generically called Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework, the monitoring of the computer nodes, the communications network, and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.

  20. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
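
    The core idea of shading against a per-voxel intensity pdf rather than a single down-sampled value can be sketched as a dot product over intensity bins, as below. The array shapes, the grey-ramp transfer function, and the function name are assumptions for illustration and do not reproduce the paper's sparse 4D Gaussian-mixture representation.

    ```python
    import numpy as np

    def apply_transfer_function(pdf_bins, bin_centers, transfer_fn):
        """Shade the expectation of the transfer function under each voxel
        neighbourhood's intensity pdf (illustrative core of pdf-based rendering)."""
        tf = transfer_fn(bin_centers)          # RGBA per intensity bin, shape (B, 4)
        return pdf_bins @ tf                   # (V, B) @ (B, 4) -> (V, 4)

    # Example: 3 coarse voxels, 8 intensity bins, a simple grey-ramp transfer function
    bin_centers = np.linspace(0, 1, 8)
    pdf_bins = np.array([
        [1, 0, 0, 0, 0, 0, 0, 0],              # uniformly low-intensity neighbourhood
        [0, 0, 0, 0, 0, 0, 0, 1],              # uniformly high intensity
        [0.5, 0, 0, 0, 0, 0, 0, 0.5],          # mixed neighbourhood
    ], dtype=float)
    ramp = lambda x: np.stack([x, x, x, x], axis=-1)   # grey ramp, opacity = intensity
    print(apply_transfer_function(pdf_bins, bin_centers, ramp))
    ```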

  1. Hierarchical MFMO Circuit Modules for an Energy-Efficient SDR DBF

    NASA Astrophysics Data System (ADS)

    Mar, Jeich; Kuo, Chi-Cheng; Wu, Shin-Ru; Lin, You-Rong

    The hierarchical multi-function matrix operation (MFMO) circuit modules are designed using the coordinate rotation digital computer (CORDIC) algorithm to realize the intensive computation of matrix operations. The paper emphasizes that the designed hierarchical MFMO circuit modules can be used to develop a power-efficient software-defined radio (SDR) digital beamformer (DBF). The formulas of the processing time for the scalable MFMO circuit modules implemented in a field-programmable gate array (FPGA) are derived to allocate the proper logic resources for the hardware reconfiguration. The hierarchical MFMO circuit modules are scalable to the changing number of array branches employed for the SDR DBF to achieve the purpose of power saving. The efficient reuse of the common MFMO circuit modules in the SDR DBF can also lead to energy reduction. Finally, the power dissipation and reconfiguration function in the different modes of the SDR DBF are observed from the experimental results.
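
    For reference, a minimal rotation-mode CORDIC iteration (shift-and-add micro-rotations) is sketched below in floating point; the iteration count and the function name are illustrative, and a hardware module would of course use fixed-point shifts rather than multiplications by powers of two.

    ```python
    import math

    def cordic_rotate(x, y, angle, n_iters=24):
        """Rotate the vector (x, y) by `angle` (radians, |angle| < pi/2) using
        CORDIC micro-rotations, the primitive behind the MFMO matrix modules."""
        # Pre-computed gain of n_iters micro-rotations
        K = 1.0
        for i in range(n_iters):
            K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
        z = angle
        for i in range(n_iters):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
            z -= d * math.atan(2.0 ** (-i))
        return x * K, y * K

    # Example: rotate (1, 0) by 30 degrees -> approximately (cos 30°, sin 30°)
    print(cordic_rotate(1.0, 0.0, math.radians(30)))   # ~(0.8660, 0.5000)
    ```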

  2. The Role of Super-Atom Molecular Orbitals in Doped Fullerenes in a Femtosecond Intense Laser Field

    DOE PAGES

    Xiong, Hui; Mignolet, Benoit; Fang, Li; ...

    2017-03-09

    The interaction of gas phase endohedral fullerene Ho3N@C80 with intense (0.1–5 × 10^14 W/cm^2), short (30 fs), 800 nm laser pulses was investigated. The power law dependence of Ho3N@C80^q+, q = 1–2, was found to be different from that of C60. Time-dependent density functional theory computations revealed different light-induced ionization mechanisms. Unlike in C60, in doped fullerenes the breaking of the cage spherical symmetry makes super-atom molecular orbital (SAMO) states optically active. Theoretical calculations suggest that the fast ionization of the SAMO states in Ho3N@C80 is responsible for the n = 3 power law for singly charged parent molecules at intensities lower than 1.2 × 10^14 W/cm^2.

  3. Open-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer

    NASA Technical Reports Server (NTRS)

    Koppen, Daniel M.

    1997-01-01

    During the third quarter of 1996, the Closed-Loop Systems Laboratory was established at the NASA Langley Research Center (LaRC) to study the effects of High Intensity Radiated Fields on complex avionic systems and control system components. This new facility provided a link and expanded upon the existing capabilities of the High Intensity Radiated Fields Laboratory at LaRC that were constructed and certified during 1995-96. The scope of the Closed-Loop Systems Laboratory is to place highly integrated avionics instrumentation into a high intensity radiated field environment, interface the avionics to a real-time flight simulation that incorporates aircraft dynamics, engines, sensors, actuators and atmospheric turbulence, and collect, analyze, and model aircraft performance. This paper describes the layout and functionality of the Closed-Loop Systems Laboratory, and the open-loop calibration experiments that led up to the commencement of closed-loop real-time flight experiments.

  4. TD-CI simulation of the electronic optical response of molecules in intense fields II: comparison of DFT functionals and EOM-CCSD.

    PubMed

    Sonk, Jason A; Schlegel, H Bernhard

    2011-10-27

    Time-dependent configuration interaction (TD-CI) simulations can be used to simulate molecules in intense laser fields. TD-CI calculations use the excitation energies and transition dipoles calculated in the absence of a field. The EOM-CCSD method provides a good estimate of the field-free excited states but is rather expensive. Linear-response time-dependent density functional theory (TD-DFT) is an inexpensive alternative for computing the field-free excitation energies and transition dipoles needed for TD-CI simulations. Linear-response TD-DFT calculations were carried out with standard functionals (B3LYP, BH&HLYP, HSE2PBE (HSE03), BLYP, PBE, PW91, and TPSS) and long-range corrected functionals (LC-ωPBE, ωB97XD, CAM-B3LYP, LC-BLYP, LC-PBE, LC-PW91, and LC-TPSS). These calculations used the 6-31G(d,p) basis set augmented with three sets of diffuse sp functions on each heavy atom. Butadiene was employed as a test case, and 500 excited states were calculated with each functional. Standard functionals yield average excitation energies that are significantly lower than the EOM-CC, while long-range corrected functionals tend to produce average excitation energies slightly higher. Long-range corrected functionals also yield transition dipoles that are somewhat larger than EOM-CC on average. The TD-CI simulations were carried out with a three-cycle Gaussian pulse (ω = 0.06 au, 760 nm) with intensities up to 1.26 × 10^14 W cm^-2 directed along the vector connecting the end carbons. The nonlinear response as indicated by the residual populations of the excited states after the pulse is far too large with standard functionals, primarily because the excitation energies are too low. The LC-ωPBE, LC-PBE, LC-PW91, and LC-TPSS long-range corrected functionals produce responses comparable to EOM-CC.
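
    The general mechanics of a TD-CI propagation (field-free energies and transition dipoles assembled into a time-dependent Hamiltonian that drives the CI coefficients) can be sketched as below. The three-state model, the pulse parameters, and the short-time matrix-exponential propagator are assumptions for illustration; they are not the basis sets or pulse used in the paper.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def td_ci_propagate(energies, dipoles, field, dt, c0=None):
        """Toy TD-CI propagation: step the CI coefficients with the short-time
        propagator exp(-i H(t) dt), where H(t) = diag(E) - mu * E(t) in atomic units."""
        n = len(energies)
        H0 = np.diag(energies).astype(complex)
        c = np.zeros(n, dtype=complex) if c0 is None else np.array(c0, dtype=complex)
        if c0 is None:
            c[0] = 1.0                       # start in the ground state
        for E_t in field:
            H = H0 - dipoles * E_t
            c = expm(-1j * H * dt) @ c
        return c

    # Example: 3-state model driven by a Gaussian-enveloped pulse at w = 0.06 a.u.
    energies = np.array([0.0, 0.22, 0.31])
    dipoles = np.array([[0.0, 1.2, 0.4], [1.2, 0.0, 0.0], [0.4, 0.0, 0.0]])
    dt, w, E0 = 0.05, 0.06, 0.05
    t = np.arange(0.0, 3 * 2 * np.pi / w, dt)
    pulse = E0 * np.exp(-((t - t.mean()) / (np.ptp(t) / 4)) ** 2) * np.sin(w * t)
    c_final = td_ci_propagate(energies, dipoles, pulse, dt)
    print(np.abs(c_final) ** 2)              # residual state populations after the pulse
    ```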

  5. Probability mapping of scarred myocardium using texture and intensity features in CMR images

    PubMed Central

    2013-01-01

    Background The myocardium exhibits heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using Texture and Intensity features to describe heterogeneous nature of the scarred myocardium in Cardiac Magnetic Resonance (CMR) images after Myocardial Infarction (MI). Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features. One is based on the mean intensity of pixel and the other on underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean intensity of pixel and the underlying texture information are presented. We hypothesize that the probability mapping of myocardium offers alternate visualization, possibly showing the details with physiological significance difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define different cardiac segments which offer a way to identify areas in the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280
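
    To illustrate the kind of Bayes-rule probability map described above, the sketch below combines independent Gaussian likelihoods for a pixel's intensity and a texture measure into a posterior scar probability. The feature statistics, the prior, and the synthetic data are assumptions for illustration, not the paper's trained model.

    ```python
    import numpy as np

    def scar_probability_map(intensity, texture, scar_stats, normal_stats, prior_scar=0.5):
        """P(scar | features) from independent Gaussian likelihoods for intensity
        and texture; the stats dicts hold assumed (mean, variance) pairs."""
        def gauss(x, mu, var):
            return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

        like_scar = gauss(intensity, *scar_stats["intensity"]) * gauss(texture, *scar_stats["texture"])
        like_norm = gauss(intensity, *normal_stats["intensity"]) * gauss(texture, *normal_stats["texture"])
        post = prior_scar * like_scar
        return post / (post + (1 - prior_scar) * like_norm + 1e-12)

    # Example with synthetic LGE-like intensities and a local-gradient texture map
    rng = np.random.default_rng(1)
    intensity = rng.normal(0.3, 0.1, (64, 64)); intensity[20:40, 20:40] += 0.5
    texture = np.abs(np.gradient(intensity)[0])
    scar_stats = {"intensity": (0.8, 0.02), "texture": (0.05, 0.001)}
    normal_stats = {"intensity": (0.3, 0.02), "texture": (0.02, 0.001)}
    pmap = scar_probability_map(intensity, texture, scar_stats, normal_stats)
    ```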

  6. A level set method for cupping artifact correction in cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Shipeng; Li, Haibo; Ge, Qi

    2015-08-15

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.

  7. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    NASA Astrophysics Data System (ADS)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  8. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  9. Hypervelocity Impact Test Fragment Modeling: Modifications to the Fragment Rotation Analysis and Lightcurve Code

    NASA Technical Reports Server (NTRS)

    Gouge, Michael F.

    2011-01-01

    Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are currently being put to new use using emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities as a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.

  10. STATISTICAL PROPERTIES OF SOLAR ACTIVE REGIONS OBTAINED FROM AN AUTOMATIC DETECTION SYSTEM AND THE COMPUTATIONAL BIASES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Jie; Wang Yuming; Liu Yang, E-mail: jzhang7@gmu.ed

    We have developed a computational software system to automate the process of identifying solar active regions (ARs) and quantifying their physical properties based on high-resolution synoptic magnetograms constructed from Michelson Doppler Imager (MDI; on board the SOHO spacecraft) images from 1996 to 2008. The system, based on morphological analysis and intensity thresholding, has four functional modules: (1) intensity segmentation to obtain kernel pixels, (2) a morphological opening operation to erase small kernels, which effectively removes ephemeral regions and magnetic fragments in decayed ARs, (3) region growing to extend kernels to full AR size, and (4) a morphological closing operation to merge/group regions with a small spatial gap. We calculate the basic physical parameters of the 1730 ARs identified by the automated system. The mean and maximum magnetic flux of individual ARs are 1.67 × 10^22 Mx and 1.97 × 10^23 Mx, while those per Carrington rotation are 1.83 × 10^23 Mx and 6.96 × 10^23 Mx, respectively. The frequency distributions of ARs with respect to both area size and magnetic flux follow a log-normal function. However, when we decrease the detection thresholds and thus increase the number of detected ARs, the frequency distribution largely follows a power-law function. We also find that the equatorward drifting motion of the AR bands with solar cycle can be described by a linear function superposed with intermittent reverse driftings. The average drifting speed over one solar cycle is 1.83° ± 0.04° yr^-1 or 0.708 ± 0.015 m s^-1.
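
    The four-step pipeline above maps naturally onto standard image-morphology primitives. The sketch below shows one possible arrangement using scipy.ndimage; the threshold values, structuring-element sizes, and function name are assumptions for illustration, not the paper's tuned parameters.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_active_regions(magnetogram, kernel_thresh=100.0, grow_thresh=30.0,
                              min_kernel_px=20, close_px=5):
        """Illustrative AR detection: (1) threshold kernel pixels, (2) opening to
        erase small kernels, (3) region growing down to a lower threshold,
        (4) closing to merge regions separated by small gaps."""
        absB = np.abs(magnetogram)
        kernels = absB > kernel_thresh                                     # step 1
        kernels = ndimage.binary_opening(kernels,
                                         structure=np.ones((3, 3)))        # step 2
        lbl, n = ndimage.label(kernels)
        sizes = ndimage.sum(kernels, lbl, index=range(1, n + 1))
        keep = np.isin(lbl, 1 + np.flatnonzero(np.asarray(sizes) >= min_kernel_px))
        grown = ndimage.binary_dilation(keep, iterations=-1,
                                        mask=absB > grow_thresh)           # step 3
        merged = ndimage.binary_closing(grown,
                                        structure=np.ones((close_px, close_px)))  # step 4
        regions, n_regions = ndimage.label(merged)
        return regions, n_regions

    # Example: synthetic field with one strong bipolar patch
    rng = np.random.default_rng(2)
    B = rng.normal(0, 10, (256, 256)); B[100:130, 100:140] += 300
    labels, n = detect_active_regions(B)
    ```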

  11. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    …denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. … structure analysis often involves computation-intensive computer vision tasks, such as feature extraction and geometric modeling. … First, we extract a set of features from each block. Second, we compute the distance between these two sets of features. In conventional motion …

  12. From cosmos to connectomes: the evolution of data-intensive science.

    PubMed

    Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S

    2014-09-17

    The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Micro-CT of rodents: state-of-the-art and future perspectives

    PubMed Central

    Clark, D. P.; Badea, C. T.

    2014-01-01

    Micron-scale computed tomography (micro-CT) is an essential tool for phenotyping and for elucidating diseases and their therapies. This work is focused on preclinical micro-CT imaging, reviewing relevant principles, technologies, and applications. Commonly, micro-CT provides high-resolution anatomic information, either on its own or in conjunction with lower-resolution functional imaging modalities such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). More recently, however, advanced applications of micro-CT produce functional information by translating clinical applications to model systems (e.g. measuring cardiac functional metrics) and by pioneering new ones (e.g. measuring tumor vascular permeability with nanoparticle contrast agents). The primary limitations of micro-CT imaging are the associated radiation dose and relatively poor soft tissue contrast. We review several image reconstruction strategies based on iterative, statistical, and gradient sparsity regularization, demonstrating that high image quality is achievable with low radiation dose given ever more powerful computational resources. We also review two contrast mechanisms under intense development. The first is spectral contrast for quantitative material discrimination in combination with passive or actively targeted nanoparticle contrast agents. The second is phase contrast which measures refraction in biological tissues for improved contrast and potentially reduced radiation dose relative to standard absorption imaging. These technological advancements promise to develop micro-CT into a commonplace, functional and even molecular imaging modality. PMID:24974176

  14. Sulfate radical oxidation of aromatic contaminants: a detailed assessment of density functional theory and high-level quantum chemical methods.

    PubMed

    Pari, Sangavi; Wang, Inger A; Liu, Haizhou; Wong, Bryan M

    2017-03-22

    Advanced oxidation processes that utilize highly oxidative radicals are widely used in water reuse treatment. In recent years, the application of the sulfate radical (SO4˙-) as a promising oxidant for water treatment has gained increasing attention. To understand the efficiency of SO4˙- in the degradation of organic contaminants in wastewater effluent, it is important to be able to predict the reaction kinetics of various SO4˙--driven oxidation reactions. In this study, we utilize density functional theory (DFT) and high-level wavefunction-based methods (including computationally-intensive coupled cluster methods), to explore the activation energies of SO4˙--driven oxidation reactions on a series of benzene-derived contaminants. These high-level calculations encompass a wide set of reactions including 110 forward/reverse reactions and 5 different computational methods in total. Based on the high-level coupled-cluster quantum calculations, we find that the popular M06-2X DFT functional is significantly more accurate for OH- additions than for SO4˙- reactions. Most importantly, we highlight some of the limitations and deficiencies of other computational methods, and we recommend the use of high-level quantum calculations to spot-check environmental chemistry reactions that may lie outside the training set of the M06-2X functional, particularly for water oxidation reactions that involve SO4˙- and other inorganic species.

  15. Edge analyzing properties of center/surround response functions in cybernetic vision

    NASA Technical Reports Server (NTRS)

    Jobson, D. J.

    1984-01-01

    The ability of center/surround response functions to make explicit high resolution spatial information in optical images was investigated by performing convolutions of two dimensional response functions and image intensity functions (mainly edges). The center/surround function was found to have the unique property of separating edge contrast from shape variations and of providing a direct basis for determining contrast and subsequently shape of edges in images. Computationally simple measures of contrast and shape were constructed for potential use in cybernetic vision systems. For one class of response functions these measures were found to be reasonably resilient for a range of scan direction and displacements of the response functions relative to shaped edges. A pathological range of scan directions was also defined and methods for detecting and handling these cases were developed. The relationship of these results to biological vision is discussed speculatively.
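
    A common concrete form of a center/surround response function is a difference of Gaussians applied by convolution; the sketch below shows its antisymmetric response across a step edge, which is what carries the contrast information discussed above. The filter sizes and the synthetic edge are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def center_surround_response(image, sigma_center=1.0, surround_ratio=3.0):
        """Difference-of-Gaussians center/surround operator applied by 2-D
        convolution (an illustrative stand-in for the response functions studied)."""
        center = ndimage.gaussian_filter(image, sigma_center)
        surround = ndimage.gaussian_filter(image, sigma_center * surround_ratio)
        return center - surround

    # Example: response to a vertical step edge; the profile straddling the edge
    # encodes the edge contrast largely independently of where the edge sits
    img = np.zeros((64, 64)); img[:, 32:] = 1.0
    resp = center_surround_response(img)
    print(resp[32, 28:36].round(3))
    ```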

  16. Functional modeling of the human auditory brainstem response to broadband stimulationa)

    PubMed Central

    Verhulst, Sarah; Bharadwaj, Hari M.; Mehraei, Golbarg; Shera, Christopher A.; Shinn-Cunningham, Barbara G.

    2015-01-01

    Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses are not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN) and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2–2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing-impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities. PMID:26428802

  17. Satisfaction with Life in Orofacial Pain Disorders: Associations and Theoretical Implications.

    PubMed

    Boggero, Ian A; Rojas-Ramirez, Marcia V; de Leeuw, Reny; Carlson, Charles R

    2016-01-01

    To test if patients with masticatory myofascial pain, local myalgia, centrally mediated myalgia, disc displacement, capsulitis/synovitis, or continuous neuropathic pain differed in self-reported satisfaction with life. The study also tested if satisfaction with life was similarly predicted by measures of physical, emotional, and social functioning across disorders. Satisfaction with life, fatigue, affective distress, social support, and pain data were extracted from the medical records of 343 patients seeking treatment for chronic orofacial pain. Patients were grouped by primary diagnosis assigned following their initial appointment. Satisfaction with life was compared between disorders, with and without pain intensity entered as a covariate. Disorder-specific linear regression models using physical, emotional, and social predictors of satisfaction with life were computed. Patients with centrally mediated myalgia reported significantly lower satisfaction with life than did patients with any of the other five disorders. Inclusion of pain intensity as a covariate weakened but did not eliminate the effect. Satisfaction with life was predicted by measures of physical, emotional, and social functioning, but these associations were not consistent across disorders. Results suggest that reduced satisfaction with life in patients with centrally mediated myalgia is not due only to pain intensity. There may be other factors that predispose people to both reduced satisfaction with life and centrally mediated myalgia. Furthermore, the results suggest that satisfaction with life is differentially influenced by physical, emotional, and social functioning in different orofacial pain disorders.

  18. Associated relaxation time and the correlation function for a tumor cell growth system subjected to color noises

    NASA Astrophysics Data System (ADS)

    Wang, Can-Jun; Wei, Qun; Mei, Dong-Cheng

    2008-03-01

    The associated relaxation time T and the normalized correlation function C(s) for a tumor cell growth system subjected to color noises are investigated. Using the Novikov theorem and the Fox approach, the steady-state probability distribution is obtained. Based on this, expressions for T and C(s) are derived by means of the projection operator method, in which the effects of the memory kernels of the correlation function are taken into account. Numerical computations show the following. (1) As the cross-correlation intensity |λ|, the additive noise intensity α, and the multiplicative noise self-correlation time τ increase, the tumor cell numbers are restrained; the cross-correlation time τ and the multiplicative noise intensity D can induce an increase in the tumor cell numbers; however, the additive noise self-correlation time τ does not affect the tumor cell numbers. The relaxation time T exhibits a stochastic resonance phenomenon, and the distribution curves show a single-maximum structure as D increases. (2) The cross-correlation strength λ weakens the correlation between two states of the tumor cell numbers at different times and enhances the stability of the tumor cell growth system in the steady state; in contrast, τ and τ enhance the correlation between two states at different times, whereas τ has no effect on this correlation.

  19. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
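
    As a rough illustration of the annealing-based assignment idea, the sketch below maps modules to processors by simulated annealing against a simplified objective (maximum processor load plus total cost of communication edges cut between processors). The objective, cooling schedule, and data structures are assumptions standing in for the paper's finishing-time-plus-communication criterion.

    ```python
    import math
    import random

    def anneal_mapping(module_cost, comm_cost, n_procs, n_steps=20000, T0=5.0):
        """Illustrative simulated-annealing mapper: minimise
        max_p(load on p) + sum of comm costs between modules on different processors."""
        m = len(module_cost)
        assign = [random.randrange(n_procs) for _ in range(m)]

        def objective(a):
            load = [0.0] * n_procs
            for i, p in enumerate(a):
                load[p] += module_cost[i]
            comm = sum(c for (i, j), c in comm_cost.items() if a[i] != a[j])
            return max(load) + comm

        cur = objective(assign)
        for step in range(n_steps):
            T = T0 * (1.0 - step / n_steps) + 1e-6
            i, p_new = random.randrange(m), random.randrange(n_procs)
            old = assign[i]
            if old == p_new:
                continue
            assign[i] = p_new
            new = objective(assign)
            if new < cur or random.random() < math.exp((cur - new) / T):
                cur = new                     # accept the move
            else:
                assign[i] = old               # reject and roll back
        return assign, cur

    # Example: 6 modules communicating along a chain, mapped onto 2 processors
    costs = [3, 1, 4, 1, 5, 2]
    edges = {(i, i + 1): 0.5 for i in range(5)}
    print(anneal_mapping(costs, edges, n_procs=2))
    ```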

  20. Proposal for nanoscale cascaded plasmonic majority gates for non-Boolean computation.

    PubMed

    Dutta, Sourav; Zografos, Odysseas; Gurunarayanan, Surya; Radu, Iuliana; Soree, Bart; Catthoor, Francky; Naeemi, Azad

    2017-12-19

    Surface-plasmon-polariton waves propagating at the interface between a metal and a dielectric hold the key to future high-bandwidth, dense on-chip integrated logic circuits overcoming the diffraction limitation of photonics. While recent advances in plasmonic logic have witnessed the demonstration of basic and universal logic gates, these CMOS-oriented digital logic gates cannot fully utilize the expressive power of this novel technology. Here, we aim at unraveling the true potential of plasmonics by exploiting an enhanced native functionality - the majority voter. Contrary to the state-of-the-art plasmonic logic devices, we use the phase of the wave instead of the intensity as the state or computational variable. We propose and demonstrate, via numerical simulations, a comprehensive scheme for building a nanoscale cascadable plasmonic majority logic gate along with a novel referencing scheme that can directly translate the information encoded in the amplitude and phase of the wave into electric field intensity at the output. Our MIM-based 3-input majority gate displays a highly improved overall area of only 0.636 μm^2 for a single stage compared with previous works on plasmonic logic. The proposed device demonstrates non-Boolean computational capability and can find direct utility in highly parallel real-time signal processing applications like pattern recognition.
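
    The phase-encoding principle behind such a majority voter can be illustrated with a toy interference model: each input is a unit-amplitude wave whose phase encodes the logic value, and the phase of the three-wave superposition carries the majority. The encoding, readout rule, and function name below are illustrative assumptions, not the simulated device.

    ```python
    import cmath

    def plasmonic_majority(a, b, c):
        """Toy phase-encoded majority vote: logic 0 -> phase 0, logic 1 -> phase pi;
        the phase of the superposed wave carries the majority value, and its
        intensity could be read out against a reference wave."""
        phase = lambda bit: 0.0 if bit == 0 else cmath.pi
        total = sum(cmath.exp(1j * phase(x)) for x in (a, b, c))
        out_bit = 0 if abs(cmath.phase(total)) < cmath.pi / 2 else 1
        return out_bit, abs(total) ** 2       # logic value and output intensity

    for bits in [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]:
        print(bits, plasmonic_majority(*bits))
    ```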

  1. Robotic neurorehabilitation: a computational motor learning perspective

    PubMed Central

    Huang, Vincent S; Krakauer, John W

    2009-01-01

    Conventional neurorehabilitation appears to have little impact on impairment over and above that of spontaneous biological recovery. Robotic neurorehabilitation has the potential for a greater impact on impairment due to easy deployment, its applicability across of a wide range of motor impairment, its high measurement reliability, and the capacity to deliver high dosage and high intensity training protocols. We first describe current knowledge of the natural history of arm recovery after stroke and of outcome prediction in individual patients. Rehabilitation strategies and outcome measures for impairment versus function are compared. The topics of dosage, intensity, and time of rehabilitation are then discussed. Robots are particularly suitable for both rigorous testing and application of motor learning principles to neurorehabilitation. Computational motor control and learning principles derived from studies in healthy subjects are introduced in the context of robotic neurorehabilitation. Particular attention is paid to the idea of context, task generalization and training schedule. The assumptions that underlie the choice of both movement trajectory programmed into the robot and the degree of active participation required by subjects are examined. We consider rehabilitation as a general learning problem, and examine it from the perspective of theoretical learning frameworks such as supervised and unsupervised learning. We discuss the limitations of current robotic neurorehabilitation paradigms and suggest new research directions from the perspective of computational motor learning. PMID:19243614

  2. The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials

    NASA Technical Reports Server (NTRS)

    Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.

    2005-01-01

    The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. These objectives have been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that will compute stress intensity factors for anisotropic materials encoded in both the C and Python programming languages and made available a version of the FRANC3D program that incorporates this subroutine. Single crystal super alloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.

  3. Effects of Computer-Based Practice on the Acquisition and Maintenance of Basic Academic Skills for Children with Moderate to Intensive Educational Needs

    ERIC Educational Resources Information Center

    Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee

    2011-01-01

    This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…

  4. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the timecourse and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Application verification research of cloud computing technology in the field of real time aerospace experiment

    NASA Astrophysics Data System (ADS)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    To meet the real-time, reliability, and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform is constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is reached: a cloud computing platform can be applied to computing-intensive aerospace experiment workloads, whereas traditional physical machines are recommended for I/O-intensive workloads.

  6. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
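
    The record above centres on recasting raw-data accumulation as a map/reduce job. A minimal Python sketch of that idea follows; it is a toy stand-in (point targets, a placeholder phase model, multiprocessing instead of Hadoop/HDFS), not the authors' implementation.

    ```python
    # Map step: each worker accumulates the echo contribution of a chunk of
    # scene targets into a partial raw-data matrix. Reduce step: sum the
    # partial matrices. Toy geometry and signal model only.
    import numpy as np
    from functools import reduce
    from multiprocessing import Pool

    N_AZ, N_RG = 64, 256                      # azimuth pulses x range samples
    targets = np.random.rand(1000, 3)         # (x, y, reflectivity) per target

    def map_partial_echo(chunk):
        raw = np.zeros((N_AZ, N_RG), dtype=complex)
        for x, y, sigma in chunk:
            az_bin = int(x * (N_AZ - 1))
            rng_bin = int(y * (N_RG - 1))
            # placeholder phase history instead of a full chirp/Doppler model
            raw[az_bin, rng_bin] += sigma * np.exp(1j * 2 * np.pi * (x + y))
        return raw

    def reduce_sum(a, b):
        return a + b

    if __name__ == "__main__":
        chunks = np.array_split(targets, 8)
        with Pool(8) as pool:
            partials = pool.map(map_partial_echo, chunks)
        raw_data = reduce(reduce_sum, partials)
        print(raw_data.shape)
    ```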

  7. The importance of ray pathlengths when measuring objects in maximum intensity projection images.

    PubMed

    Schreiner, S; Dawant, B M; Paschal, C B; Galloway, R L

    1996-01-01

    It is important to understand any process that affects medical data. Once the data have changed from the original form, one must consider the possibility that the information contained in the data has also changed. In general, false negative and false positive diagnoses caused by this post-processing must be minimized. Medical imaging is one area in which post-processing is commonly performed, but there is often little or no discussion of how these algorithms affect the data. This study uncovers some interesting properties of maximum intensity projection (MIP) algorithms which are commonly used in the post-processing of magnetic resonance (MR) and computed tomography (CT) angiographic data. The appearance of the width of vessels and the extent of malformations such as aneurysms is of interest to clinicians. This study will show how MIP algorithms interact with the shape of the object being projected. MIPs can make objects appear thinner in the projection than in the original data set and also alter the shape of the profile of the object seen in the original data. These effects have consequences for width-measuring algorithms which will be discussed. Each projected intensity is dependent upon the pathlength of the ray from which the projected pixel arises. The morphology (shape and intensity profile) of an object will change the pathlength that each ray experiences. This is termed the pathlength effect. In order to demonstrate the pathlength effect, simple computer models of an imaged vessel were created. Additionally, a static MR phantom verified that the derived equation for the projection-plane probability density function (pdf) predicts the projection-plane intensities well (R^2 = 0.96). Finally, examples of projections through in vivo MR angiography and CT angiography data are presented.
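
    A toy numerical illustration of the pathlength effect (assumed geometry and noise levels, not the paper's phantom or derived pdf): rays through the centre of a circular vessel traverse a longer path than rays near its edge, so the maximum of a noisy signal is biased differently across the profile and the projected vessel can appear narrower than its true diameter.

    ```python
    # Simulate a noisy circular vessel cross-section and project along z
    # with a maximum intensity projection, then compare the apparent width
    # of the projected profile with the true diameter.
    import numpy as np

    nx, nz = 201, 201                       # profile direction x, ray direction z
    radius, centre = 40.0, 100.0
    X, Z = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")

    vessel = (X - centre) ** 2 + (Z - centre) ** 2 <= radius ** 2
    volume = np.where(vessel, 100.0, 20.0)               # vessel vs background
    volume += np.random.normal(0.0, 10.0, volume.shape)  # additive noise

    mip_profile = volume.max(axis=1)        # MIP along the ray direction

    half_max = 0.5 * (mip_profile.max() + np.median(mip_profile[:20]))
    width = np.count_nonzero(mip_profile > half_max)
    print(f"true diameter: {2 * radius:.0f} voxels, apparent width: {width} voxels")
    ```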

  8. Analysis of intensity variability in multislice and cone beam computed tomography.

    PubMed

    Nackaerts, Olivia; Maes, Frederik; Yan, Hua; Couto Souza, Paulo; Pauwels, Ruben; Jacobs, Reinhilde

    2011-08-01

    The aim of this study was to evaluate the variability of intensity values in cone beam computed tomography (CBCT) imaging compared with multislice computed tomography Hounsfield units (MSCT HU) in order to assess the reliability of density assessments using CBCT images. A quality control phantom was scanned with an MSCT scanner and five CBCT scanners. In one CBCT scanner, the phantom was scanned repeatedly in the same and in different positions. Images were analyzed using registration to a mathematical model. MSCT images were used as a reference. Density profiles of MSCT showed stable HU values, whereas in CBCT imaging the intensity values were variable over the profile. Repositioning of the phantom resulted in large fluctuations in intensity values. The use of intensity values in CBCT images is not reliable, because the values are influenced by device, imaging parameters and positioning. © 2011 John Wiley & Sons A/S.

  9. Toward reliable characterization of functional homogeneity in the human brain: Preprocessing, scan duration, imaging resolution and computational space

    PubMed Central

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F. Xavier; Milham, Michael P.

    2013-01-01

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test–retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated ReHo’s TRT reliability and further investigated the various factors influencing its reliability and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). PMID:23085497
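
    For reference, ReHo as defined by Zang et al. (2004) is Kendall's coefficient of concordance (KCC) of the time series within a small neighbourhood. The sketch below computes the KCC for a single 27-voxel neighbourhood on synthetic data; a whole-brain ReHo map repeats this per voxel, and none of the preprocessing choices evaluated in the study are reproduced here.

    ```python
    # Kendall's coefficient of concordance over K voxel time series:
    # W = 12 * S / (K^2 * (n^3 - n)), with S the variance of the rank sums.
    import numpy as np
    from scipy.stats import rankdata

    def kendalls_w(timeseries):
        """timeseries: array of shape (K voxels, n time points)."""
        K, n = timeseries.shape
        ranks = np.apply_along_axis(rankdata, 1, timeseries)  # rank over time
        rank_sums = ranks.sum(axis=0)                          # per time point
        S = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12.0 * S / (K ** 2 * (n ** 3 - n))

    rng = np.random.default_rng(0)
    neighbourhood = rng.standard_normal((27, 240))   # 27 voxels, 240 volumes
    print(f"ReHo (KCC) = {kendalls_w(neighbourhood):.3f}")
    ```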

  10. Simulation of the modulation transfer function dependent on the partial Fourier fraction in dynamic contrast enhancement magnetic resonance imaging.

    PubMed

    Takatsu, Yasuo; Ueyama, Tsuyoshi; Miyati, Tosiaki; Yamamura, Kenichirou

    2016-12-01

    The image characteristics in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) depend on the partial Fourier fraction and contrast medium concentration. These characteristics were assessed and the modulation transfer function (MTF) was calculated by computer simulation. A digital phantom was created from signal intensity data acquired at different contrast medium concentrations on a breast model. The frequency images [created by fast Fourier transform (FFT)] were divided into 512 parts and rearranged to form a new image. The inverse FFT of this image yielded the MTF. From the reference data, three linear models (low, medium, and high) and three exponential models (slow, medium, and rapid) of the signal intensity were created. Smaller partial Fourier fractions, and higher gradients in the linear models, corresponded to faster MTF decline. The MTF decreased more gradually in the exponential models than in the linear models. The MTF, which reflects the image characteristics in DCE-MRI, was more degraded as the partial Fourier fraction decreased.
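
    A hedged one-dimensional sketch of the underlying idea (not the authors' 512-part rearrangement of a breast phantom): sample only a fraction of k-space, zero-fill the rest, reconstruct a magnitude point spread function, and read the MTF off its Fourier magnitude. Contrast-medium signal evolution is ignored; only the fraction dependence is illustrated.

    ```python
    # Effective MTF of a zero-filled partial Fourier acquisition of a point
    # object, for a few partial Fourier fractions.
    import numpy as np

    n = 256
    obj = np.zeros(n)
    obj[n // 2] = 1.0                           # point object

    def mtf_for_fraction(fraction):
        k = np.fft.fftshift(np.fft.fft(obj))
        mask = np.zeros(n)
        mask[: int(round(fraction * n))] = 1.0  # asymmetric k-space coverage
        psf = np.fft.ifft(np.fft.ifftshift(k * mask))
        mtf = np.abs(np.fft.fftshift(np.fft.fft(np.abs(psf))))
        return mtf / mtf.max()

    for frac in (1.0, 0.75, 0.625):
        print(frac, mtf_for_fraction(frac)[n // 2 + 40].round(3))
    ```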

  11. 2D Process-based Microbialite Growth Model

    NASA Astrophysics Data System (ADS)

    Airo, A.; Smith, A.

    2007-12-01

    A 2D process-based microbialite growth model (MGM) has been developed that integrates the coupled effects of the microbialite growth and sediment distribution within a two-dimensional cross-section of a subaqueous bedrock profile. Sediment transport is realized through particle erosion and deposition that are a function of local wave energy, which is computed on the basis of linear wave theory. Surface-normal microbialite growth is directly correlated to light intensity, which is computed for every point of the microbialite surface by using a Henyey-Greenstein-type relation for scattering and Beer's law for absorption in the water column. Shadowing effects by surrounding obstacles and/or overlying sediment are also considered. Sediment particles can be incorporated into the microbialite framework if growth occurs in the presence of sediment. The resulting meter-size microbialite constructs develop morphologies that correspond well to natural microbialites. Furthermore, changes of environmental factors such as light intensity, wave energy, and bedrock profile result in morphological variations of the microbialites that would be expected on the basis of the current understanding of microbialite growth and development.
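
    A minimal sketch of the light-driven growth rule described above, with assumed parameter values: light reaching a surface point is attenuated through the water column by Beer's law, and surface-normal growth is taken proportional to that local intensity. Henyey-Greenstein scattering and shadowing are omitted.

    ```python
    # Beer's law attenuation with depth, and a growth increment proportional
    # to the attenuated intensity (all parameter values are placeholders).
    import numpy as np

    I0 = 1.0            # surface irradiance (arbitrary units)
    k_abs = 0.15        # absorption coefficient of the water column (1/m)
    growth_rate = 2.0   # surface-normal growth per unit intensity per step (mm)

    surface_depth = np.linspace(1.0, 10.0, 10)    # depth of surface points (m)
    light = I0 * np.exp(-k_abs * surface_depth)   # Beer's law
    growth = growth_rate * light                  # growth increment

    for d, g in zip(surface_depth, growth):
        print(f"depth {d:4.1f} m -> growth {g:.2f} mm")
    ```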

  12. Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.

    PubMed

    Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella

    2014-11-03

    Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.

  13. Optical response tuning in nanorod-on-semicontinuous film systems: A computational study

    NASA Astrophysics Data System (ADS)

    Mokkath, Junais Habeeb

    2018-01-01

    Strongly confined and intense optical fields within the plasmonic metal nanocavities show outstanding potential for a wide range of functionalities in nanophotonics. Using time dependent density functional theory calculations, we investigate the optical response evolution as a function of the gap separation distances in nanorod-on-film systems comprised of a nanorod (NR) made of Al or Na on top of an Al film. Huge optical field modulations emerged in the chemically distinct Na NR - Al film system in comparison to the Al NR - Al film system, indicating the vital role of metals involved. We further study the optical response modifications by placing a conducting molecule in the gap region, finding strong spectral modulations via through-molecule electron tunneling.

  14. Measurements of geomagnetically trapped alpha particles, 1968-1970. I - Quiet time distributions

    NASA Technical Reports Server (NTRS)

    Krimigis, S. M.; Verzariu, P.

    1973-01-01

    Results are presented of observations of geomagnetically trapped alpha particles over the energy range from 1.18 to 8 MeV, performed with the aid of the Injun 5 polar-orbiting satellite during the period from September 1968 to May 1970. Following a presentation of a time history covering this entire period, a detailed analysis is made of the magnetically quiet period from Feb. 11 to 28, 1970. During this period the alpha particle fluxes and the intensity ratio of alpha particles to protons attained their lowest values in approximately 20 months; the alpha particle intensity versus L profile was most similar to the proton profile at the same energy per nucleon interval; the intensity ratio was nearly constant as a function of L in the same energy per nucleon representation, but rose sharply with L when computed in the same total energy interval; the variation of alpha particle intensity with B suggested a steep angular distribution at small equatorial pitch angles, while the intensity ratio showed little dependence on B; and the alpha particle spectral parameter showed a markedly different dependence on L from the equivalent one for protons.

  15. Effects of virtual reality immersion and audiovisual distraction techniques for patients with pruritus

    PubMed Central

    Leibovici, Vera; Magora, Florella; Cohen, Sarale; Ingber, Arieh

    2009-01-01

    BACKGROUND: Virtual reality immersion (VRI), an advanced computer-generated technique, decreased subjective reports of pain in experimental and procedural medical therapies. Furthermore, VRI significantly reduced pain-related brain activity as measured by functional magnetic resonance imaging. Resemblance between anatomical and neuroendocrine pathways of pain and pruritus may prove VRI to be a suitable adjunct for basic and clinical studies of the complex aspects of pruritus. OBJECTIVES: To compare effects of VRI with audiovisual distraction (AVD) techniques for attenuation of pruritus in patients with atopic dermatitis and psoriasis vulgaris. METHODS: Twenty-four patients suffering from chronic pruritus – 16 due to atopic dermatitis and eight due to psoriasis vulgaris – were randomly assigned to play an interactive computer game using a special visor or a computer screen. Pruritus intensity was self-rated before, during and 10 min after exposure using a visual analogue scale ranging from 0 to 10. The interviewer rated observed scratching on a three-point scale during each distraction program. RESULTS: Student’s t tests were significant for reduction of pruritus intensity before and during VRI and AVD (P=0.0002 and P=0.01, respectively) and were significant only between ratings before and after VRI (P=0.017). Scratching was mostly absent or mild during both programs. CONCLUSIONS: VRI and AVD techniques demonstrated the ability to diminish itching sensations temporarily. Further studies on the immediate and late effects of interactive computer distraction techniques to interrupt itching episodes will open potential paths for future pruritus research. PMID:19714267

  16. Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model

    NASA Astrophysics Data System (ADS)

    Wawrzynczak, A.; Kopka, P.

    2017-12-01

    Realistic modeling of complicated phenomena such as the Forbush decrease of the galactic cosmic ray intensity is a quite challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, time, and particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. Assessment of the correctness of the proposed model can be done only by comparison of the model output with the experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is becoming increasingly exploited for dynamic complex problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm scanning the space of the diffusion coefficient parameters. The proposed algorithm is applied to create the model of the Forbush decrease observed by the neutron monitors at the Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
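
    A hedged sketch of the basic ABC idea described above, in its simplest rejection form rather than the authors' Sequential Monte Carlo scheme: draw a diffusion-coefficient parameter from a prior, run the forward model, and accept the draw if the simulated intensity profile is close enough to the observation. The forward model below is a one-parameter placeholder, not a Fokker-Planck solver.

    ```python
    # ABC rejection sampling for a single diffusion-scale parameter kappa.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 50)

    def forward_model(kappa):
        """Placeholder Forbush-decrease profile controlled by kappa."""
        return 1.0 - 0.05 * np.exp(-t / kappa)

    observed = forward_model(2.5) + rng.normal(0.0, 0.002, t.size)

    def abc_rejection(n_draws=20000, epsilon=0.02):
        accepted = []
        for _ in range(n_draws):
            kappa = rng.uniform(0.5, 10.0)                  # prior draw
            distance = np.linalg.norm(forward_model(kappa) - observed)
            if distance < epsilon:                          # approximate posterior draw
                accepted.append(kappa)
        return np.array(accepted)

    posterior = abc_rejection()
    print(f"accepted {posterior.size} draws, posterior mean = {posterior.mean():.2f}")
    ```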

  17. Computation of Nonlinear Hydrodynamic Loads on Floating Wind Turbines Using Fluid-Impulse Theory: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kok Yan Chan, G.; Sclavounos, P. D.; Jonkman, J.

    2015-04-02

    A hydrodynamics computer module was developed for the evaluation of the linear and nonlinear loads on floating wind turbines using a new fluid-impulse formulation for coupling with the FAST program. The recently developed formulation allows the computation of linear and nonlinear loads on floating bodies in the time domain and avoids the computationally intensive evaluation of temporal and nonlinear free-surface problems, and efficient methods are derived for its computation. The body instantaneous wetted surface is approximated by a panel mesh and the discretization of the free surface is circumvented by using the Green function. The evaluation of the nonlinear loads is based on explicit expressions derived by the fluid-impulse theory, which can be computed efficiently. Computations are presented of the linear and nonlinear loads on the MIT/NREL tension-leg platform. Comparisons were carried out with frequency-domain linear and second-order methods. Emphasis was placed on modeling accuracy of the magnitude of nonlinear low- and high-frequency wave loads in a sea state. Although fluid-impulse theory is applied to floating wind turbines in this paper, the theory is applicable to other offshore platforms as well.

  18. Mobile computing device configured to compute irradiance, glint, and glare of the sun

    DOEpatents

    Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

    2014-03-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.

  19. Energy intensity of computer manufacturing: hybrid assessment combining process and economic input-output methods.

    PubMed

    Williams, Eric

    2004-11-15

    The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.

  20. An Ab Initio Based Potential Energy Surface for Water

    NASA Technical Reports Server (NTRS)

    Partridge, Harry; Schwenke, David W.; Langhoff, Stephen R. (Technical Monitor)

    1996-01-01

    We report a new determination of the water potential energy surface. A high quality ab initio potential energy surface (PES) and dipole moment function of water have been computed. This PES is empirically adjusted to improve the agreement between the computed line positions and those from the HITRAN 92 data base. The adjustment is small, nonetheless including an estimate of core (oxygen 1s) electron correlation greatly improves the agreement with experiment. Of the 27,245 assigned transitions in the HITRAN 92 data base for H2(O-16), the overall root mean square (rms) deviation between the computed and observed line positions is 0.125/cm. However the deviations do not correspond to a normal distribution: 69% of the lines have errors less than 0.05/cm. Overall, the agreement between the line intensities computed in the present work and those contained in the data base is quite good, however there are a significant number of line strengths which differ greatly.

  1. Computational attributes of the integral form of the equation of transfer

    NASA Technical Reports Server (NTRS)

    Frankel, J. I.

    1991-01-01

    Difficulties can arise in radiative and neutron transport calculations when a highly anisotropic scattering phase function is present. In the presence of anisotropy, currently used numerical solutions are based on the integro-differential form of the linearized Boltzmann transport equation. This paper departs from classical thought and presents an alternative numerical approach based on application of the integral form of the transport equation. Use of the integral formalism facilitates the following steps: a reduction in dimensionality of the system prior to discretization, the use of symbolic manipulation to augment the computational procedure, and the direct determination of key physical quantities which are derivable through the various Legendre moments of the intensity. The approach is developed in the context of radiative heat transfer in a plane-parallel geometry, and results are presented and compared with existing benchmark solutions. Encouraging results are presented to illustrate the potential of the integral formalism for computation. The integral formalism appears to possess several computational attributes which are well-suited to radiative and neutron transport calculations.

  2. RAINLINK: Retrieval algorithm for rainfall monitoring employing microwave links from a cellular communication network

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Overeem, A.; Leijnse, H.; Rios Gaona, M. F.

    2017-12-01

    The basic principle of rainfall estimation using microwave links is as follows. Rainfall attenuates the electromagnetic signals transmitted from one telephone tower to another. By measuring the received power at one end of a microwave link as a function of time, the path-integrated attenuation due to rainfall can be calculated, which can be converted to average rainfall intensities over the length of a link. Microwave links from cellular communication networks have been proposed as a promising new rainfall measurement technique for about a decade. They are particularly interesting for those countries where few surface rainfall observations are available. Yet to date no operational (real-time) link-based rainfall products are available. To advance the process towards operational application and upscaling of this technique, there is a need for freely available, user-friendly computer code for microwave link data processing and rainfall mapping. Such software is now available as the R package "RAINLINK" on GitHub (https://github.com/overeem11/RAINLINK). It contains a working example to compute link-based 15-min rainfall maps for the entire surface area of The Netherlands for 40 hours from real microwave link data. This is the first time that a working example based on actual data from an extensive network of commercial microwave links has been made available, which will allow users to test their own algorithms and compare their results with ours. The package consists of modular functions, which facilitates running only part of the algorithm. The main processing steps are: 1) Preprocessing of link data (initial quality and consistency checks); 2) Wet-dry classification using link data; 3) Reference signal determination; 4) Removal of outliers; 5) Correction of received signal powers; 6) Computation of mean path-averaged rainfall intensities; 7) Interpolation of rainfall intensities; 8) Rainfall map visualisation. Some applications of RAINLINK will be shown based on microwave link data from a temperate climate (the Netherlands) and from a subtropical climate (Brazil). We hope that RAINLINK will promote the application of rainfall monitoring using microwave links in poorly gauged regions around the world. We invite researchers to contribute to RAINLINK to make the code more generally applicable to data from different networks and climates.
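
    As a minimal illustration of the retrieval principle behind step 6 (the RAINLINK package itself is written in R, so the Python below is only a sketch): the rain-induced drop in received power gives the specific attenuation along the link, which is converted to a path-averaged rain rate with the standard power law k = a R^b. The a and b values shown are placeholders; in practice they depend on link frequency and polarisation.

    ```python
    # Path-averaged rain rate from the received-power drop on one link,
    # via R = (k / a)**(1 / b) with k the specific attenuation in dB/km.
    def rain_rate_from_attenuation(p_ref_dbm, p_wet_dbm, length_km, a=0.033, b=1.04):
        attenuation_db = p_ref_dbm - p_wet_dbm                         # rain-induced attenuation
        specific_attenuation = max(attenuation_db, 0.0) / length_km    # dB/km
        return (specific_attenuation / a) ** (1.0 / b)                 # mm/h

    # Example: a 5 km link whose received power drops by 3 dB during rain.
    print(f"{rain_rate_from_attenuation(-47.0, -50.0, 5.0):.1f} mm/h")
    ```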

  3. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
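
    For orientation, a hedged sketch of the conventional simulated-annealing comparator described above (not the D-Wave QA code): beamlet intensities are restricted to a few discrete levels and perturbed one at a time, with uphill moves accepted at a temperature-dependent probability. The dose matrix and quadratic objective are toy stand-ins for the CERR-computed quantities and clinical dose-volume objectives.

    ```python
    # Simulated annealing over discretised beamlet intensities.
    import numpy as np

    rng = np.random.default_rng(0)
    n_beamlets, n_voxels = 50, 400
    dose_matrix = rng.random((n_voxels, n_beamlets)) * 0.05   # toy dose per unit weight
    target = np.full(n_voxels, 1.0)                           # toy prescription
    levels = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # discrete intensity levels

    def objective(weights):
        return np.sum((dose_matrix @ weights - target) ** 2)

    weights = rng.choice(levels, n_beamlets)
    best_obj = objective(weights)
    temperature = 1.0
    for _ in range(20000):
        candidate = weights.copy()
        candidate[rng.integers(n_beamlets)] = rng.choice(levels)
        delta = objective(candidate) - objective(weights)
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            weights = candidate
            best_obj = min(best_obj, objective(weights))
        temperature *= 0.9995                                  # geometric cooling
    print(f"final objective: {best_obj:.2f}")
    ```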

  4. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343

  5. Eigen Spreading

    DTIC Science & Technology

    2008-02-27

    between the PHY layer and for example a host PC computer. The PC wants to generate and receive a sequence of data packets. The PC may also want to send...the testbed is quite similar. Given the intense computational requirements of SVD and other matrix mode operations needed to support eigen spreading a...platform for real time operation. This task is probably the major challenge in the development of the testbed. All compute intensive tasks will be

  6. Empowering line intensity mapping to study early galaxies

    NASA Astrophysics Data System (ADS)

    Comaschi, P.; Ferrara, A.

    2016-12-01

    Line intensity mapping is a superb tool to study the collective radiation from early galaxies. However, the method is hampered by the presence of strong foregrounds, mostly produced by low-redshift interloping lines. We present here a general method to overcome this problem which is robust against foreground residual noise and based on the cross-correlation function ψ_αL(r) between diffuse line emission and Lyα emitters (LAE). We compute the diffuse line (Lyα is used as an example) emission from galaxies in an (800 Mpc)^3 box at z = 5.7 and 6.6. We divide the box in slices and populate them with 14 000 (5500) LAEs at z = 5.7 (6.6), considering duty cycles from 10^-3 to 1. Both the LAE number density and slice volume are consistent with the expected outcome of the Subaru Hyper Suprime-Cam survey. We add Gaussian random noise with variance σ_N up to 100 times the variance of the Lyα emission, σ_α, to simulate residual foregrounds and compute ψ_αL(r). We find that the signal-to-noise ratio of the observed ψ_αL(r) does not change significantly if σ_N ≤ 10 σ_α and show that in these conditions the mean line intensity, I_Lyα, can be precisely recovered independently of the LAE duty cycle. Even if σ_N = 100 σ_α, I_Lyα can be constrained within a factor of 2. The method works equally well for any other line (e.g. [C II], He II) used for the intensity-mapping experiment.

  7. Electromagnetic field scattering by a triangular aperture.

    PubMed

    Harrison, R E; Hyman, E

    1979-03-15

    The multiple Laplace transform has been applied to analysis and computation of scattering by a double triangular aperture. Results are obtained which match far-field intensity distributions observed in experiments. Arbitrary polarization components, as well as in-phase and quadrature-phase components, may be determined, in the transform domain, as a continuous function of distance from near to far-field for any orientation, aperture, and transformable waveform. Numerical results are obtained by application of numerical multiple inversions of the fully transformed solution.

  8. Cross-wind profiling based on the scattered wave scintillation in a telescope focus.

    PubMed

    Banakh, V A; Marakasov, D A; Vorontsov, M A

    2007-11-20

    The problem of wind profile reconstruction from the scintillation of an optical wave scattered off a rough surface in a telescope focal plane is considered. Both the expression for the spatiotemporal correlation function and the algorithm for reconstructing cross-wind velocity and direction profiles from the spatiotemporal spectrum of the intensity of an optical wave scattered by a diffuse target in a turbulent atmosphere are presented. Computer simulations performed under conditions of weak optical turbulence demonstrate wind profile reconstruction by the developed algorithm.

  9. Hybrid supply chain model for material requirement planning under financial constraints: A case study

    NASA Astrophysics Data System (ADS)

    Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero

    2014-10-01

    Supply chain models (SCM) are potentially capable of integrating different aspects in supporting decision making for enterprise management tasks. The aim of the paper is to propose a hybrid mathematical programming model for the optimization of production requirements resources planning. The preliminary model was conceived bottom-up from the analysis of a real industrial case and oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimisation brought good results in solving the objective function.

  10. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width) are calculated.

  11. On the analysis of para-ammonia observations

    NASA Technical Reports Server (NTRS)

    Kuiper, T. B. H.

    1994-01-01

    The intensities and optical depths of the (1, 1), (2, 2), and (2, 1) inversion transitions of ammonia can be calculated quite accurately without solving the equations of statistical equilibrium. A two-temperature partition function suffices. The excitation of the K-ladders can be approximated by using a temperature obtained from a two-level model with the (2, 1) and (1, 1) levels. Distribution of populations between the ladders is described with the kinetic temperature. This enables one to compute the (1, 1) and (2, 1) inversion transition excitation temperatures and optical depths. To compute the (2, 2) brightness temperatures, the fractional population of the (2, 2) doublet is computed from the population of the (1, 1) doublet using the 'true rotation temperature,' which is calculated using a three-level model with the (2, 1), (2, 2), and (1, 1) levels. In spite of some iterative steps, the calculation is quite fast.

  12. Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia

    2011-03-01

    During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculations of GW spectra by combining a representation of DMs in terms of its eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.

  13. A Real Time Controller For Applications In Smart Structures

    NASA Astrophysics Data System (ADS)

    Ahrens, Christian P.; Claus, Richard O.

    1990-02-01

    Research in smart structures, especially the area of vibration suppression, has warranted the investigation of advanced computing environments. Real time PC computing power has limited development of high order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of Intelligent Structure Monitoring by way of modal domain sensing for vibration control. It is compared to a PC AT based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of 15 million instructions per second (MIPS) continuous performance and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high speed operation and short development cycles. The system allows for implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.

  14. Data and results of a laboratory investigation of microprocessor upset caused by simulated lightning-induced analog transients

    NASA Technical Reports Server (NTRS)

    Belcastro, C. M.

    1984-01-01

    Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests which involved the random input of analog transients which model lightning-induced signals onto interface lines of an 8080-based microcomputer from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and a stochastic model is developed.

  15. Green's function of multi-layered poroelastic half-space for models of ground vibration due to railway traffic

    NASA Astrophysics Data System (ADS)

    Wang, Futong; Tao, Xiaxin; Xie, Lili; Raj, Siddharthan

    2017-04-01

    This study proposes a Green's function, an essential representation of water-saturated ground under moving excitation, to simulate ground borne vibration from trains. First, general solutions to the governing equations of poroelastic medium are derived by means of integral transform. Secondly, the transmission and reflection matrix approach is used to formulate the relationship between displacement and stress of the stratified ground, which results in the matrix of the Green's function. Then the Green's function is combined into a train-track-ground model, and is verified by typical examples and a field test. Additional simulations show that the computed ground vibration attenuates faster in the immediate vicinity of the track than in the surrounding area. The wavelength of wheel-rail unevenness has a notable effect on computed displacement and pore pressure. The variation of vibration intensity with the depth of ground is significantly influenced by the layering of the strata soil. When the train speed is equal to the velocity of the Rayleigh wave, the Mach cone appears in the simulated wave field. The proposed Green's function is an appropriate representation for a layered ground with shallow ground water table, and will be helpful to understand the dynamic responses of the ground to complicated moving excitation.

  16. Multidisciplinary Design Optimization of a Full Vehicle with High Performance Computing

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Gu, L.; Tho, C. H.; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

    Multidisciplinary design optimization (MDO) of a full vehicle under the constraints of crashworthiness, NVH (Noise, Vibration and Harshness), durability, and other performance attributes is one of the imperative goals for the automotive industry. However, it is often infeasible due to the lack of computational resources, robust simulation capabilities, and efficient optimization methodologies. This paper intends to move closer towards that goal by using parallel computers for the intensive computation and combining different approximations for dissimilar analyses in the MDO process. The MDO process presented in this paper is an extension of the previous work reported by Sobieski et al. In addition to the roof crush, two full vehicle crash modes are added: full frontal impact and 50% frontal offset crash. Instead of using an adaptive polynomial response surface method, this paper employs a DOE/RSM method for exploring the design space and constructing highly nonlinear crash functions. Two MDO strategies are used and results are compared. This paper demonstrates that with high performance computing, a conventionally intractable real world full vehicle multidisciplinary optimization problem considering all performance attributes with a large number of design variables becomes feasible.

  17. Silicon photonic integrated circuits with electrically programmable non-volatile memory functions.

    PubMed

    Song, J-F; Lim, A E-J; Luo, X-S; Fang, Q; Li, C; Jia, L X; Tu, X-G; Huang, Y; Zhou, H-F; Liow, T-Y; Lo, G-Q

    2016-09-19

    Conventional silicon photonic integrated circuits do not normally possess memory functions, which require on-chip power in order to maintain circuit states in tuned or field-configured switching routes. In this context, we present an electrically programmable add/drop microring resonator with a wavelength shift of 426 pm between the ON/OFF states. Electrical pulses are used to control the choice of the state. Our experimental results show a wavelength shift of 2.8 pm/ms and a light intensity variation of ~0.12 dB/ms for a fixed wavelength in the OFF state. Theoretically, our device can accommodate up to 65 states of multi-level memory functions. Such memory functions can be integrated into wavelength division multiplexing (WDM) filters and applied to optical routers and computing architectures fulfilling large data downloading demands.

  18. Cloud-based Jupyter Notebooks for Water Data Analysis

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Brazil, L.; Seul, M.

    2017-12-01

    The development and adoption of technologies by the water science community to improve our ability to openly collaborate and share workflows will have a transformative impact on how we address the challenges associated with collaborative and reproducible scientific research. Jupyter notebooks offer one solution by providing an open-source platform for creating metadata-rich toolchains for modeling and data analysis applications. Adoption of this technology within the water sciences, coupled with publicly available datasets from agencies such as USGS, NASA, and EPA enables researchers to easily prototype and execute data intensive toolchains. Moreover, implementing this software stack in a cloud-based environment extends its native functionality to provide researchers a mechanism to build and execute toolchains that are too large or computationally demanding for typical desktop computers. Additionally, this cloud-based solution enables scientists to disseminate data processing routines alongside journal publications in an effort to support reproducibility. For example, these data collection and analysis toolchains can be shared, archived, and published using the HydroShare platform or downloaded and executed locally to reproduce scientific analysis. This work presents the design and implementation of a cloud-based Jupyter environment and its application for collecting, aggregating, and munging various datasets in a transparent, sharable, and self-documented manner. The goals of this work are to establish a free and open source platform for domain scientists to (1) conduct data intensive and computationally intensive collaborative research, (2) utilize high performance libraries, models, and routines within a pre-configured cloud environment, and (3) enable dissemination of research products. This presentation will discuss recent efforts towards achieving these goals, and describe the architectural design of the notebook server in an effort to support collaborative and reproducible science.
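
    An illustrative notebook cell in the spirit described above: pull a small public dataset into a DataFrame so the collection step itself is shareable and reproducible. The USGS NWIS endpoint, site number, and parameter code shown are assumptions to be checked against the current service documentation.

    ```python
    # Fetch recent instantaneous streamflow for one gauge and load it into pandas.
    import pandas as pd
    import requests

    url = "https://waterservices.usgs.gov/nwis/iv/"
    params = {
        "format": "json",
        "sites": "01646500",      # example gauge ID (assumed)
        "parameterCd": "00060",   # discharge parameter code (assumed)
        "period": "P7D",          # last seven days
    }
    payload = requests.get(url, params=params, timeout=30).json()
    values = payload["value"]["timeSeries"][0]["values"][0]["value"]
    df = pd.DataFrame(values)[["dateTime", "value"]].astype({"value": float})
    print(df.tail())
    ```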

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Hui; Mignolet, Benoit; Fang, Li

    The interaction of gas phase endohedral fullerene Ho3N@C80 with intense (0.1–5 × 10^14 W/cm^2), short (30 fs), 800 nm laser pulses was investigated. The power law dependence of Ho3N@C80^q+, q = 1–2, was found to be different from that of C60. Time-dependent density functional theory computations revealed different light-induced ionization mechanisms. Unlike in C60, in doped fullerenes, the breaking of the cage spherical symmetry makes super atomic molecular orbital (SAMO) states optically active. Theoretical calculations suggest that the fast ionization of the SAMO states in Ho3N@C80 is responsible for the n = 3 power law for singly charged parent molecules at intensities lower than 1.2 × 10^14 W/cm^2.

  20. The Integrated Radiation Mapper Assistant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, R.E.; Tripp, L.R.

    1995-03-01

    The Integrated Radiation Mapper Assistant (IRMA) system combines state-of-the-art radiation sensors and microprocessor based analysis techniques to perform radiation surveys. Control of the survey function is from a control station located outside the radiation area, thus reducing time spent in radiation areas performing radiation surveys. The system consists of a directional radiation sensor, a laser range finder, two area radiation sensors, and a video camera mounted on a pan and tilt platform. This sensor package is deployable on a remotely operated vehicle. The outputs of the system are radiation intensity maps identifying both radiation source intensities and radiation levels throughout the room being surveyed. After completion of the survey, the data can be removed from the control station computer for further analysis or archiving.

  1. Formulation of a strategy for monitoring control integrity in critical digital control systems

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe

    1991-01-01

    Advanced aircraft will require flight critical computer systems for stability augmentation as well as guidance and control that must perform reliably in adverse, as well as nominal, operating environments. Digital system upset is a functional error mode that can occur in electromagnetically harsh environments, involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. A strategy is presented for dynamic upset detection to be used in the evaluation of critical digital controllers during the design and/or validation phases of development. Critical controllers must be able to be used in adverse environments that result from disturbances caused by an electromagnetic source such as lightning, high intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The upset detection strategy presented provides dynamic monitoring of a given control computer for degraded functional integrity that can result from redundancy management errors and control command calculation errors that could occur in an electromagnetically harsh operating environment. The use of Kalman filtering, data fusion, and decision theory in monitoring a given digital controller for control calculation errors, redundancy management errors, and control effectiveness is discussed.

  2. Nonlinear multilayers as optical limiters

    NASA Astrophysics Data System (ADS)

    Turner-Valle, Jennifer Anne

    1998-10-01

    In this work we present a non-iterative technique for computing the steady-state optical properties of nonlinear multilayers and we examine nonlinear multilayer designs for optical limiters. Optical limiters are filters with intensity-dependent transmission designed to curtail the transmission of incident light above a threshold irradiance value in order to protect optical sensors from damage due to intense light. Thin film multilayers composed of nonlinear materials exhibiting an intensity-dependent refractive index are used as the basis for optical limiter designs in order to enhance the nonlinear filter response by magnifying the electric field in the nonlinear materials through interference effects. The nonlinear multilayer designs considered in this work are based on linear optical interference filter designs which are selected for their spectral properties and electric field distributions. Quarter wave stacks and cavity filters are examined for their suitability as sensor protectors and their manufacturability. The underlying non-iterative technique used to calculate the optical response of these filters derives from recognizing that the multi-valued calculation of output irradiance as a function of incident irradiance may be turned into a single-valued calculation of incident irradiance as a function of output irradiance. Finally, the benefits and drawbacks of using nonlinear multilayers for optical limiting are examined and future research directions are proposed.

  3. Typical versus delayed speech onset influences verbal reporting of autistic interests.

    PubMed

    Chiodo, Liliane; Majerus, Steve; Mottron, Laurent

    2017-01-01

    The distinction between autism and Asperger syndrome has been abandoned in the DSM-5. However, this clinical categorization largely overlaps with the presence or absence of a speech onset delay which is associated with clinical, cognitive, and neural differences. It is unknown whether these different speech development pathways and associated cognitive differences are involved in the heterogeneity of the restricted interests that characterize autistic adults. This study tested the hypothesis that speech onset delay, or conversely, early mastery of speech, orients the nature and verbal reporting of adult autistic interests. The occurrence of a priori defined descriptors for perceptual and thematic dimensions was determined, as well as the perceived function and benefits, in the response of autistic people to a semi-structured interview on their intense interests. The number of words, grammatical categories, and proportion of perceptual/thematic descriptors were computed and compared between groups by variance analyses. The participants comprised 40 autistic adults grouped according to the presence (N = 20) or absence (N = 20) of speech onset delay, as well as 20 non-autistic adults, also with intense interests, matched for non-verbal intelligence using Raven's Progressive Matrices. The overall nature, function, and benefit of intense interests were similar across autistic subgroups, and between autistic and non-autistic groups. However, autistic participants with a history of speech onset delay used more perceptual than thematic descriptors when talking about their interests, whereas the opposite was true for autistic individuals without speech onset delay. This finding remained significant after controlling for linguistic differences observed between the two groups. Verbal reporting, but not the nature or positive function, of intense interests differed between adult autistic individuals depending on their speech acquisition history: oral reporting of intense interests was characterized by perceptual dominance for autistic individuals with delayed speech onset and thematic dominance for those without. This may contribute to the heterogeneous presentation observed among autistic adults of normal intelligence.

  4. Theoretical aspects of studies of oxide and semiconductor surfaces using low energy positrons

    NASA Astrophysics Data System (ADS)

    Fazleev, N. G.; Maddox, W. B.; Weiss, A. H.

    2011-01-01

    This paper presents the results of a theoretical study of positron surface and bulk states and annihilation characteristics of surface trapped positrons at the oxidized Cu(100) single crystal and at both As- and Ga-rich reconstructed GaAs(100) surfaces. The variations in atomic structure and chemical composition of the topmost layers of the surfaces associated with oxidation and reconstructions and the charge redistribution at the surfaces are found to affect localization and spatial extent of the positron surface-state wave functions. The computed positron binding energy, work function, and annihilation characteristics reveal their sensitivity to charge transfer effects, atomic structure and chemical composition of the topmost layers of the surfaces. Theoretical positron annihilation probabilities with relevant core electrons computed for the oxidized Cu(100) surface and the As- and Ga-rich reconstructed GaAs(100) surfaces are compared with experimental ones estimated from the positron annihilation induced Auger peak intensities measured from these surfaces.

  5. Analysis of positron lifetime spectra in polymers

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.

    1988-01-01

    A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and the intensities of various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis to compute the estimates that conform to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to the previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal ion containing Epon-828 samples. The results are described.
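
    A generic sketch of the fitting idea described above (not the PAPLS program itself): a sum of exponential decay components is convolved with the measured timing resolution function, and the lifetimes and intensities are refined by nonlinear least squares starting from spectrum-stripping estimates.

    ```python
    # Two-component lifetime fit with convolution by the resolution function.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 10.0, 500)                       # time axis (ns)
    resolution = np.exp(-0.5 * ((t - 0.5) / 0.12) ** 2)
    resolution /= resolution.sum()                        # normalised resolution function

    def model(t, i1, tau1, i2, tau2):
        decay = i1 * np.exp(-t / tau1) + i2 * np.exp(-t / tau2)
        return np.convolve(decay, resolution)[: t.size]   # convolve with resolution

    rng = np.random.default_rng(3)
    truth = (0.7, 0.4, 0.3, 2.0)                          # intensities and lifetimes
    data = model(t, *truth) + rng.normal(0.0, 0.002, t.size)

    p0 = (0.5, 0.3, 0.5, 1.5)                             # initial (stripping) estimates
    popt, _ = curve_fit(model, t, data, p0=p0)
    print("fitted lifetimes (ns):", popt[1], popt[3])
    ```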

  6. Adaptively combined FIR and functional link artificial neural network equalizer for nonlinear communication channel.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-04-01

    This paper proposes a novel computationally efficient adaptive nonlinear equalizer based on a combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate linear and nonlinear distortions in a nonlinear communication channel. This convex nonlinear combination results in improving the speed while retaining the lower steady-state error. In addition, since the CFFLANN does not need the hidden layers that exist in conventional neural-network-based equalizers, it exhibits a simpler structure than the traditional neural networks (NNs) and can require less computational burden during the training mode. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived by the modified least mean square (MLMS). Results obtained from the simulations clearly show that the proposed equalizer using the MLMS algorithm can effectively eliminate linear and nonlinear distortions of various intensities and provides better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of eigenvalue ratio (EVR) of the input correlation matrix are presented.

  7. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    NASA Astrophysics Data System (ADS)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
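
    For orientation, the sketch below shows the simplest classical baseline for this problem, a Fenton-Wilkinson-style moment match of the weighted sum to a single lognormal (assuming independent summands), checked against Monte Carlo. It is not the paper's modified-moments-plus-correction method; parameters and weights are illustrative.

    # Baseline sketch: match the first two moments of a weighted sum of
    # independent, non-identical lognormals to a single lognormal.
    import numpy as np

    rng = np.random.default_rng(1)
    mu = np.array([0.1, 0.3, -0.2, 0.0])       # lognormal parameters of the summands
    sigma = np.array([0.4, 0.3, 0.5, 0.2])
    w = np.array([0.25, 0.25, 0.25, 0.25])     # portfolio weights

    # exact first two moments of the weighted sum (independence assumed)
    m1 = np.sum(w * np.exp(mu + 0.5 * sigma**2))
    m2 = np.sum((w * np.exp(mu + 0.5 * sigma**2))**2 * (np.exp(sigma**2) - 1)) + m1**2
    sigma_s2 = np.log(m2 / m1**2)              # matched single-lognormal parameters
    mu_s = np.log(m1) - 0.5 * sigma_s2

    samples = (w * rng.lognormal(mu, sigma, size=(200000, 4))).sum(axis=1)  # Monte Carlo
    print("MC mean/var :", samples.mean(), samples.var())
    print("FW mean/var :", np.exp(mu_s + 0.5 * sigma_s2),
          (np.exp(sigma_s2) - 1) * np.exp(2 * mu_s + sigma_s2))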

  8. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  9. Understanding Coronal Heating through Time-Series Analysis and Nanoflare Modeling

    NASA Astrophysics Data System (ADS)

    Romich, Kristine; Viall, Nicholeen

    2018-01-01

    Periodic intensity fluctuations in coronal loops, a signature of temperature evolution, have been observed using the Atmospheric Imaging Assembly (AIA) aboard NASA’s Solar Dynamics Observatory (SDO) spacecraft. We examine the proposal that nanoflares, or impulsive bursts of energy release in the solar atmosphere, are responsible for the intensity fluctuations as well as the megakelvin-scale temperatures observed in the corona. Drawing on the work of Cargill (2014) and Bradshaw & Viall (2016), we develop a computer model of the energy released by a sequence of nanoflare events in a single magnetic flux tube. We then use EBTEL (Enthalpy-Based Thermal Evolution of Loops), a hydrodynamic model of plasma response to energy input, to simulate intensity as a function of time across the coronal AIA channels. We test the EBTEL output for periodicities using a spectral code based on Mann and Lees’ (1996) multitaper method and present preliminary results here. Our ultimate goal is to establish whether quasi-continuous or impulsive energy bursts better approximate the original SDO data.

  10. Aerial radiometric and magnetic survey: Aztec National Topographic Map, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-01-01

    The results of analyses of the airborne gamma radiation and total magnetic field survey flown for the region identified as the Aztec National Topographic Map NJ13-10 are presented. The airborne data gathered are reduced by ground computer facilities to yield profile plots of the basic uranium, thorium and potassium equivalent gamma radiation intensities, ratios of these intensities, aircraft altitude above the earth's surface, total gamma ray and earth's magnetic field intensity, correlated as a function of geologic units. The distribution of data within each geologic unit, for all surveyed map lines and tie lines, has been calculated and is included. Two sets of profiled data for each line are included, with one set displaying the above-cited data. The second set includes only flight line magnetic field, temperature, pressure, altitude data plus magnetic field data as measured at a base station. A general description of the area, including descriptions of the various geologic units and the corresponding airborne data, is included also.

  11. Aerial radiometric and magnetic survey: Lander National Topographic Map, Wyoming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-01-01

    The results of analyses of the airborne gamma radiation and total magnetic field survey flown for the region identified as the Lander National Topographic Map NK12-6 are presented. The airborne data gathered are reduced by ground computer facilities to yield profile plots of the basic uranium, thorium and potassium equivalent gamma radiation intensities, ratios of these intensities, aircraft altitude above the earth's surface, total gamma ray and earth's magnetic field intensity, correlated as a function of geologic units. The distribution of data within each geologic unit, for all surveyed map lines and tie lines, has been calculated and is included. Two sets of profiled data for each line are included, with one set displaying the above-cited data. The second set includes only flight line magnetic field, temperature, pressure, altitude data plus magnetic field data as measured at a base station. A general description of the area, including descriptions of the various geologic units and the corresponding airborne data, is included also.

  12. First-principles study of excitonic effects in Raman intensities

    NASA Astrophysics Data System (ADS)

    Gillet, Yannick; Giantomassi, Matteo; Gonze, Xavier

    2013-09-01

    The ab initio prediction of Raman intensities for bulk solids usually relies on the hypothesis that the frequency of the incident laser light is much smaller than the band gap. However, when the photon frequency is a sizable fraction of the energy gap, or higher, resonance effects appear. In the case of silicon, when excitonic effects are neglected, the response of the solid to light increases by nearly three orders of magnitude in the range of frequencies between the static limit and the gap. When excitonic effects are taken into account, an additional tenfold increase in the intensity is observed. We include these effects using a finite-difference scheme applied on the dielectric function obtained by solving the Bethe-Salpeter equation. Our results for the Raman susceptibility of silicon show stronger agreement with experimental data compared with previous theoretical studies. For the sampling of the Brillouin zone, a double-grid technique is proposed, resulting in a significant reduction in computational effort.
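
    As a minimal illustration of the finite-difference step described above, the Raman susceptibility is proportional to the derivative of the dielectric function with respect to the phonon displacement, which can be estimated by a central difference. Here `dielectric(u, omega)` is a placeholder for a BSE/DFT calculation, not a real code interface, and the step size is an assumption.

    # Central-difference estimate of d(epsilon)/du at u = 0 (schematic only).
    def raman_susceptibility(dielectric, omega, du=1e-3):
        return (dielectric(+du, omega) - dielectric(-du, omega)) / (2.0 * du)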

  13. Anomalous Fatigue Crack Growth Phenomena in High-Strength Steel

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; James, Mark A.; Johnston, William M., Jr.; Newman, James C., Jr.

    2004-01-01

    The growth of a fatigue crack through a material is the result of a complex interaction between the applied loading, component geometry, three-dimensional constraint, load history, environment, material microstructure and several other factors. Previous studies have developed experimental and computational methods to relate the fatigue crack growth rate to many of the above conditions, with the intent of discovering some fundamental material response, i.e. crack growth rate as a function of something. Currently, the technical community uses the stress intensity factor solution as a simplistic means to relate fatigue crack growth rate to loading, geometry and all other variables. The stress intensity factor solution is a very simple linear-elastic representation of the continuum mechanics portion of crack growth. In this paper, the authors present fatigue crack growth rate data for two different high-strength steel alloys generated using standard methods. The steels exhibit behaviour that the stress intensity factor solution cannot explain, in contrast to an aluminium alloy presented as a baseline for comparison.

  14. GREAT: a gradient-based color-sampling scheme for Retinex.

    PubMed

    Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo

    2017-04-01

    Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
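
    A hedged sketch of this sampling-and-rescaling idea is given below. The exact weighting used by GREAT is defined in the paper; the weight chosen here (edge gradient magnitude divided by one plus squared distance) is an illustrative stand-in, and the naive per-pixel loop is for clarity, not efficiency.

    # Sketch of a GREAT-like per-channel filter: select edge pixels above a
    # gradient threshold, then rescale each target pixel by a weighted average
    # over those edges (Retinex-style ratio to a local reference).
    import numpy as np

    def great_like_filter(channel, grad_thresh=0.1, eps=1e-6):
        gy, gx = np.gradient(channel.astype(float))
        mag = np.hypot(gx, gy)
        ey, ex = np.nonzero(mag > grad_thresh)          # relevant edge pixels
        if ey.size == 0:
            return channel.astype(float).copy()
        e_int = channel[ey, ex].astype(float)
        e_mag = mag[ey, ex]
        out = np.empty_like(channel, dtype=float)
        h, w = channel.shape
        for y in range(h):
            for x in range(w):
                d2 = (ey - y) ** 2 + (ex - x) ** 2
                wgt = e_mag / (1.0 + d2)                # closer, stronger edges count more
                local_white = np.sum(wgt * np.maximum(e_int, channel[y, x])) / (np.sum(wgt) + eps)
                out[y, x] = channel[y, x] / (local_white + eps)
        return np.clip(out, 0.0, 1.0)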

  15. Satisfaction with Life in Orofacial Pain Disorders: Associations and Theoretical Implications

    PubMed Central

    Boggero, Ian A.; Rojas-Ramirez, Marcia V.; de Leeuw, Reny; Carlson, Charles R.

    2016-01-01

    Aims To test if patients with masticatory myofascial pain, local myalgia, centrally mediated myalgia, disc displacement, capsulitis/synovitis, or continuous neuropathic pain differed in self-reported satisfaction with life. The study also tested if satisfaction with life was similarly predicted by measures of physical, emotional, and social functioning across disorders. Methods Satisfaction with life, fatigue, affective distress, social support, and pain data were extracted from the medical records of 343 patients seeking treatment for chronic orofacial pain. Patients were grouped by primary diagnosis assigned following their initial appointment. Satisfaction with life was compared between disorders, with and without pain intensity entered as a covariate. Disorder-specific linear regression models using physical, emotional, and social predictors of satisfaction with life were computed. Results Patients with centrally mediated myalgia reported significantly lower satisfaction with life than did patients with any of the other five disorders. Inclusion of pain intensity as a covariate weakened but did not eliminate the effect. Satisfaction with life was predicted by measures of physical, emotional, and social functioning, but these associations were not consistent across disorders. Conclusions Results suggest that reduced satisfaction with life in patients with centrally mediated myalgia is not due only to pain intensity. There may be other factors that predispose people to both reduced satisfaction with life and centrally mediated myalgia. Furthermore, the results suggest that satisfaction with life is differentially influenced by physical, emotional, and social functioning in different orofacial pain disorders. PMID:27128473

  16. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware, cloud computing is agile and cost-effective, yet scalable, and delivers similar performance.

  17. Toward reliable characterization of functional homogeneity in the human brain: preprocessing, scan duration, imaging resolution and computational space.

    PubMed

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F Xavier; Milham, Michael P

    2013-01-15

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test-retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated ReHo's TRT reliability, investigated the various factors influencing its reliability, and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3 dimensions (volume) and 2 dimensions (surface). Copyright © 2012 Elsevier Inc. All rights reserved.
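
    For readers unfamiliar with the measure itself, ReHo is commonly computed as Kendall's coefficient of concordance (KCC) over the time series of a voxel and its neighbors. The sketch below illustrates that calculation on synthetic data; it is not the pipeline used in the study, and the neighborhood size and data are illustrative.

    # Illustrative ReHo computation as Kendall's W over a voxel neighborhood.
    import numpy as np
    from scipy.stats import rankdata

    def kendalls_w(ts):
        """ts: array (K, n) -- K time series (voxel + neighbors), n time points."""
        K, n = ts.shape
        ranks = np.vstack([rankdata(row) for row in ts])   # rank each series over time
        R = ranks.sum(axis=0)                              # summed rank at each time point
        S = np.sum((R - R.mean()) ** 2)
        return 12.0 * S / (K ** 2 * (n ** 3 - n))

    rng = np.random.default_rng(0)
    shared = rng.standard_normal(200)
    ts = shared + 0.5 * rng.standard_normal((27, 200))     # 27-voxel neighborhood, correlated
    print("ReHo (KCC):", kendalls_w(ts))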

  18. New correction procedures for the fast field program which extend its range

    NASA Technical Reports Server (NTRS)

    West, M.; Sack, R. A.

    1990-01-01

    A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.

  19. Perspective: A controversial benchmark system for water-oxide interfaces: H2O/TiO2(110)

    NASA Astrophysics Data System (ADS)

    Diebold, Ulrike

    2017-07-01

    The interaction of water with the single-crystalline rutile TiO2(110) surface has been the object of intense investigations with both experimental and computational methods. Not only is TiO2(110) widely considered the prototypical oxide surface, its interaction with water is also important in many applications where this material is used. At first, experimental measurements were hampered by the fact that preparation recipes for well-controlled surfaces had yet to be developed, but clear experimental evidence emerged that water dissociates at defects, including oxygen vacancies and steps. For a perfect TiO2(110) surface, however, an intense debate has evolved over whether water adsorbs as an intact molecule or dissociates by donating a proton to a so-called bridge-bonded surface oxygen atom. Computational studies agree that the energy difference between these two states is very small and thus depends sensitively on the computational setup and on the approximations used in density functional theory (DFT). While a recent molecular beam/STM experiment [Z.-T. Wang et al., Proc. Natl. Acad. Sci. U. S. A. 114(8), 1801-1805 (2017)] gives conclusive evidence for a slight preference (0.035 eV) for molecular water and a small activation energy (0.36 eV) for dissociation, understanding the interface between liquid water and TiO2(110) arises as the next controversial frontier.

  20. Polycyclic Aromatic Hydrocarbons with Aliphatic Sidegroups: Intensity Scaling for the C-H Stretching Modes and Astrophysical Implications

    NASA Astrophysics Data System (ADS)

    Yang, X. J.; Li, Aigen; Glaser, R.; Zhong, J. X.

    2017-03-01

    The so-called unidentified infrared emission (UIE) features at 3.3, 6.2, 7.7, 8.6, and 11.3 μm ubiquitously seen in a wide variety of astrophysical regions are generally attributed to polycyclic aromatic hydrocarbon (PAH) molecules. Astronomical PAHs may have an aliphatic component, as revealed by the detection in many UIE sources of the aliphatic C-H stretching feature at 3.4 μm. The ratio of the observed intensity of the 3.4 μm feature to that of the 3.3 μm aromatic C-H feature allows one to estimate the aliphatic fraction of the UIE carriers. This requires knowledge of the intrinsic oscillator strengths of the 3.3 μm aromatic C-H stretch (A_3.3) and the 3.4 μm aliphatic C-H stretch (A_3.4). Lacking experimental data on A_3.3 and A_3.4 for the UIE candidate materials, one often has to rely on quantum-chemical computations. Although the second-order Møller-Plesset (MP2) perturbation theory with a large basis set is more accurate than the B3LYP density functional theory, MP2 is computationally very demanding and impractical for large molecules. Based on methylated PAHs, we show here that, by scaling the band strengths computed at an inexpensive level (e.g., B3LYP/6-31G*), we are able to obtain band strengths as accurate as those computed at far more expensive levels (e.g., MP2/6-311+G(3df,3pd)). We calculate the model spectra of methylated PAHs and their cations excited by starlight of different spectral shapes and intensities. We find that (I_3.4/I_3.3)_mod, the ratio of the model intensity of the 3.4 μm feature to that of the 3.3 μm feature, is insensitive to the spectral shape and intensity of the exciting starlight. We derive a straightforward relation for determining the aliphatic fraction of the UIE carriers (i.e., the ratio of the number of C atoms in aliphatic units, N_C,ali, to that in aromatic rings, N_C,aro) from the observed band ratios (I_3.4/I_3.3)_obs: N_C,ali/N_C,aro ≈ 0.57 × (I_3.4/I_3.3)_obs for neutrals and N_C,ali/N_C,aro ≈ 0.26 × (I_3.4/I_3.3)_obs for cations.
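
    A worked example of the band-ratio relation quoted above follows; the observed ratio used is illustrative, not a value from the paper.

    # Aliphatic fraction from the observed 3.4/3.3 micron band ratio.
    def aliphatic_fraction(i34_over_i33, charge="neutral"):
        factor = 0.57 if charge == "neutral" else 0.26     # neutrals vs cations
        return factor * i34_over_i33                       # N_C,ali / N_C,aro

    print(aliphatic_fraction(0.2))             # e.g. observed I3.4/I3.3 = 0.2 -> ~0.11
    print(aliphatic_fraction(0.2, "cation"))   # same ratio for cations -> ~0.05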

  1. SIMAP--a comprehensive database of pre-calculated protein sequence similarities, domains, annotations and clusters.

    PubMed

    Rattei, Thomas; Tischler, Patrick; Götz, Stefan; Jehl, Marc-André; Hoser, Jonathan; Arnold, Roland; Conesa, Ana; Mewes, Hans-Werner

    2010-01-01

    The prediction of protein function as well as the reconstruction of evolutionary genesis employing sequence comparison at large is still the most powerful tool in sequence analysis. Due to the exponential growth of the number of known protein sequences and the subsequent quadratic growth of the similarity matrix, the computation of the Similarity Matrix of Proteins (SIMAP) becomes a computationally intensive task. The SIMAP database provides a comprehensive and up-to-date pre-calculation of the protein sequence similarity matrix, sequence-based features and sequence clusters. As of September 2009, SIMAP covers 48 million proteins and more than 23 million non-redundant sequences. Novel features of SIMAP include the expansion of the sequence space by including databases such as ENSEMBL as well as the integration of metagenomes based on their consistent processing and annotation. Furthermore, protein function predictions by Blast2GO are pre-calculated for all sequences in SIMAP and the data access and query functions have been improved. SIMAP assists biologists to query the up-to-date sequence space systematically and facilitates large-scale downstream projects in computational biology. Access to SIMAP is freely provided through the web portal for individuals (http://mips.gsf.de/simap/) and for programmatic access through DAS (http://webclu.bio.wzw.tum.de/das/) and Web-Service (http://mips.gsf.de/webservices/services/SimapService2.0?wsdl).

  2. Data processing techniques used with MST radars: A review

    NASA Technical Reports Server (NTRS)

    Rastogi, P. K.

    1983-01-01

    The data processing methods used in high power radar probing of the middle atmosphere are examined. The radar acts as a spatial filter on the small scale refractivity fluctuations in the medium. The characteristics of the received signals are related to the statistical properties of these fluctuations. A functional outline of the components of a radar system is given. Most computation-intensive tasks are carried out by the processor. The processor computes a statistical function of the received signals, simultaneously for a large number of ranges. The slow fading of atmospheric signals is used to reduce the data input rate to the processor by coherent integration. The inherent range resolution of the radar experiments can be improved significantly with the use of pseudonoise phase codes to modulate the transmitted pulses and a corresponding decoding operation on the received signals. Commutability of the decoding and coherent integration operations is used to obtain a significant reduction in computations. The limitations of the processors are outlined. At the next level of data reduction, the measured function is parameterized by a few spectral moments that can be related to physical processes in the medium. The problems encountered in estimating the spectral moments in the presence of strong ground clutter, external interference, and noise are discussed. The graphical and statistical analysis of the inferred parameters is outlined. The requirements for special purpose processors for MST radars are discussed.
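
    As a toy illustration of the coherent-integration step mentioned above: averaging N consecutive complex voltage samples per range gate reduces the data rate by a factor of N while preserving a signal assumed coherent over the averaging time. The array shapes and names are illustrative.

    # Coherent integration over blocks of n_coh pulses for each range gate.
    import numpy as np

    def coherent_integrate(samples, n_coh):
        """samples: complex array (n_pulses, n_ranges); returns (n_pulses//n_coh, n_ranges)."""
        n_pulses, n_ranges = samples.shape
        usable = (n_pulses // n_coh) * n_coh
        blocks = samples[:usable].reshape(-1, n_coh, n_ranges)
        return blocks.mean(axis=1)        # one averaged sample per block: data rate / n_coh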

  3. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
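
    The computational point above is that, once the awkward integrals are expressed as a multivariate normal cumulative distribution function, standard routines evaluate them efficiently. The sketch below only illustrates that evaluation step; the dimension, mean and correlation structure are illustrative, not taken from the psoriatic arthritis application.

    # Evaluate a high-dimensional orthant-type integral as an MVN CDF.
    import numpy as np
    from scipy.stats import multivariate_normal

    dim = 10                                    # e.g. number of zero observations
    upper = np.zeros(dim)                       # upper integration limits
    mean = np.full(dim, -0.3)
    cov = 0.5 * np.eye(dim) + 0.5 * np.ones((dim, dim))   # exchangeable correlation

    prob = multivariate_normal(mean=mean, cov=cov).cdf(upper)
    print("high-dimensional integral as an MVN CDF:", prob)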

  4. Radiative transfer in multilayered random medium with laminar structure - Green's function approach

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1986-01-01

    For a multilayered random medium with a laminar structure a Green's function approach is introduced to obtain the emitted intensity due to an arbitrary point source. It is then shown that the approach is applicable to both active and passive remote sensing. In active remote sensing, the computed radar backscattering cross section for the multilayered medium includes the effects of both volume multiple scattering and surface multiple scattering at the layer boundaries. In passive remote sensing, the brightness temperature is obtained for arbitrary temperature profiles in the layers. As an illustration the brightness temperature and reflectivity are calculated for a bounded layer and compared with results in the literature.

  5. Exploring function activated chlorins using MCD spectroscopy and DFT methods: design of a chlorin with a remarkably intense, red Q band.

    PubMed

    Zhang, Angel; Stillman, Martin J

    2018-05-09

    The electronic structures of three previously synthesized Ni-coordinated chlorins with β-substituents of thioketone, fluorene, and ketone were investigated using magnetic circular dichroism spectroscopy (MCD) and density functional theory (DFT) for potential application as sensitizers for dye-sensitized solar cells (DSSCs). Computational studies on modeled Zn-coordinated chlorins allowed identification of charge transfer and d-d transitions of the Ni2+ coordinated chlorins. Two fictive Zn chlorins, M1 and M2, were designed with thiophene units based on the fluorene substituted chlorin. Substitution with thiophene altered the typical arrangement of the four Gouterman molecular orbitals (MOs) and red-shifted and greatly intensified the lowest energy absorption band (the Q band). The introduction of the thiophene-based MO as the LUMO below the usual Gouterman LUMO is predicted to increase the efficiency of electron transfer from the dye to the conduction band of the semiconductor in DSSCs. The addition of a donor group on the opposite pyrrole (M2) red-shifted the Q band further and introduced a donor-based MO between the typical Gouterman HOMO and HOMO-1. Despite the relatively small ΔHOMO, M1 and M2 exhibited remarkably intense Q bands. M2 would be a possible candidate for application in DSSCs due to its panchromatic absorption, intense and red-shifted Q band, and the presence of the substituent based MO properties. Another indicator of a successful dye is the alignment of the ground state and excited state oxidation potentials (GSOP and ESOP, respectively) with respect to the conduction band of the semiconductor. The GSOP for M2 lies 0.55 eV below the I-/I3- redox potential and the ESOP lies 0.48 eV above the TiO2 conduction band. The impact of the thiophene dominance in the LUMO also supports the prediction of efficient sensitization properties. The remarkably intense Q band of M2 predicted to be at 777 nm with a ΔHOMO of just 1.04 eV provides a synthetic route to tetrapyrroles with extremely intense, red Q bands without the need for aza nitrogens of the phthalocyanines. This study illustrates the value of guided synthesis using MCD spectral analysis and computational methods for optimizing the design of porphyrin dyes.

  6. Neuroimaging with functional near infrared spectroscopy: From formation to interpretation

    NASA Astrophysics Data System (ADS)

    Herrera-Vega, Javier; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe

    2017-09-01

    Functional Near Infrared Spectroscopy (fNIRS) is gaining momentum as a functional neuroimaging modality to investigate the cerebral hemodynamics subsequent to neural metabolism. Like other neuroimaging modalities, it is a neuroscience tool for understanding brain system function at the behavioural and cognitive levels. To extract useful knowledge from functional neuroimages it is critical to understand the series of transformations applied during the process of information retrieval and how they bound the interpretation. This process starts with the irradiation of the head tissues with infrared light to obtain the raw neuroimage, proceeds with computational and statistical analysis revealing hidden associations between pixel intensities and neural activity, and ends with the explanation of some particular aspect of brain function. To comprehend the overall process involved in fNIRS there is extensive literature addressing each individual step separately. This paper overviews the complete transformation sequence through image formation, reconstruction and analysis to provide an insight into the final functional interpretation.

  7. Inter-method reliability of paper surveys and computer assisted telephone interviews in a randomized controlled trial of yoga for low back pain

    PubMed Central

    2014-01-01

    Background Little is known about the reliability of different methods of survey administration in low back pain trials. This analysis was designed to determine the reliability of responses to self-administered paper surveys compared to computer assisted telephone interviews (CATI) for the primary outcomes of pain intensity and back-related function, and secondary outcomes of patient satisfaction, SF-36, and global improvement among participants enrolled in a study of yoga for chronic low back pain. Results Pain intensity, back-related function, and both physical and mental health components of the SF-36 showed excellent reliability at all three time points; ICC scores ranged from 0.82 to 0.98. Pain medication use showed good reliability; kappa statistics ranged from 0.68 to 0.78. Patient satisfaction had moderate to excellent reliability; ICC scores ranged from 0.40 to 0.86. Global improvement showed poor reliability at 6 weeks (ICC = 0.24) and 12 weeks (ICC = 0.10). Conclusion CATI shows excellent reliability for primary outcomes and at least some secondary outcomes when compared to self-administered paper surveys in a low back pain yoga trial. Having two reliable options for data collection may be helpful to increase response rates for core outcomes in back pain trials. Trial registration ClinicalTrials.gov: NCT01761617. Date of trial registration: December 4, 2012. PMID:24716775

  8. Quantification of pulmonary vessel diameter in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate

    2015-03-01

    Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important to study pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, estimating the diameter of the object using the Full Width Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of or below the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
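
    A minimal sketch of the FWHM estimate mentioned above is given below: it measures the width of a 1D intensity profile across a vessel at half of its peak height, with linear interpolation between samples. The function name and the pixel-spacing parameter are illustrative.

    # FWHM of a 1D intensity profile (samples converted to mm via spacing_mm).
    import numpy as np

    def fwhm(profile, spacing_mm=1.0):
        profile = np.asarray(profile, dtype=float)
        base = profile.min()
        half = base + 0.5 * (profile.max() - base)
        above = np.nonzero(profile >= half)[0]
        if above.size < 2:
            return 0.0
        i0, i1 = above[0], above[-1]
        # linear interpolation at the two half-maximum crossings
        left = i0 - (profile[i0] - half) / (profile[i0] - profile[i0 - 1]) if i0 > 0 else i0
        right = i1 + (profile[i1] - half) / (profile[i1] - profile[i1 + 1]) if i1 < len(profile) - 1 else i1
        return (right - left) * spacing_mm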

  9. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  10. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  11. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.

  12. The large-scale structure of software-intensive systems

    PubMed Central

    Booch, Grady

    2012-01-01

    The computer metaphor is dominant in most discussions of neuroscience, but the semantics attached to that metaphor are often quite naive. Herein, we examine the ontology of software-intensive systems, the nature of their structure and the application of the computer metaphor to the metaphysical questions of self and causation. PMID:23386964

  13. Comment on the paper "NDSD-1000: High-resolution, high-temperature nitrogen dioxide spectroscopic Databank" by A.A. Lukashevskaya, N.N. Lavrentieva, A.C. Dudaryonok, V.I. Perevalov, J Quant Spectrosc Radiat Transfer 2016;184:205-17

    NASA Astrophysics Data System (ADS)

    Perrin, A.; Ndao, M.; Manceron, L.

    2017-10-01

    A recent paper [1] presents a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The NDSD-1000 database contains line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) for numerous cold and hot bands of the 14N16O2 isotopomer of nitrogen dioxide. The parameters used for the calculation of line positions and intensities were generated through a global modeling of experimental data collected in the literature within the framework of the method of effective operators. However, the form of the effective dipole moment operator used to compute the NO2 line intensities in the NDSD-1000 database differs from the classical one used for line intensity calculations in the NO2 infrared literature [12]. Using Fourier transform spectra recorded at high resolution in the 6.3 μm region, it is shown here that the NDSD-1000 formulation is incorrect, since the computed intensities do not account properly for the (Int(+)/Int(-)) intensity ratio between the (+) (J = N + 1/2) and (-) (J = N - 1/2) electron spin-rotation subcomponents of the computed vibration-rotation transitions. On the other hand, in the HITRAN or GEISA spectroscopic databases, the NO2 line intensities were computed using the classical theoretical approach, and it is shown here that these data lead to a significantly better agreement between the observed and calculated spectra.

  14. Angle-dependent strong-field molecular ionization rates with tuned range-separated time-dependent density functional theory.

    PubMed

    Sissay, Adonay; Abanador, Paul; Mauger, François; Gaarde, Mette; Schafer, Kenneth J; Lopata, Kenneth

    2016-09-07

    Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high harmonic generation, photodamage, charge resonance enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first-principles can be challenging due to the large spatial extent of the wavefunction which stresses the accuracy of basis sets, and the intense fields which require non-perturbative time-dependent electronic structure methods. In this paper, we develop a time-dependent density functional theory approach which uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential which is projected onto an atom-centered basis set to remove ionized charge from the simulation. For the density functional theory (DFT) functional we use a tuned range-separated functional LC-PBE*, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to quantitatively agree with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT with GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.

  15. Angle-dependent strong-field molecular ionization rates with tuned range-separated time-dependent density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sissay, Adonay; Abanador, Paul; Mauger, François

    2016-09-07

    Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high harmonic generation, photodamage, charge resonance enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first-principles can be challenging due to the large spatial extent of the wavefunction which stresses the accuracy of basis sets, and the intense fields which require non-perturbative time-dependent electronic structure methods. In this paper, we develop a time-dependent density functional theory approach which uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential which is projected onto an atom-centered basis set to remove ionized charge from the simulation. For the density functional theory (DFT) functional we use a tuned range-separated functional LC-PBE*, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to quantitatively agree with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT with GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.

  16. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  17. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  18. Quantitative ROESY analysis of computational models: structural studies of citalopram and β-cyclodextrin complexes by 1H-NMR and computational methods.

    PubMed

    Ali, Syed Mashhood; Shamim, Shazia

    2015-07-01

    Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine the atom-accurate structure of the inclusion complexes. 1H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but it was not clear whether one or both rings were included. Molecular mechanics and molecular dynamics calculations showed the entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum energy computational models were analyzed for their accuracy in atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found in agreement. Several least-energy computational models were refined and analyzed until calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy and that quantitative ROESY analysis is a promising method. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.
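
    A hedged sketch of the ratio-based check described above follows. Under the usual isolated-spin-pair approximation the ROE intensity scales roughly as r^-6, so ratios of intensities predicted from a model's H...H distances can be compared with ratios of experimental peak intensities. The distances, labels and experimental value below are hypothetical, not values from the study.

    # Compare a model-predicted ROESY intensity ratio with an experimental one.
    def roesy_ratio(r_a, r_b):
        """Predicted intensity ratio I_a/I_b for proton pairs at distances r_a, r_b (angstrom)."""
        return (r_b / r_a) ** 6

    model_distances = {"pair_A": 2.6, "pair_B": 3.1}    # hypothetical model distances
    predicted = roesy_ratio(model_distances["pair_A"], model_distances["pair_B"])
    experimental = 2.4                                   # hypothetical measured ratio
    print(f"predicted {predicted:.2f} vs experimental {experimental:.2f}")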

  19. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  20. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they could feasibly tackle. The new methods that make intensive use of computer power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions Using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.

  1. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
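
    For intuition about the region-competition idea underlying Chan-Vese-type models, a simplified classical two-phase update is sketched below in 2D. The authors' 3D RCV model adds a noise-robust local energy with a controllable scale; this sketch shows only the standard global step, and all parameter values are illustrative.

    # Simplified Chan-Vese-style level-set update (2D, global region means).
    import numpy as np

    def chan_vese_step(image, phi, mu=0.2, dt=0.5, lam1=1.0, lam2=1.0):
        inside, outside = phi > 0, phi <= 0
        c1 = image[inside].mean() if inside.any() else 0.0     # mean inside the contour
        c2 = image[outside].mean() if outside.any() else 0.0   # mean outside
        # data force: each pixel is pushed toward the region whose mean it matches
        force = -lam1 * (image - c1) ** 2 + lam2 * (image - c2) ** 2
        gy, gx = np.gradient(phi)
        norm = np.hypot(gx, gy).clip(1e-8)
        curvature = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
        return phi + dt * (force + mu * curvature)             # simplified level-set evolution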

  2. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly-used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian EclipseTM, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms seem to suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
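
    A toy illustration of the quadratic-programming-style formulation behind such planners follows: find nonnegative beamlet weights w minimizing || D w - d_prescribed ||^2, where D is a dose-influence matrix. The matrix and prescription below are random stand-ins, not clinical data or the CERR implementation.

    # Nonnegative least-squares fluence optimization on synthetic data.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_voxels, n_beamlets = 500, 40
    D = rng.random((n_voxels, n_beamlets))                       # dose per unit beamlet weight
    d_presc = np.where(rng.random(n_voxels) < 0.3, 60.0, 0.0)    # target vs healthy voxels

    w, residual = nnls(D, d_presc)                               # nonnegative beamlet weights
    print("achieved / prescribed mean target dose:",
          (D @ w)[d_presc > 0].mean(), "/", 60.0)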

  3. IVF: exploiting intensity variation function for high-performance pedestrian tracking in forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Lamberti, Fabrizio; Sanna, Andrea; Paravati, Gianluca; Belluccini, Luca

    2014-02-01

    Tracking pedestrian targets in forward-looking infrared video sequences is a crucial component of a growing number of applications. At the same time, it is particularly challenging, since image resolution and signal-to-noise ratio are generally very low, while the nonrigidity of the human body produces highly variable target shapes. Moreover, motion can be quite chaotic with frequent target-to-target and target-to-scene occlusions. Hence, the trend is to design ever more sophisticated techniques, able to ensure rather accurate tracking results at the cost of a generally higher complexity. However, many of such techniques might not be suitable for real-time tracking in limited-resource environments. This work presents a technique that extends an extremely computationally efficient tracking method based on target intensity variation and template matching originally designed for targets with a marked and stable hot spot by adapting it to deal with much more complex thermal signatures and by removing the native dependency on configuration choices. Experimental tests demonstrated that, by working on multiple hot spots, the designed technique is able to achieve the robustness of other common approaches by limiting drifts and preserving the low-computational footprint of the reference method.

  4. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity.

    PubMed

    Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars

    2013-08-01

    Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short- and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.

  5. Stochastic resonance in a piecewise nonlinear model driven by multiplicative non-Gaussian noise and additive white noise

    NASA Astrophysics Data System (ADS)

    Guo, Yongfeng; Shen, Yajun; Tan, Jianguo

    2016-09-01

    The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. From numerical computations we obtain that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effect of the correlation time of the non-Gaussian noise, cross-correlation strength, the amplitude and frequency of the periodic signal on SR.

  6. Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.

    2015-03-01

    A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation - Evolutionary Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
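
    A toy sketch of the optimization loop compared in the paper (assumed names and a translation-only transform, not the authors' SDM code): maximize a similarity metric between a fixed and a moving image with the Nelder-Mead simplex; swapping in CMA-ES or mutual information only changes the optimizer or the objective.

    ```python
    import numpy as np
    from scipy.ndimage import shift
    from scipy.optimize import minimize

    def ncc(a, b):
        """Normalized cross correlation between two images."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    # Fixed image: a Gaussian blob; moving image: the same blob shifted by an unknown offset.
    y, x = np.mgrid[0:64, 0:64]
    fixed = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
    moving = shift(fixed, (3.5, -2.0))

    def cost(t):
        # Negative similarity: the optimizer minimizes, so better alignment gives lower cost.
        return -ncc(fixed, shift(moving, (t[0], t[1])))

    res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
    print("recovered translation:", res.x)   # approximately (-3.5, 2.0)
    ```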

  7. Computation of the intensities of parametric holographic scattering patterns in photorefractive crystals.

    PubMed

    Schwalenberg, Simon

    2005-06-01

    The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.

  8. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  9. Recent developments in structural proteomics for protein structure determination.

    PubMed

    Liu, Hsuan-Liang; Hsu, Jyh-Ping

    2005-05-01

    The major challenges in structural proteomics include identifying all the proteins on a genome-wide scale, determining their structure-function relationships, and outlining the precise three-dimensional structures of the proteins. Protein structures are typically determined by experimental approaches such as X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. However, the three-dimensional structural knowledge obtained by these techniques is still limited. Thus, computational methods such as comparative and de novo approaches and molecular dynamics simulations are intensively used as alternative tools to predict the three-dimensional structures and dynamic behavior of proteins. This review summarizes recent developments in structural proteomics for protein structure determination, including instrumental methods such as X-ray crystallography and NMR spectroscopy, and computational methods such as comparative and de novo structure prediction and molecular dynamics simulations.

  10. GeoBrain Computational Cyber-laboratory for Earth Science Studies

    NASA Astrophysics Data System (ADS)

    Deng, M.; di, L.

    2009-12-01

    Computational approaches (e.g., computer-based data visualization, analysis and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, Earth scientists, educators, and students currently face two major barriers that prevent them from effectively using computational approaches in their learning, research and application activities. The two barriers are: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove the two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to and easily usable by the Earth science community through 1) enabling seamless discovery, access and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling online geospatial process modeling and execution, and 7) building a user-friendly extensible web portal for users to access the cyber-laboratory resources. Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.

  11. Rapid Chemometric Filtering of Spectral Data

    NASA Technical Reports Server (NTRS)

    Beaman, Gregory; Pelletier, Michael; Seshadri, Suresh

    2004-01-01

    A method of rapid, programmable filtering of spectral transmittance, reflectance, or fluorescence data to measure the concentrations of chemical species has been proposed. By programmable is meant that a variety of spectral analyses can readily be performed and modified in software, firmware, and/or electronic hardware, without need to change optical filters or other optical hardware of the associated spectrometers. The method is intended to enable real-time identification of single or multiple target chemical species in applications that involve high-throughput screening of multiple samples. Examples of such applications include (but are not limited to) combinatorial chemistry, flow cytometry, bead assays, drug testing, remote sensing, and identification of targets. The basic concept of the proposed method is to perform real-time cross-correlations of a measured spectrum with one or more analytical function(s) of wavelength that could be, for example, the known spectra of target species. Assuming that measured spectral intensities are proportional to concentrations of target species plus background spectral intensities, then after subtraction of background levels, it should be possible to determine target species concentrations from cross-correlation values. Of course, the problem of determining the concentrations is more complex when spectra of different species overlap, but the problem can be solved by use of multiple analytical functions in combination with computational techniques that have been developed previously for analyses of this type. The method is applicable to the design and operation of a spectrometer in which spectrally dispersed light is measured by means of an active-pixel sensor (APS) array. The row or column dimension of such an array is generally chosen to be aligned along the spectral-dispersion dimension, so that each pixel intercepts light in a narrow spectral band centered on a wavelength that is a known function of the pixel position. The proposed method admits of two hardware implementations for computing cross-correlations in real time.
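
    The correlation idea can be made concrete with a small least-squares sketch (assumed spectra, not the proposed hardware): when the measured spectrum is a background term plus concentration-weighted target spectra, projecting it onto the known spectra recovers the concentrations.

    ```python
    import numpy as np

    wavelengths = np.linspace(400, 700, 300)                 # nm, assumed sampling grid

    def band(center, width):
        """Gaussian absorption/emission band used as a known target spectrum."""
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Analytical functions: two target-species spectra plus a flat background column.
    S = np.column_stack([band(480, 15), band(620, 20), np.ones_like(wavelengths)])

    true_conc = np.array([0.8, 0.3])
    rng = np.random.default_rng(1)
    measured = S[:, :2] @ true_conc + 0.05 + 0.01 * rng.standard_normal(wavelengths.size)

    # Least-squares fit of the measured spectrum against the analytical functions.
    coeffs, *_ = np.linalg.lstsq(S, measured, rcond=None)
    print("estimated concentrations:", coeffs[:2], "background level:", coeffs[2])
    ```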

  12. Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments

    NASA Technical Reports Server (NTRS)

    Carmen, Christina; Feikema, Douglas A.

    2003-01-01

    The present paper describes the results obtained from a one-dimensional, time-dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results for moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable, as traditionally believed, to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. Flame conditions studied have fuel equivalence ratios in the range φ = 0.6 to 0.9 at standard temperature and pressure.

  13. Observational Signatures of a Kink-unstable Coronal Flux Rope Using Hinode /EIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, B.; Botha, G. J. J.; Régnier, S.

    The signatures of energy release and energy transport for a kink-unstable coronal flux rope are investigated via forward modeling. Synthetic intensity and Doppler maps are generated from a 3D numerical simulation. The CHIANTI database is used to compute intensities for three Hinode/EIS emission lines that cover the thermal range of the loop. The intensities and Doppler velocities at simulation resolution are spatially degraded to the Hinode/EIS pixel size (1″), convolved using a Gaussian point-spread function (3″), and exposed for a characteristic time of 50 s. The synthetic images generated for rasters (moving slit) and sit-and-stare (stationary slit) are analyzed to find the signatures of the twisted flux and the associated instability. We find that there are several qualities of a kink-unstable coronal flux rope that can be detected observationally using Hinode/EIS, namely the growth of the loop radius, the increase in intensity toward the radial edge of the loop, and the Doppler velocity following an internal twisted magnetic field line. However, EIS cannot resolve the small, transient features present in the simulation, such as sites of small-scale reconnection (e.g., nanoflares).

  14. Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception

    DTIC Science & Technology

    2017-12-01

    computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650’s Intel...Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead

  15. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.

  16. Understanding light scattering by a coated sphere part 2: time domain analysis.

    PubMed

    Laven, Philip; Lock, James A

    2012-08-01

    Numerical computations were made of scattering of an incident electromagnetic pulse by a coated sphere that is large compared to the dominant wavelength of the incident light. The scattered intensity was plotted as a function of the scattering angle and delay time of the scattered pulse. For fixed core and coating radii, the Debye series terms that most strongly contribute to the scattered intensity in different regions of scattering angle-delay time space were identified and analyzed. For a fixed overall radius and an increasing core radius, the first-order rainbow was observed to evolve into three separate components. The original component faded away, while the two new components eventually merged together. The behavior of surface waves generated by grazing incidence at the core/coating and coating/exterior interfaces was also examined and discussed.

  17. Structure, Elastic Constants and XRD Spectra of Extended Solids under High Pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batyrev, I. G.; Coleman, S. P.; Ciezak-Jenkins, J. A.

    We present results of evolutionary simulations based on density functional calculations of a potentially new type of energetic material called extended solids: P-N and N-H. High-density structures with covalent bonds generated using variable and fixed concentration methods were analysed in terms of thermodynamic stability and agreement with experimental X-ray diffraction (XRD) spectra. X-ray diffraction spectra were calculated using a virtual diffraction algorithm that computes kinematic diffraction intensity in three-dimensional reciprocal space before being reduced to a two-theta line profile. Calculated XRD patterns were used to search for the structure of extended solids present at experimental pressures by optimizing data according to experimental XRD peak position, peak intensity and theoretically calculated enthalpy. Elastic constants have been calculated for thermodynamically stable structures of the P-N system.

  18. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data intensive and computation intensive. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiprocessor) servers. Another high-speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators are characterized by a flexible structure, scalability, and the ability to correlate data from 10 stations.
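
    A minimal single-baseline sketch of the operation such correlators parallelize (synthetic data, not the authors' SMP/MPI code): cross-correlate the signals from two stations chunk by chunk via FFTs; in a real correlator these chunks are what gets distributed across threads or MPI ranks.

    ```python
    import numpy as np

    def correlate_chunks(x, y, nchan=256):
        """Averaged cross-spectrum of two station signals, FX-correlator style."""
        x = x[: (x.size // nchan) * nchan].reshape(-1, nchan)
        y = y[: (y.size // nchan) * nchan].reshape(-1, nchan)
        X, Y = np.fft.rfft(x, axis=1), np.fft.rfft(y, axis=1)
        return (np.conj(X) * Y).mean(axis=0)      # visibility spectrum averaged over chunks

    rng = np.random.default_rng(0)
    common = rng.standard_normal(1 << 16)          # signal received by both stations
    delay = 7                                      # toy geometric delay in samples
    station_a = common + 0.5 * rng.standard_normal(common.size)
    station_b = np.roll(common, delay) + 0.5 * rng.standard_normal(common.size)

    vis = correlate_chunks(station_a, station_b)
    # The delay appears as a phase slope across channels; its lag-domain peak recovers it.
    lag = int(np.argmax(np.abs(np.fft.irfft(vis))))
    print("recovered delay (samples):", lag)       # expected: 7
    ```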

  19. VLSI neuroprocessors

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom-VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware implementable parallel algorithms, low-power and high-speed VLSI designs and building block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high-throughput map-data classification using feedforward neural networks, a terrain-based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain-based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two to four order of magnitude speed-up over conventional techniques and enables multiple assignments (many to many) not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four order of magnitude speed-up over optimized software approaches has been demonstrated.

  20. Draco,Version 6.x.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Kelly; Budge, Kent; Lowrie, Rob

    2016-03-03

    Draco is an object-oriented component library geared towards numerically intensive, radiation (particle) transport applications built for parallel computing hardware. It consists of semi-independent packages and a robust build system. The packages in Draco provide a set of components that can be used by multiple clients to build transport codes. The build system can also be extracted for use in clients. Software includes smart pointers, Design-by-Contract assertions, unit test framework, wrapped MPI functions, a file parser, unstructured mesh data structures, a random number generator, root finders and an angular quadrature component.

  1. Computational Identification and Analysis of Signaling Subnetworks with Distinct Functional Roles in the Regulation of TNF Production

    DTIC Science & Technology

    2016-01-04

    extract four distinct quantitative features of response timing and intensity: the trajectory peak height, the peak time, the area under the curve, and...

  2. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1993-01-01

    Experimental interferograms, schlieren, and shadowgraphs are used for quantitative and qualitative flow-field studies. These images are created by passing light through a flow field, and the recorded intensity patterns are functions of the phase shift and angular deflection of the light. As part of the grant NCC2-583, techniques and software have been developed for obtaining phase shifts from finite-fringe interferograms and for constructing optical images from Computational Fluid Dynamics (CFD) solutions. During the period from 1 Nov. 1992 - 30 Jun. 1993, research efforts have been concentrated in improving these techniques.

  3. A mixed-order nonlinear diffusion compressed sensing MR image reconstruction.

    PubMed

    Joy, Ajin; Paul, Joseph Suresh

    2018-03-07

    The aim is to avoid the formation of staircase artifacts in nonlinear diffusion-based MR image reconstruction without compromising computational speed. Whereas second-order diffusion encourages the evolution of pixel neighborhoods toward uniform intensities, fourth-order diffusion considers a smooth region to be not necessarily a uniform-intensity region but possibly a planar one. Therefore, a controlled application of the fourth-order diffusivity function is used to encourage second-order diffusion to reconstruct the smooth regions of the image as a plane rather than a group of blocks, while not being strong enough to introduce the undesirable speckle effect. The proposed method is compared with second- and fourth-order nonlinear diffusion reconstruction, total variation (TV), total generalized variation, and higher degree TV using in vivo data sets for different undersampling levels, with application to dictionary learning-based reconstruction. It is observed that the proposed technique preserves sharp boundaries in the image while preventing the formation of staircase artifacts in the regions of smoothly varying pixel intensities. It also shows reduced error measures compared with second-order nonlinear diffusion reconstruction or TV and converges faster than TV-based methods. Because nonlinear diffusion is known to be an effective alternative to TV for edge-preserving reconstruction, the crucial aspect of staircase artifact removal is addressed. Reconstruction is found to be stable for the experimentally determined range of the fourth-order regularization parameter, and therefore does not introduce a parameter search. Hence, the computational simplicity of second-order diffusion is retained. © 2018 International Society for Magnetic Resonance in Medicine.

  4. Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.

    PubMed

    Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar

    2017-11-03

    Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) similarity metric together with the Particle Swarm Optimization (PSO) method. The computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is carried out using the versatile Particle Swarm Optimization method, which is easy to implement and requires few parameters to be adjusted. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
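
    As a small sketch of the similarity metric at the heart of this approach, here is a histogram-based mutual information estimate on intensity alone (the gradient-vector-flow weighting described in the abstract is deliberately omitted); a PSO-driven registration would simply maximize this quantity over the transform parameters.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Mutual information of two equally sized images, from their joint histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                   # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Usage: MI is high for aligned images and collapses when one image is scrambled.
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    aligned = img + 0.05 * rng.standard_normal(img.shape)
    scrambled = rng.permutation(img.ravel()).reshape(img.shape)
    print(mutual_information(img, aligned))     # relatively high
    print(mutual_information(img, scrambled))   # near zero
    ```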

  5. Biased Intensity Judgements of Visceral Sensations After Learning to Fear Visceral Stimuli: A Drift Diffusion Approach.

    PubMed

    Zaman, Jonas; Madden, Victoria J; Iven, Julie; Wiech, Katja; Weltens, Nathalie; Ly, Huynh Giao; Vlaeyen, Johan W S; Van Oudenhove, Lukas; Van Diest, Ilse

    2017-10-01

    A growing body of research has identified fear of visceral sensations as a potential mechanism in the development and maintenance of visceral pain disorders. However, the extent to which such learned fear affects visceroception remains unclear. To address this question, we used a differential fear conditioning paradigm with nonpainful esophageal balloon distensions of 2 different intensities as conditioning stimuli (CSs). The experiment comprised preacquisition, acquisition, and postacquisition phases during which participants categorized the CSs with respect to their intensity. The CS+ was always followed by a painful electrical stimulus (unconditioned stimulus) during the acquisition phase and in 60% of the trials during postacquisition. The second stimulus (CS-) was never associated with pain. Analyses of galvanic skin and startle eyeblink responses as physiological markers of successful conditioning showed increased fear responses to the CS+ compared with the CS-, but only in the group with the low-intensity stimulus as CS+. Computational modeling of response times and response accuracies revealed that differential fear learning affected perceptual decision-making about the intensities of visceral sensations such that sensations were more likely to be categorized as more intense. These results suggest that associative learning might indeed contribute to visceral hypersensitivity in functional gastrointestinal disorders. This study shows that associative fear learning biases intensity judgements of visceral sensations toward perceiving such sensations as more intense. Learning-induced alterations in visceroception might therefore contribute to the development or maintenance of visceral pain. Copyright © 2017 American Pain Society. Published by Elsevier Inc. All rights reserved.
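
    For readers unfamiliar with the modelling approach, a minimal drift diffusion sketch follows (generic parameters, not those fitted in the study): evidence accumulates noisily toward one of two boundaries, and a bias in the starting point or drift shifts both the choice proportions and the response times, which is how an intensity-judgement bias is expressed in the model.

    ```python
    import numpy as np

    def ddm_trial(drift, boundary=1.0, start_bias=0.0, dt=0.001, noise=1.0, rng=None):
        """Simulate one two-choice drift diffusion trial.

        Returns (choice, reaction_time): choice 1 = upper boundary ("more intense"),
        choice 0 = lower boundary ("less intense").
        """
        rng = rng if rng is not None else np.random.default_rng()
        x, t = start_bias, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return (1 if x > 0 else 0), t

    rng = np.random.default_rng(2)
    # A positive starting-point bias makes "more intense" responses more likely and faster.
    trials = [ddm_trial(drift=0.5, start_bias=0.2, rng=rng) for _ in range(500)]
    choices = np.array([c for c, _ in trials])
    rts = np.array([t for _, t in trials])
    print("P('more intense'):", choices.mean(), "mean RT (s):", rts.mean())
    ```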

  6. Computer-based psychological treatment for comorbid depression and problematic alcohol and/or cannabis use: a randomized controlled trial of clinical efficacy.

    PubMed

    Kay-Lambkin, Frances J; Baker, Amanda L; Lewin, Terry J; Carr, Vaughan J

    2009-03-01

    To evaluate computer- versus therapist-delivered psychological treatment for people with comorbid depression and alcohol/cannabis use problems. Randomized controlled trial. Community-based participants in the Hunter Region of New South Wales, Australia. Ninety-seven people with comorbid major depression and alcohol/cannabis misuse. All participants received a brief intervention (BI) for depressive symptoms and substance misuse, followed by random assignment to: no further treatment (BI alone); or nine sessions of motivational interviewing and cognitive behaviour therapy (intensive MI/CBT). Participants allocated to the intensive MI/CBT condition were selected at random to receive their treatment 'live' (i.e. delivered by a psychologist) or via a computer-based program (with brief weekly input from a psychologist). Depression, alcohol/cannabis use and hazardous substance use index scores measured at baseline, and 3, 6 and 12 months post-baseline assessment. (i) Depression responded better to intensive MI/CBT compared to BI alone, with 'live' treatment demonstrating a strong short-term beneficial effect which was matched by computer-based treatment at 12-month follow-up; (ii) problematic alcohol use responded well to BI alone and even better to the intensive MI/CBT intervention; (iii) intensive MI/CBT was significantly better than BI alone in reducing cannabis use and hazardous substance use, with computer-based therapy showing the largest treatment effect. Computer-based treatment, targeting both depression and substance use simultaneously, results in at least equivalent 12-month outcomes relative to a 'live' intervention. For clinicians treating people with comorbid depression and alcohol problems, BIs addressing both issues appear to be an appropriate and efficacious treatment option. Primary care of those with comorbid depression and cannabis use problems could involve computer-based integrated interventions for depression and cannabis use, with brief regular contact with the clinician to check on progress.

  7. Measurement of turbulent spatial structure and kinetic energy spectrum by exact temporal-to-spatial mapping

    NASA Astrophysics Data System (ADS)

    Buchhave, Preben; Velte, Clara M.

    2017-08-01

    We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that completely bypasses the need for Taylor's hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: (1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); (2) the measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; (3) the exact mapping proposed herein has been applied to the high turbulence intensity flows investigated to avoid the significant distortions caused by Taylor's hypothesis. The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet—the non-equilibrium developing region and the outermost parts of the developed jet. The proposed mapping is successfully validated using corresponding directly measured spatial statistics in the fully developed jet, even in the difficult outer regions of the jet where the average convection velocity is negligible and turbulence intensities increase dramatically. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
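
    A schematic of the mapping itself, as we read it (assumed variable names): each sample in the time record is assigned a spatial extent equal to its instantaneous convection distance |u|Δt, so the cumulative sum of those extents converts the time series into a spatial record without invoking Taylor's hypothesis.

    ```python
    import numpy as np

    def temporal_to_spatial(u, speed, dt, dx):
        """Map a time record u(t) onto a uniformly sampled spatial record u(s).

        u     : measured velocity component per sample
        speed : instantaneous velocity magnitude per sample (the local convection speed)
        dt    : sampling interval
        dx    : desired uniform spatial sampling of the output record
        """
        # Position of each sample: cumulative convection distance travelled so far.
        s = np.concatenate(([0.0], np.cumsum(np.abs(speed) * dt)))[:-1]
        s_uniform = np.arange(0.0, s[-1], dx)
        return s_uniform, np.interp(s_uniform, s, u)

    # Usage with synthetic data: a fluctuating signal convected at a varying speed.
    rng = np.random.default_rng(6)
    dt = 1e-4
    u = 0.01 * rng.standard_normal(100_000).cumsum()
    speed = 5.0 + 0.5 * rng.standard_normal(u.size)      # mean convection plus fluctuations
    s, u_of_s = temporal_to_spatial(u, speed, dt, dx=1e-3)
    # u_of_s can now feed a wavenumber spectrum or spatial structure functions directly.
    ```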

  8. A computationally efficient method for incorporating spike waveform information into decoding algorithms.

    PubMed

    Ventura, Valérie; Todorova, Sonia

    2015-05-01

    Spike-based brain-computer interfaces (BCIs) have the potential to restore motor ability to people with paralysis and amputation, and have shown impressive performance in the lab. To transition BCI devices from the lab to the clinic, decoding must proceed automatically and in real time, which prohibits the use of algorithms that are computationally intensive or require manual tweaking. A common choice is to avoid spike sorting and treat the signal on each electrode as if it came from a single neuron, which is fast, easy, and therefore desirable for clinical use. But this approach ignores the kinematic information provided by individual neurons recorded on the same electrode. The contribution of this letter is a linear decoding model that extracts kinematic information from individual neurons without spike-sorting the electrode signals. The method relies on modeling sample averages of waveform features as functions of kinematics, which is automatic and requires minimal data storage and computation. In offline reconstruction of arm trajectories of a nonhuman primate performing reaching tasks, the proposed method performs as well as decoders based on expertly manually and automatically sorted spikes.
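
    A toy version of the decoding idea (assumed feature and kinematic definitions, not the authors' model): per-electrode bin averages of a waveform feature are regressed onto kinematics with ridge regression, with no spike sorting anywhere in the pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins, n_electrodes = 2000, 32

    # Hypothetical ground-truth kinematics (e.g., 2D hand velocity per time bin).
    kinematics = 0.01 * np.cumsum(rng.standard_normal((n_bins, 2)), axis=0)

    # Simulated per-bin averages of a waveform feature (e.g., mean spike amplitude)
    # that depend linearly on the kinematics plus noise -- unsorted electrode signals.
    true_map = rng.standard_normal((2, n_electrodes))
    features = kinematics @ true_map + 0.5 * rng.standard_normal((n_bins, n_electrodes))

    # Ridge-regression decoder: kinematics ~ features (plus an intercept column).
    lam = 1.0
    X = np.hstack([features, np.ones((n_bins, 1))])
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ kinematics)
    decoded = X @ W
    print("RMS decoding error:", np.sqrt(((decoded - kinematics) ** 2).mean()))
    ```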

  9. The role of fault surface geometry in the evolution of the fault deformation zone: comparing modeling with field example from the Vignanotica normal fault (Gargano, Southern Italy).

    NASA Astrophysics Data System (ADS)

    Maggi, Matteo; Cianfarra, Paola; Salvini, Francesco

    2013-04-01

    Faults have a (brittle) deformation zone that can be described as the presence of two distinct zones: an internal Fault Core (FC) and an external Fault Damage Zone (FDZ). The FC is characterized by grinding processes that comminute the rock grains to a final grain-size distribution characterized by the prevalence of smaller grains over larger ones, represented by high fractal dimensions (up to 3.4). On the other hand, the FDZ is characterized by a network of fracture sets with characteristic attitudes (i.e., Riedel cleavages). This deformation pattern has important consequences for rock permeability. The FC often represents a hydraulic barrier, while the FDZ, with its fracture connectivity, represents a zone of higher permeability. The observation of faults reveals that the dimensions and characteristics of the FC and FDZ vary both in intensity and in extent along them. One of the controlling factors in FC and FDZ development is the fault plane geometry. By changing its attitude, the fault plane geometry locally alters the stress components produced by the fault kinematics, and its combination with the bulk boundary conditions (regional stress field, fluid pressure, rock rheology) is responsible for the development of zones of higher and lower fracture intensity with variable extension along the fault planes. Furthermore, the displacement along faults produces a cumulative deformation pattern that varies through time. Modeling of the fault evolution through time (4D modeling) is therefore required to fully describe the fracturing and therefore the permeability. In this presentation we show a methodology developed to predict the distribution of fracture intensity by integrating seismic data and numerical modeling. Fault geometry is carefully reconstructed by interpolating stick lines from interpreted seismic sections converted to depth. The modeling is based on a mixed numerical/analytical method. The fault surface is discretized into cells with their geometric and rheological characteristics. For each cell, the acting stress and strength are computed by analytical laws (Coulomb failure). Total brittle deformation for each cell is then computed by cumulating the brittle failure values along the path of each cell belonging to one side onto the facing one. The brittle failure value is provided by the DF function, that is, the difference between the computed shear stress and the strength of the cell at each step along its path, computed using the in-house developed Frap software. The widths of the FC and the FDZ are computed as a function of the DF distribution and the displacement around the fault. This methodology has been successfully applied to model the brittle deformation pattern of the Vignanotica normal fault (Gargano, Southern Italy), where fracture intensity is expressed by the dimensionless H/S ratio, representing the ratio between the dimension and the spacing of homologous fracture sets (i.e., groups of parallel fractures that can be ascribed to the same event/stage/stress field).
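
    A compact sketch of the kind of cell-wise failure evaluation described above (an illustrative Coulomb criterion with assumed parameter values, not the Frap software): resolve a stress tensor onto a fault-cell plane and compare the shear stress with the frictional strength, the difference playing the role of the DF value.

    ```python
    import numpy as np

    def coulomb_df(stress, normal, cohesion=5.0, friction=0.6):
        """Resolved shear stress minus Coulomb strength on a plane (MPa, compression positive).

        Positive values indicate brittle failure of the fault cell.
        """
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        traction = stress @ n                                   # traction vector on the plane
        sigma_n = float(traction @ n)                           # normal stress
        tau = float(np.linalg.norm(traction - sigma_n * n))     # shear stress
        return tau - (cohesion + friction * sigma_n)

    # Usage: a simple regional stress field acting on a 60-degree-dipping cell.
    stress = np.diag([30.0, 20.0, 10.0])        # principal stresses, MPa (assumed values)
    dip = np.radians(60.0)
    normal = np.array([np.sin(dip), 0.0, np.cos(dip)])
    print("DF =", coulomb_df(stress, normal))   # negative here: this cell does not fail
    ```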

  10. Genes Responsive to Low-Intensity Pulsed Ultrasound in MC3T3-E1 Preosteoblast Cells

    PubMed Central

    Tabuchi, Yoshiaki; Sugahara, Yuuki; Ikegame, Mika; Suzuki, Nobuo; Kitamura, Kei-ichiro; Kondo, Takashi

    2013-01-01

    Although low-intensity pulsed ultrasound (LIPUS) has been shown to enhance bone fracture healing, the underlying mechanism of LIPUS remains to be fully elucidated. Here, to better understand the molecular mechanism underlying cellular responses to LIPUS, we investigated gene expression profiles in mouse MC3T3-E1 preosteoblast cells exposed to LIPUS using high-density oligonucleotide microarrays and computational gene expression analysis tools. Although treatment of the cells with a single 20-min LIPUS (1.5 MHz, 30 mW/cm2) did not affect the cell growth or alkaline phosphatase activity, the treatment significantly increased the mRNA level of Bglap. Microarray analysis demonstrated that 38 genes were upregulated and 37 genes were downregulated by 1.5-fold or more in the cells at 24-h post-treatment. Ingenuity pathway analysis demonstrated that the gene network U (up) contained many upregulated genes that were mainly associated with bone morphology in the category of biological functions of skeletal and muscular system development and function. Moreover, the biological function of the gene network D (down), which contained downregulated genes, was associated with gene expression, the cell cycle and connective tissue development and function. These results should help to further clarify the molecular basis of the mechanisms of the LIPUS response in osteoblast cells. PMID:24252911

  11. A nonlinear filter-bank model of the guinea-pig cochlear nerve: Rate responses

    NASA Astrophysics Data System (ADS)

    Sumner, Christian J.; O'Mard, Lowel P.; Lopez-Poveda, Enrique A.; Meddis, Ray

    2003-06-01

    The aim of this study is to produce a functional model of the auditory nerve (AN) response of the guinea-pig that reproduces a wide range of important responses to auditory stimulation. The model is intended for use as an input to larger scale models of auditory processing in the brain-stem. A dual-resonance nonlinear filter architecture is used to reproduce the mechanical tuning of the cochlea. Transduction to the activity on the AN is accomplished with a recently proposed model of the inner-hair-cell. Together, these models have been shown to be able to reproduce the response of high-, medium-, and low-spontaneous rate fibers from the guinea-pig AN at high best frequencies (BFs). In this study we generate parameters that allow us to fit the AN model to data from a wide range of BFs. By varying the characteristics of the mechanical filtering as a function of the BF it was possible to reproduce the BF dependence of frequency-threshold tuning curves, AN rate-intensity functions at and away from BF, compression of the basilar membrane at BF as inferred from AN responses, and AN iso-intensity functions. The model is a convenient computational tool for the simulation of the range of nonlinear tuning and rate-responses found across the length of the guinea-pig cochlear nerve.

  12. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
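
    The Monte Carlo estimate itself is simple to state; the contribution of the paper is performing it quickly on a GPU with full trajectory propagation. A CPU-only sketch with assumed Gaussian uncertainty at the time of closest approach (no propagation, no GPU):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_samples = 1_000_000

    # Assumed relative state at closest approach: mean miss vector (km) and its covariance.
    mean_miss = np.array([0.12, 0.05, 0.00])
    cov = np.diag([0.05, 0.08, 0.03]) ** 2
    combined_radius = 0.02                        # sum of the two RSO collision radii, km

    # Sample relative positions and count the fraction inside the combined hard-body sphere.
    samples = rng.multivariate_normal(mean_miss, cov, size=n_samples)
    miss_distance = np.linalg.norm(samples, axis=1)
    p_collision = np.mean(miss_distance < combined_radius)
    print("estimated collision probability:", p_collision)
    ```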

  13. Low Latency Workflow Scheduling and an Application of Hyperspectral Brightness Temperatures

    NASA Astrophysics Data System (ADS)

    Nguyen, P. T.; Chapman, D. R.; Halem, M.

    2012-12-01

    New system analytics for Big Data computing holds the promise of major scientific breakthroughs and discoveries from the exploration and mining of the massive data sets becoming available to the science community. However, such data intensive scientific applications face severe challenges in accessing, managing and analyzing petabytes of data. While the Hadoop MapReduce environment has been successfully applied to data intensive problems arising in business, there are still many scientific problem domains where limitations in the functionality of MapReduce systems prevent its wide adoption by those communities. This is mainly because MapReduce does not readily support the unique science discipline needs such as special science data formats, graphic and computational data analysis tools, maintaining high degrees of computational accuracies, and interfacing with application's existing components across heterogeneous computing processors. We address some of these limitations by exploiting the MapReduce programming model for satellite data intensive scientific problems and address scalability, reliability, scheduling, and data management issues when dealing with climate data records and their complex observational challenges. In addition, we will present techniques to support the unique Earth science discipline needs such as dealing with special science data formats (HDF and NetCDF). We have developed a Hadoop task scheduling algorithm that improves latency by 2x for a scientific workflow including the gridding of the EOS AIRS hyperspectral Brightness Temperatures (BT). This workflow processing algorithm has been tested at the Multicore Computing Center private Hadoop based Intel Nehalem cluster, as well as in a virtual mode under the Open Source Eucalyptus cloud. The 55TB AIRS hyperspectral L1b Brightness Temperature record has been gridded at the resolution of 0.5x1.0 degrees, and we have computed a 0.9 annual anti-correlation to the El Nino Southern oscillation in the Nino 4 region, as well as a 1.9 Kelvin decadal Arctic warming in the 4u and 12u spectral regions. Additionally, we will present the frequency of extreme global warming events by the use of a normalized maximum BT in a grid cell relative to its local standard deviation. A low-latency Hadoop scheduling environment maintains data integrity and fault tolerance in a MapReduce data intensive Cloud environment while improving the "time to solution" metric by 35% when compared to a more traditional parallel processing system for the same dataset. Our next step will be to improve the usability of our Hadoop task scheduling system, to enable rapid prototyping of data intensive experiments by means of processing "kernels". We will report on the performance and experience of implementing these experiments on the NEX testbed, and propose the use of a graphical directed acyclic graph (DAG) interface to help us develop on-demand scientific experiments. Our workflow system works within Hadoop infrastructure as a replacement for the FIFO or FairScheduler, thus the use of Apache "Pig" latin or other Apache tools may also be worth investigating on the NEX system to improve the usability of our workflow scheduling infrastructure for rapid experimentation.
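
    The gridding step itself is straightforward once the granules are accessible; the difficulty the Hadoop workflow addresses is doing it over a 55 TB record. A single-node sketch of binning swath brightness temperatures onto a 0.5 x 1.0 degree grid (array names assumed):

    ```python
    import numpy as np

    def grid_brightness_temps(lat, lon, bt, dlat=0.5, dlon=1.0):
        """Average swath brightness temperatures onto a regular latitude/longitude grid."""
        nlat, nlon = int(180 / dlat), int(360 / dlon)
        i = np.clip(((lat + 90.0) / dlat).astype(int), 0, nlat - 1)
        j = np.clip(((lon + 180.0) / dlon).astype(int), 0, nlon - 1)
        total = np.zeros((nlat, nlon))
        count = np.zeros((nlat, nlon))
        np.add.at(total, (i, j), bt)        # accumulate observations per grid cell
        np.add.at(count, (i, j), 1.0)
        with np.errstate(invalid="ignore"):
            return total / count            # NaN where a cell received no observations

    # Usage with synthetic observations.
    rng = np.random.default_rng(8)
    lat = rng.uniform(-90.0, 90.0, 100_000)
    lon = rng.uniform(-180.0, 180.0, 100_000)
    bt = 250.0 + 30.0 * np.cos(np.radians(lat)) + rng.standard_normal(lat.size)
    grid = grid_brightness_temps(lat, lon, bt)
    ```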

  14. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  15. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
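
    A worked example of the first family mentioned above, kept deliberately simple: a percentile bootstrap confidence interval for the mean of a small skewed sample.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.exponential(scale=2.0, size=50)      # a small, skewed sample

    # Resample with replacement many times and collect the statistic of interest.
    n_boot = 10_000
    boot_means = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
    print(f"sample mean = {data.mean():.2f}, 95% bootstrap CI = ({ci_low:.2f}, {ci_high:.2f})")
    ```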

  16. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    NASA Technical Reports Server (NTRS)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  17. User's manual for a computer program for simulating intensively managed allowable cut.

    Treesearch

    Robert W. Sassaman; Ed Holt; Karl Bergsvik

    1972-01-01

    Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....

  18. PNNL's Data Intensive Computing research battles Homeland Security threats

    ScienceCinema

    David Thurman; Joe Kielman; Katherine Wolf; David Atkinson

    2018-05-11

    The Pacific Northwest National Laboratory's (PNNL's) approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  19. PNNL pushing scientific discovery through data intensive computing breakthroughs

    ScienceCinema

    Deborah Gracio; David Koppenaal; Ruby Leung

    2018-05-18

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  20. Self-Administered Cued Naming Therapy: A Single-Participant Investigation of a Computer-Based Therapy Program Replicated in Four Cases

    ERIC Educational Resources Information Center

    Ramsberger, Gail; Marie, Basem

    2007-01-01

    Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…

  1. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

    Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
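
    The abstract does not give the scattering formula itself; as a rough illustration of how per-bead (coarse-grained) form factors enter an orientationally averaged intensity, here is a minimal sketch of the Debye equation in Python. All names and array shapes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def debye_intensity(q_values, positions, form_factors):
    """Orientationally averaged SAXS intensity via the Debye equation,
    I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij)/(q r_ij), with one
    coarse-grained bead (e.g. one per residue) at each position.

    q_values     : (Nq,) scattering vector magnitudes
    positions    : (N, 3) bead coordinates
    form_factors : (Nq, N) q-dependent form factor of each bead
    """
    # pairwise distances between beads
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)                  # (N, N)

    intensity = np.empty_like(q_values)
    for k, q in enumerate(q_values):
        f = form_factors[k]                            # (N,)
        # np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi;
        # the diagonal (r = 0) correctly evaluates to 1
        kernel = np.sinc(q * r / np.pi)
        intensity[k] = f @ kernel @ f
    return intensity
```

    In this picture, the EDM-optimized per-residue form factors would simply replace the `form_factors` array; the hydration-shell beads and excluded-volume correction described above would add further terms to the sum.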

  2. Modulation of the pupil function of microscope objective lens for multifocal multi-photon microscopy using a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Matsumoto, Naoya; Okazaki, Shigetoshi; Takamoto, Hisayoshi; Inoue, Takashi; Terakawa, Susumu

    2014-02-01

    We propose a method for high precision modulation of the pupil function of a microscope objective lens to improve the performance of multifocal multi-photon microscopy (MMM). To modulate the pupil function, we adopt a spatial light modulator (SLM) and place it at the conjugate position of the objective lens. The SLM can generate an arbitrary number of spots to excite the multiple fluorescence spots (MFS) at the desired positions and intensities by applying an appropriate computer-generated hologram (CGH). This flexibility allows us to control the MFS according to the photobleaching level of a fluorescent protein and phototoxicity of a specimen. However, when a large number of excitation spots are generated, the intensity distribution of the MFS is significantly different from the one originally designed due to misalignment of the optical setup and characteristics of the SLM. As a result, the image of a specimen obtained using laser scanning for the MFS has block noise segments because the SLM could not generate a uniform MFS. To improve the intensity distribution of the MFS, we adaptively redesigned the CGH based on the observed MFS. We experimentally demonstrate an improvement in the uniformity of a 10 × 10 MFS grid using a dye solution. The simplicity of the proposed method will allow it to be applied for calibration of MMM before observing living tissue. After the MMM calibration, we performed laser scanning with two-photon excitation to observe a real specimen without detecting block noise segments.

  3. Atomoxetine administration combined with intensive speech therapy for post-stroke aphasia: evaluation by a novel SPECT method.

    PubMed

    Yamada, Naoki; Kakuda, Wataru; Yamamoto, Kazuma; Momosaki, Ryo; Abo, Masahiro

    2016-09-01

    We clarified the safety, feasibility, and efficacy of atomoxetine administration combined with intensive speech therapy (ST) for patients with post-stroke aphasia. In addition, we investigated the effect of atomoxetine treatment on neural activity of surrounding lesioned brain areas. Four adult patients with motor-dominant aphasia and a history of left hemispheric stroke were studied. We have registered on the clinical trials database (ID: JMA-IIA00215). Daily atomoxetine administration of 40 mg was initiated two weeks before admission and raised to 80 mg 1 week before admission. During the subsequent 13-day hospitalization, administration of atomoxetine was raised to 120 mg and daily intensive ST (120 min/day, one-on-one training) was provided. Language function was assessed using the Japanese version of The Western Aphasia Battery (WAB) and the Token test two weeks prior to admission, on the day of admission, and at discharge. At two weeks prior to admission and at discharge, each patient's cortical blood flow was measured using (123)I-IMP-single photon emission computed tomography (SPECT). This protocol was successfully completed by all patients without any adverse effects. Four patients showed improved language function with the median of the Token Test increasing from 141 to 149, and the repetition score of WAB increasing from 88 to 99. In addition, cortical blood flow surrounding lesioned brain areas was found to increase following intervention in all patients. Atomoxetine administration and intensive ST were safe and feasible for post-stroke aphasia, suggesting their potential usefulness in the treatment of this patient population.

  4. White noise analysis of Phycomyces light growth response system. I. Normal intensity range.

    PubMed Central

    Lipson, E D

    1975-01-01

    The Wiener-Lee-Schetzen method for the identification of a nonlinear system through white Gaussian noise stimulation was applied to the transient light growth response of the sporangiophore of Phycomyces. In order to cover a moderate dynamic range of light intensity I, the input variable was defined to be log I. The experiments were performed in the normal range of light intensity, centered about I0 = 10^-6 W/cm^2. The kernels of the Wiener functionals were computed up to second order. Within the range of a few decades the system is reasonably linear with log I. The main nonlinear feature of the second-order kernel corresponds to the property of rectification. Power spectral analysis reveals that the slow dynamics of the system are of at least fifth order. The system can be represented approximately by a linear transfer function, including a first-order high-pass (adaptation) filter with a 4 min time constant and an underdamped fourth-order low-pass filter. Accordingly a linear electronic circuit was constructed to simulate the small scale response characteristics. In terms of the adaptation model of Delbrück and Reichardt (1956, in Cellular Mechanisms in Differentiation and Growth, Princeton University Press), kernels were deduced for the dynamic dependence of the growth velocity (output) on the "subjective intensity", a presumed internal variable. Finally the linear electronic simulator above was generalized to accommodate the large scale nonlinearity of the adaptation model and to serve as a tool for a deeper test of the model. PMID:1203444
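
    For readers unfamiliar with the Lee-Schetzen identification step mentioned above, a minimal sketch of estimating the first-order Wiener kernel by input-output cross-correlation follows. The function and variable names are hypothetical, and the discrete normalization of the white-noise power level is an assumption.

```python
import numpy as np

def first_order_wiener_kernel(x, y, max_lag, dt):
    """Lee-Schetzen estimate of the first-order Wiener kernel:
    h1(tau) = <y(t) x(t - tau)> / P, where x is zero-mean Gaussian white
    noise (here the log light intensity), y is the response (growth
    velocity), and P = var(x) * dt approximates the white-noise power level.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)   # remove the zeroth-order term
    power = x.var() * dt
    n = len(x)
    kernel = np.zeros(max_lag)
    for lag in range(max_lag):
        # average of y(t) * x(t - lag) over the usable part of the record
        kernel[lag] = np.mean(y[lag:n] * x[0:n - lag]) / power
    return kernel
```

    Second-order kernels would follow the same pattern with a double cross-correlation against two lagged copies of the input.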

  5. Ambient noise adjoint tomography for a linear array in North China

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Yao, H.; Liu, Q.; Yuan, Y. O.; Zhang, P.; Feng, J.; Fang, L.

    2017-12-01

    Ambient noise tomography based on dispersion data and ray theory has been widely utilized for imaging crustal structures. In order to improve the inversion accuracy, ambient noise tomography based on the 3D adjoint approach or full waveform inversion has been developed recently; however, the computational cost is tremendous. In this study we present 2D ambient noise adjoint tomography for a linear array in north China with significant computational efficiency compared to 3D ambient noise adjoint tomography. During the preprocessing, we first convert the observed data in 3D media, i.e., surface-wave empirical Green's functions (EGFs) from ambient noise cross-correlation, to the reconstructed EGFs in 2D media using a 3D/2D transformation scheme. Different from the conventional steps of measuring phase dispersion, the 2D adjoint tomography refines 2D shear wave speeds along the profile directly from the reconstructed Rayleigh wave EGFs in the period band 6-35s. With the 2D initial model extracted from the 3D model from traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime misfits between the reconstructed EGFs and synthetic Green's functions (SGFs) in 2D media generated by the spectral-element method (SEM), with a preconditioned conjugate gradient method. The multitaper traveltime difference measurement is applied in four period bands during the inversion: 20-35s, 15-30s, 10-20s and 6-15s. The recovered model shows more detailed crustal structures with a pronounced low-velocity anomaly in the mid-lower crust beneath the junction of Taihang Mountains and Yin-Yan Mountains compared with the initial model. This low velocity structure may imply possible intense crust-mantle interactions, probably associated with the magmatic underplating during the Mesozoic to Cenozoic evolution of the region. To our knowledge, it is the first time that ambient noise adjoint tomography has been implemented in 2D media. Considering the intensive computational cost and storage of 3D adjoint tomography, this 2D ambient noise adjoint tomography has potential advantages for obtaining high-resolution 2D crustal structures with limited computational resources.

  6. Nonlinear electronic excitations in crystalline solids using meta-generalized gradient approximation and hybrid functional in time-dependent density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sato, Shunsuke A.; Taniguchi, Yasutaka

    2015-12-14

    We develop methods to calculate electron dynamics in crystalline solids in real-time time-dependent density functional theory employing exchange-correlation potentials which reproduce band gap energies of dielectrics: a meta-generalized gradient approximation proposed by Tran and Blaha [Phys. Rev. Lett. 102, 226401 (2009)] (TB-mBJ) and a hybrid functional proposed by Heyd, Scuseria, and Ernzerhof [J. Chem. Phys. 118, 8207 (2003)] (HSE). In time evolution calculations employing the TB-mBJ potential, we have found it necessary to adopt the predictor-corrector step for a stable time evolution. We have developed a method to evaluate electronic excitation energy without referring to the energy functional, which is unknown for the TB-mBJ potential. For the HSE functional, we have developed a method for the operation of the Fock-like term in Fourier space to facilitate efficient use of massive parallel computers equipped with graphic processing units. We compare electronic excitations in silicon and germanium induced by femtosecond laser pulses using the TB-mBJ, HSE, and a simple local density approximation (LDA). At low laser intensities, electronic excitations are found to be sensitive to the band gap energy: they are close to each other using TB-mBJ and HSE and are much smaller in LDA. At high laser intensities close to the damage threshold, electronic excitation energies do not differ much among the three cases.

  7. Vibrational analysis and quantum chemical calculations of 2,2'-bipyridine Zinc(II) halide complexes

    NASA Astrophysics Data System (ADS)

    Ozel, Aysen E.; Kecel, Serda; Akyuz, Sevim

    2007-05-01

    In this study the molecular structure and vibrational spectra of Zn(2,2'-bipyridine)X2 (X = Cl and Br) complexes were studied in their ground states by computational vibrational study and scaled quantum mechanical (SQM) analysis. The geometry optimization, vibrational wavenumber and intensity calculations of free and coordinated 2,2'-bipyridine were carried out with the Gaussian03 program package by using Hartree-Fock (HF) and Density Functional Theory (DFT) with B3LYP functional and 6-31G(d,p) basis set. The total energy distributions (TED) of the vibrational modes were calculated by using Scaled Quantum Mechanical (SQM) analysis. Fundamentals were characterised by their total energy distributions. Coordination sensitive modes of 2,2'-bipyridine were determined.

  8. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  10. New approach to evaluate rotation of cervical vertebrae

    NASA Astrophysics Data System (ADS)

    Hahn, Matthias

    2001-07-01

    Functional deficits after whiplash injury can be analyzed with a relatively novel radiologic method by examining joint blocks at C0/1 and C1/2. For this, the mobility of C0, C1 and C2 is determined with three spiral CT scans of the patient's cervical spine: one series in the neutral position and one each in maximal active lateral rotation to the right and to the left. Previous methods were slice based and time consuming when evaluated manually. We propose a new approach that computes these angles in 3D. After threshold segmentation of bone tissue, a rough 2D classification is performed for C0, C1 and C2 in each rotation series. The center of axial rotation of each vertebra is obtained by approximating its center of gravity. The rotation itself is estimated by cross-correlating the radial distance functions. These results initialize a 3D matching algorithm based on the sum of squared differences in intensity. The optimal match of the vertebrae is computed with the multidimensional Powell minimization algorithm; the three translational and three rotational components form a six-dimensional search space. Vertebra detection and rotation computation are fully automatic.
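
    The final registration step (six-parameter rigid transform, sum of squared intensity differences, Powell minimization) can be illustrated with a short sketch built on SciPy. This is a generic rigid-registration outline under assumed conventions (degrees and voxels for the parameters, rotation about the volume centre), not the authors' code.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def ssd_cost(params, fixed, moving):
    """Sum of squared intensity differences after a rigid transform
    (3 rotation angles in degrees + 3 translations in voxels)."""
    angles, shift = params[:3], params[3:]
    rot = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()
    centre = (np.array(moving.shape) - 1) / 2.0
    # affine_transform maps output coords to input coords: in = R @ out + offset
    offset = centre - rot @ centre - shift
    resampled = affine_transform(moving, rot, offset=offset, order=1)
    return np.sum((fixed - resampled) ** 2)

def register_vertebra(fixed, moving, init=None):
    """Six-parameter rigid match with Powell's derivative-free minimiser,
    initialised e.g. from the 2D radial-distance rotation estimate."""
    x0 = np.zeros(6) if init is None else np.asarray(init, dtype=float)
    result = minimize(ssd_cost, x0, args=(fixed, moving), method="Powell")
    return result.x  # (rx, ry, rz, tx, ty, tz)
```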

  11. Wave height data assimilation using non-stationary kriging

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; Egozcue, J. J.; Sáchez-Arcilla, A.; Gómez, J.

    2011-03-01

    Data assimilation into numerical models should be both computationally fast and physically meaningful, in order to be applicable in online environmental surveillance. We present a way to improve assimilation for computationally intensive models, based on non-stationary kriging and a separable space-time covariance function. The method is illustrated with significant wave height data. The covariance function is expressed as a collection of fields: each one is obtained as the empirical covariance between the studied property (significant wave height in log-scale) at a pixel where a measurement is located (a wave-buoy is available) and the same parameter at every other pixel of the field. These covariances are computed from the available history of forecasts. The method provides a set of weights that can be mapped for each measuring location and that do not vary with time. The resulting weights may be used in a weighted average of the differences between the forecast and the measured parameter. In the case presented, these weights may show long-range connection patterns, such as between the Catalan coast and the eastern coast of Sardinia, associated with common prevailing meteo-oceanographic conditions. When such patterns are considered as non-informative of the present situation, it is always possible to diminish their influence by relaxing the covariance maps.
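
    The update step described above (precomputed, time-invariant weight maps applied to the innovations at the buoy locations) is simple enough to sketch directly. The array shapes and names below are assumptions for illustration; deriving the weight maps from the empirical covariances is the kriging step and is not shown.

```python
import numpy as np

def assimilate(forecast_field, buoy_values, buoy_pixels, weight_maps):
    """Correct a forecast field (e.g. log significant wave height) with a
    weighted spread of the innovations at the measurement locations.

    forecast_field : (ny, nx) model forecast
    buoy_values    : (nb,) observations at the buoys
    buoy_pixels    : list of (row, col) pixel indices of each buoy
    weight_maps    : (nb, ny, nx) precomputed, time-invariant weight fields
                     derived from the empirical space-time covariances
    """
    analysis = forecast_field.copy()
    for k, (i, j) in enumerate(buoy_pixels):
        innovation = buoy_values[k] - forecast_field[i, j]  # measured minus forecast
        analysis += weight_maps[k] * innovation             # spread the correction
    return analysis
```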

  12. Factors influencing hand/eye synchronicity in the computer age.

    PubMed

    Grant, A H

    1992-09-01

    In using a computer, the relation of vision to hand/finger actuated keyboard usage in performing fine motor-coordinated functions is influenced by the physical location, size, and collective placement of the keys. Traditional nonprehensile flat/rectangular keyboard applications usually require a high and nearly constant level of visual attention. Biometrically shaped keyboards would allow for prehensile hand-posturing, thus affording better tactile familiarity with the keys, requiring less intense and less constant level of visual attention to the task, and providing a greater measure of freedom from having to visualize the key(s). Workpace and related physiological changes, aging, onset of monocularization (intermittent lapsing of binocularity for near vision) that accompanies presbyopia, tool colors, and background contrast are factors affecting constancy of visual attention to task performance. Capitas extension, excessive excyclotorsion, and repetitive strain injuries (such as carpal tunnel syndrome) are common and debilitating concomitants to computer usage. These problems can be remedied by improved keyboard design. The salutary role of mnemonics in minimizing visual dependency is discussed.

  13. Computational Intelligence for Medical Imaging Simulations.

    PubMed

    Chang, Victor

    2017-11-25

    This paper describes how to simulate medical imaging by computational intelligence to explore areas that cannot easily be addressed in traditional ways, including gene and protein simulations related to cancer development and immunity. This paper presents simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6 with their outputs and explanations, as well as brain segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is very similar to digital surface theories, simulating how biological units assemble into larger units, up to the formation of the entire biological subject. The M-Fusion and M-Update functions of the fusion algorithm achieve good performance, processing and visualizing up to 40 GB of data within 600 s. We conclude that computational intelligence can support effective and efficient healthcare research through simulation and visualization.

  14. Impedance computations and beam-based measurements: A problem of discrepancy

    NASA Astrophysics Data System (ADS)

    Smaluk, Victor

    2018-04-01

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  15. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of systems of numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and the near optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
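
    A minimal sketch of the scheduling idea (simulated annealing over assignments of task-graph vertices to processors) is shown below. The cost function here is a crude stand-in that mixes per-processor load and cross-processor traffic; the paper's nonanalytical cost function additionally accounts for topology and congestion, which is not modelled here.

```python
import math
import random

def schedule(tasks, comm, n_procs, n_iter=20000, t0=1.0, alpha=0.999):
    """Map task-graph vertices to processors by simulated annealing.

    tasks : dict {task: compute_cost}
    comm  : dict {(task_a, task_b): communication_cost} for graph edges
    """
    assign = {t: random.randrange(n_procs) for t in tasks}

    def cost(a):
        load = [0.0] * n_procs
        for t, w in tasks.items():
            load[a[t]] += w
        traffic = sum(c for (u, v), c in comm.items() if a[u] != a[v])
        return max(load) + traffic        # simplified makespan + communication

    current, temp = cost(assign), t0
    for _ in range(n_iter):
        t = random.choice(list(tasks))
        old = assign[t]
        assign[t] = random.randrange(n_procs)
        new = cost(assign)
        # Metropolis acceptance rule: always accept improvements,
        # accept degradations with probability exp(-delta / temperature)
        if new <= current or random.random() < math.exp((current - new) / temp):
            current = new
        else:
            assign[t] = old               # reject the move
        temp *= alpha                     # geometric cooling
    return assign, current
```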

  16. Unsteady thermal blooming of intense laser beams

    NASA Astrophysics Data System (ADS)

    Ulrich, J. T.; Ulrich, P. B.

    1980-01-01

    A four dimensional (three space plus time) computer program has been written to compute the nonlinear heating of a gas by an intense laser beam. Unsteady, transient cases are capable of solution and no assumption of a steady state need be made. The transient results are shown to asymptotically approach the steady-state results calculated by the standard three dimensional thermal blooming computer codes. The report discusses the physics of the laser-absorber interaction, the numerical approximation used, and comparisons with experimental data. A flowchart is supplied in the appendix to the report.

  17. MSFC crack growth analysis computer program, version 2 (users manual)

    NASA Technical Reports Server (NTRS)

    Creager, M.

    1976-01-01

    An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.

  18. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2017-08-01

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  19. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  20. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
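
    The recipe described above can be made concrete for the simplest case of Poisson counting statistics: fix the detection threshold from the Type I error on the background-only distribution, then find the smallest source intensity whose detection probability at that threshold reaches 1 - beta. The Poisson counting setup and parameter values are assumptions for illustration, not the paper's general prescription.

```python
from scipy.stats import poisson

def detection_threshold(background, alpha=0.05):
    """Smallest count n* with P(N >= n* | background only) <= alpha (Type I error)."""
    n = 0
    while poisson.sf(n - 1, background) > alpha:   # sf(n-1) = P(N >= n)
        n += 1
    return n

def upper_limit(background, alpha=0.05, beta=0.5, s_max=100.0, ds=0.01):
    """Minimum source intensity detected at the n* threshold with probability
    >= 1 - beta (Type II error beta); this defines the upper limit."""
    n_star = detection_threshold(background, alpha)
    s = ds
    while s < s_max:
        # detection probability: P(N >= n*) for a Poisson mean of background + s
        if poisson.sf(n_star - 1, background + s) >= 1.0 - beta:
            return s
        s += ds
    return None   # not detectable below s_max at the requested power

# Example: background of 3 counts, 5% false-positive rate, 50% power
print(detection_threshold(3.0, 0.05), upper_limit(3.0, 0.05, 0.5))
```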

  1. Electronic Circular Dichroism of [16]Helicene With Simplified TD-DFT: Beyond the Single Structure Approach.

    PubMed

    Bannwarth, Christoph; Seibert, Jakob; Grimme, Stefan

    2016-05-01

    The electronic circular dichroism (ECD) spectrum of the recently synthesized [16]helicene and a derivative comprising two triisopropylsilyloxy protection groups was computed by means of the very efficient simplified time-dependent density functional theory (sTD-DFT) approach. Different from many previous ECD studies of helicenes, nonequilibrium structure effects were accounted for by computing ECD spectra on "snapshots" obtained from a molecular dynamics (MD) simulation including solvent molecules. The trajectories are based on a molecule specific classical potential as obtained from the recently developed quantum chemically derived force field (QMDFF) scheme. The reduced computational cost in the MD simulation due to the use of the QMDFF (compared to ab-initio MD) as well as the sTD-DFT approach make realistic spectral simulations feasible for these compounds that comprise more than 100 atoms. While the ECD spectra of [16]helicene and its derivative computed vertically on the respective gas-phase equilibrium geometries show noticeable differences, these are washed out when nonequilibrium structures are taken into account. The computed spectra with two recommended density functionals (ωB97X and BHLYP) and extended basis sets compare very well with the experimental one. In addition we provide an estimate for the missing absolute intensities of the latter. The approach presented here could also be used in future studies to capture nonequilibrium effects, but also to systematically average ECD spectra over different conformations in more flexible molecules. Chirality 28:365-369, 2016. © 2016 Wiley Periodicals, Inc.

  2. An updated climatology of explosive cyclones using alternative measures of cyclone intensity

    NASA Astrophysics Data System (ADS)

    Hanley, J.; Caballero, R.

    2009-04-01

    Using a novel cyclone tracking and identification method, we compute a climatology of explosively intensifying cyclones or 'bombs' using the ERA-40 and ERA-Interim datasets. Traditionally, 'bombs' have been identified using a central pressure deepening rate criterion (Sanders and Gyakum, 1980). We investigate alternative methods of capturing such extreme cyclones. These methods include using the maximum wind contained within the cyclone, and using a potential vorticity column measure within such systems, as a measure of intensity. Using the different measures of cyclone intensity, we construct and intercompare maps of peak cyclone intensity. We also compute peak intensity probability distributions, and assess the evidence for the bi-modal distribution found by Roebber (1984). Finally, we address the question of the relationship between storm intensification rate and storm destructiveness: are 'bombs' the most destructive storms?
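
    For context, the classical deepening-rate criterion cited above normalizes the central pressure fall by latitude: one "bergeron" corresponds to 24 hPa in 24 h at 60 degrees, scaled by sin(lat)/sin(60 deg), and a fall of at least one bergeron marks explosive deepening. A minimal sketch (function names are hypothetical):

```python
import math

def bergerons(delta_p_hpa, hours, latitude_deg):
    """Normalised deepening rate: one bergeron = 24 hPa in 24 h at 60 deg
    latitude, scaled by sin(latitude)/sin(60 deg)."""
    rate = (delta_p_hpa / hours) * 24.0                     # hPa per 24 h
    reference = 24.0 * math.sin(math.radians(latitude_deg)) / math.sin(math.radians(60.0))
    return rate / reference

def is_bomb(delta_p_hpa, hours, latitude_deg):
    """Explosively deepening ('bomb') at one bergeron or more."""
    return bergerons(delta_p_hpa, hours, latitude_deg) >= 1.0

# Example: a 30 hPa fall over 24 h at 45 N is roughly 1.5 bergerons
print(bergerons(30.0, 24.0, 45.0), is_bomb(30.0, 24.0, 45.0))
```

    The alternative intensity measures discussed in the abstract (maximum wind, potential vorticity column) would replace the pressure-based rate in such a classification.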

  3. Biological 2-Input Decoder Circuit in Human Cells

    PubMed Central

    2015-01-01

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells and the characterization by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions. PMID:24694115

  4. Biological 2-input decoder circuit in human cells.

    PubMed

    Guinn, Michael; Bleris, Leonidas

    2014-08-15

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells and the characterization by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions.

  5. Cyber-Physical Trade-Offs in Distributed Detection Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.

    2010-01-01

    We consider a network of sensors that measure the scalar intensity due to the background or a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background or due to sensor errors or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors, which are then combined at the fusion center, for example using the majority rule. With increased communication and computation costs, we show that a more complex fusion algorithm based on measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios and a minimum packing number for the state-space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable for two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
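
    The contrast drawn above, between cheap hard-decision fusion and a costlier fusion of raw measurements, can be illustrated with a toy Gaussian example. This is not the paper's algorithm; the sensor model, per-sensor source means, and thresholds are assumptions chosen only to show the two fusion levels.

```python
import numpy as np

def majority_rule(measurements, local_threshold):
    """Conventional scheme: each sensor makes a binary decision locally and
    the fusion centre declares a source if more than half vote yes."""
    votes = measurements > local_threshold
    return votes.sum() > len(measurements) / 2

def measurement_fusion(measurements, source_means, background_mean, sigma):
    """Measurement-level fusion: a likelihood-ratio test on the raw sensor
    readings, assuming Gaussian noise and known per-sensor source means
    (e.g. from an assumed intensity fall-off with distance)."""
    # log-likelihoods of "source present" vs "background only"
    ll_source = -0.5 * np.sum(((measurements - source_means) / sigma) ** 2)
    ll_background = -0.5 * np.sum(((measurements - background_mean) / sigma) ** 2)
    return (ll_source - ll_background) > 0.0   # threshold 0 = equal priors
```

    The trade-off studied in the paper is exactly the extra communication needed to ship the raw `measurements` to the fusion centre versus the detection performance gained over the one-bit votes.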

  6. Integral-moment analysis of the BATSE gamma-ray burst intensity distribution

    NASA Technical Reports Server (NTRS)

    Horack, John M.; Emslie, A. Gordon

    1994-01-01

    We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V_max. If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides for the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
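
    Purely as an illustration of the moment idea (and not the paper's estimator or its analytic uncertainties), the low-order moments of a sample of burst peak intensities and rough bootstrap errors could be computed as follows:

```python
import numpy as np

def intensity_moments(peak_fluxes, n_boot=2000, seed=0):
    """Mean, variance and skewness of an intensity sample, with bootstrap
    standard errors as a stand-in for analytically propagated uncertainties."""
    rng = np.random.default_rng(seed)
    x = np.asarray(peak_fluxes, dtype=float)

    def stats(sample):
        m = sample.mean()
        v = sample.var(ddof=1)
        skew = np.mean((sample - m) ** 3) / v ** 1.5
        return np.array([m, v, skew])

    point = stats(x)
    boot = np.array([stats(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    return point, boot.std(axis=0)   # estimates and their standard errors
```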

  7. Anharmonic Effects on Vibrational Spectra Intensities: Infrared, Raman, Vibrational Circular Dichroism and Raman Optical Activity

    PubMed Central

    Bloino, Julien; Biczysko, Malgorzata; Barone, Vincenzo

    2017-01-01

    The aim of this paper is twofold. First, we want to report the extension of our virtual multifrequency spectrometer (VMS) to anharmonic intensities for Raman Optical Activity (ROA) with the full inclusion of first- and second-order resonances for both frequencies and intensities in the framework of the generalized second-order vibrational perturbation theory (GVPT2) for all kinds of vibrational spectroscopies. Then, from a more general point of view, we want to present and validate the performance of VMS for the parallel analysis of different vibrational spectra for medium-sized molecules (IR, Raman, VCD, ROA) including both mechanical and electric/magnetic anharmonicity. For the well-known methyloxirane benchmark, careful selection of density functional, basis set, and resonance thresholds permitted a direct qualitative and quantitative comparison between experimental and computed band positions and shapes. Next, the whole series of halogenated azetidinones is analyzed, showing that it is now possible to interpret different spectra in terms of electronegativity, polarizability, and hindrance variation between closely related substituents, chiral spectroscopies being particularly effective in this connection. PMID:26580121

  8. Scaling and efficiency of PRISM in adaptive simulations of turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonse, Shaheen R.; Bell, J.B.; Brown, N.J.

    1999-12-01

    The dominant computational cost in modeling turbulent combustion phenomena numerically with high fidelity chemical mechanisms is the time required to solve the ordinary differential equations associated with chemical kinetics. One approach to reducing that computational cost is to develop an inexpensive surrogate model that accurately represents evolution of chemical kinetics. One such approach, PRISM, develops a polynomial representation of the chemistry evolution in a local region of chemical composition space. This representation is then stored for later use. As the computation proceeds, the chemistry evolution for other points within the same region is computed by evaluating these polynomials instead of calling an ordinary differential equation solver. If initial data for advancing the chemistry is encountered that is not in any region for which a polynomial is defined, the methodology dynamically samples that region and constructs a new representation for that region. The utility of this approach is determined by the size of the regions over which the representation provides a good approximation to the kinetics and the number of these regions that are necessary to model the subset of composition space that is active during a simulation. In this paper, we assess the PRISM methodology in the context of a turbulent premixed flame in two dimensions. We consider a range of turbulent intensities ranging from weak turbulence that has little effect on the flame to strong turbulence that tears pockets of burning fluid from the main flame. For each case, we explore a range of sizes for the local regions and determine the scaling behavior as a function of region size and turbulent intensity.
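
    The caching pattern described above (pay for full ODE integrations only on the first visit to a region of composition space, then reuse a locally fitted polynomial) can be sketched as below. This is a much-simplified stand-in: it partitions composition space into fixed hypercubes and fits only a linear model by least squares, whereas PRISM's region construction, sampling, and polynomial order are more careful.

```python
import numpy as np

class PrismLikeCache:
    """Cache of local polynomial surrogates for the chemistry advance
    x(t) -> x(t + dt), keyed by hypercube regions of composition space."""

    def __init__(self, integrate, region_size, n_samples=50):
        self.integrate = integrate      # expensive ODE advance: x -> x_next
        self.h = region_size            # edge length of a region
        self.n_samples = n_samples
        self.models = {}                # region index tuple -> fitted coefficients

    def _region(self, x):
        return tuple(np.floor(x / self.h).astype(int))

    def _build(self, key, x):
        # sample the region around x and fit a linear model x_next ~ [1, x] @ coeffs
        samples = x + (np.random.rand(self.n_samples, x.size) - 0.5) * self.h
        targets = np.array([self.integrate(s) for s in samples])
        design = np.hstack([np.ones((self.n_samples, 1)), samples])
        coeffs, *_ = np.linalg.lstsq(design, targets, rcond=None)
        self.models[key] = coeffs

    def advance(self, x):
        key = self._region(x)
        if key not in self.models:      # first visit: pay the full cost once
            self._build(key, x)
        return np.hstack([1.0, x]) @ self.models[key]
```

    The scaling questions studied in the paper map onto `region_size` (how large a region still approximates the kinetics well) and onto how many cache entries a given turbulent intensity forces the simulation to create.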

  9. Computational Methods for Frictional Contact With Applications to the Space Shuttle Orbiter Nose-Gear Tire

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  10. Computational methods for frictional contact with applications to the Space Shuttle orbiter nose-gear tire: Comparisons of experimental measurements and analytical predictions

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical-loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  11. A monitor for the laboratory evaluation of control integrity in digital control systems operating in harsh electromagnetic environments

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe

    1992-01-01

    This paper presents a strategy for dynamically monitoring digital controllers in the laboratory for susceptibility to electromagnetic disturbances that compromise control integrity. The integrity of digital control systems operating in harsh electromagnetic environments can be compromised by upsets caused by induced transient electrical signals. Digital system upset is a functional error mode that involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. The motivation for this work is the need to develop tools and techniques that can be used in the laboratory to validate and/or certify critical aircraft controllers operating in electromagnetically adverse environments that result from lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The detection strategy presented in this paper provides dynamic monitoring of a given control computer for degraded functional integrity resulting from redundancy management errors, control calculation errors, and control correctness/effectiveness errors. In particular, this paper discusses the use of Kalman filtering, data fusion, and statistical decision theory in monitoring a given digital controller for control calculation errors.
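
    One element of the monitoring strategy named above, Kalman filtering, lends itself to a short sketch: run a filter on the nominal controller/plant model and flag samples where the normalized innovation exceeds a chi-square-style bound. The state-space matrices, noise covariances, and threshold below are hypothetical placeholders; the paper's full monitor also involves data fusion and statistical decision logic not shown here.

```python
import numpy as np

def monitor_residuals(y, u, A, B, C, Q, R, threshold):
    """Flag possible upsets by testing the Kalman-filter innovation at each
    sample against a fixed bound.

    y : (T, p) measured outputs    u : (T, m) commanded inputs
    A, B, C : nominal state-space model;  Q, R : process / measurement noise
    """
    n = A.shape[0]
    x = np.zeros(n)
    P = np.eye(n)
    flags = []
    for k in range(len(y)):
        # predict
        x = A @ x + B @ u[k]
        P = A @ P @ A.T + Q
        # innovation and its covariance
        nu = y[k] - C @ x
        S = C @ P @ C.T + R
        d2 = float(nu @ np.linalg.solve(S, nu))   # normalised innovation squared
        flags.append(d2 > threshold)              # e.g. a chi-square tail value
        # update
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ nu
        P = (np.eye(n) - K @ C) @ P
    return np.array(flags)
```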

  12. Big data driven cycle time parallel prediction for production planning in wafer manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, Junliang; Yang, Jungang; Zhang, Jie; Wang, Xiaoxi; Zhang, Wenjun Chris

    2018-07-01

    Cycle time forecasting (CTF) is one of the most crucial issues for production planning to keep high delivery reliability in semiconductor wafer fabrication systems (SWFS). This paper proposes a novel data-intensive cycle time (CT) prediction system with parallel computing to rapidly forecast the CT of wafer lots with large datasets. First, a density peak based radial basis function network (DP-RBFN) is designed to forecast the CT with the diverse and agglomerative CT data. Second, the network learning method based on a clustering technique is proposed to determine the density peak. Third, a parallel computing approach for network training is proposed in order to speed up the training process with large-scale CT data. Finally, an experiment with respect to SWFS is presented, which demonstrates that the proposed CTF system can not only speed up the training process of the model but also outperform the radial basis function network, the back-propagation network and multivariate-regression-based CTF methods in terms of the mean absolute deviation and standard deviation.
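
    As background for the RBF-network part of the approach, a generic prediction stage looks like the sketch below, with the centres assumed to be supplied by some clustering step (for DP-RBFN, density peaks of the cycle-time data). This is a textbook RBF fit by least squares, not the paper's learning method or its parallel training scheme.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian radial basis activations for each sample/centre pair."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centres, width):
    """Fit the linear output weights by least squares, given fixed centres."""
    Phi = rbf_design(X, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centres, width, w):
    """Predicted cycle times for new lots described by feature rows X."""
    return rbf_design(X, centres, width) @ w
```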

  13. The magnetic particle in a box: Analytic and micromagnetic analysis of probe-localized spin wave modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adur, Rohan; Du, Chunhui; Manuilov, Sergei A.

    2015-05-07

    The dipole field from a probe magnet can be used to localize a discrete spectrum of standing spin wave modes in a continuous ferromagnetic thin film without lithographic modification to the film. Obtaining the resonance field for a localized mode is not trivial due to the effect of the confined and inhomogeneous magnetization precession. We compare the results of micromagnetic and analytic methods to find the resonance field of localized modes in a ferromagnetic thin film, and investigate the accuracy of these methods by comparing with a numerical minimization technique that assumes Bessel function modes with pinned boundary conditions. We find that the micromagnetic technique, while computationally more intensive, reveals that the true magnetization profiles of localized modes are similar to Bessel functions with gradually decaying dynamic magnetization at the mode edges. We also find that an analytic solution, which is simple to implement and computationally much faster than other methods, accurately describes the resonance field of localized modes when exchange fields are negligible, demonstrating the accessibility of localized-mode analysis.

  14. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  15. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  16. Orbital breathing effects in the computation of x-ray d -ion spectra in solids by ab initio wave-function-based methods

    DOE PAGES

    Bogdanov, Nikolay A.; Bisogni, Valentina; Kraus, Roberto; ...

    2016-11-21

    In existing theoretical approaches to core-level excitations of transition-metal ions in solids, relaxation and polarization effects due to the inner core hole are often ignored or described phenomenologically. Here, we set up an ab initio computational scheme that explicitly accounts for such physics in the calculation of x-ray absorption and resonant inelastic x-ray scattering spectra. Good agreement is found with experimental transition-metal L-edge data for the strongly correlated d^9 cuprate Li2CuO2, for which we also determine the absolute scattering intensities. The newly developed methodology opens the way for the investigation of even more complex d^n electronic structures of group VI B to VIII B correlated oxide compounds.

  17. Experimental investigation and numerical modelling of positive corona discharge: ozone generation

    NASA Astrophysics Data System (ADS)

    Yanallah, K; Pontiga, F; Fernández-Rueda, A; Castellanos, A

    2009-03-01

    The spatial distribution of the species generated in a wire-cylinder positive corona discharge in pure oxygen has been computed using a plasma chemistry model that includes the most significant reactions between electrons, ions, atoms and molecules. The plasma chemistry model is included in the continuity equations of each species, which are coupled with Poisson's equation for the electric field and the energy conservation equation for the gas temperature. The current-voltage characteristic measured in the experiments has been used as input data for the numerical simulation. The numerical model is able to reproduce the basic structure of the positive corona discharge and highlights the importance of Joule heating on ozone generation. The average ozone density has been computed as a function of current intensity and compared with the experimental measurements of ozone concentration determined by UV absorption spectroscopy.

  18. Laser pulse induced multi-exciton dynamics in molecular systems

    NASA Astrophysics Data System (ADS)

    Wang, Luxia; May, Volkhard

    2018-03-01

    Ultrafast optical excitation of an arrangement of identical molecules is analyzed theoretically. The computations are particularly dedicated to molecules where the excitation energy into the second excited singlet state E(S2) - E(S0) is larger than twice the excitation energy into the first excited singlet state E(S1) - E(S0). Then, exciton-exciton annihilation is diminished and resonant and intensive excitation may simultaneously move different molecules into their first excited singlet state |S1>. To describe the temporal evolution of the thus created multi-exciton state a direct computation of the related wave function is circumvented. Instead, we derive equations of motion for expectation values formed by different arrangements of single-molecule transition operators |S1><S0|. First simulation results are presented and the approximate treatment suggested recently in Phys. Rev. B 94, 195413 (2016) is evaluated.

  19. Cryptanalysis and security enhancement of optical cryptography based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yao, Jianbin; Liu, Xuemei; Zhou, Xin; Li, Zhongyang

    2016-04-01

    Optical cryptography based on computational ghost imaging (CGI) has attracted much attention from researchers because it encrypts plaintext into a random intensity vector rather than a complex-valued function. This promising feature of CGI-based cryptography reduces the amount of data to be transmitted and stored and therefore brings convenience in practice. However, we find that this cryptography is vulnerable to chosen-plaintext attack because of the linear relationship between the input and output of the encryption system, and three feasible strategies are proposed in this paper to break it. Although a large number of plaintexts need to be chosen in these attack methods, the cryptography nonetheless carries security risks. To avoid these attacks, a security enhancement method utilizing an invertible matrix modulation is further discussed and its feasibility is verified by numerical simulations.

  20. Statistical and temporal irradiance fluctuations modeling for a ground-to-geostationary satellite optical link.

    PubMed

    Camboulives, A-R; Velluet, M-T; Poulenard, S; Saint-Antonin, L; Michau, V

    2018-02-01

    The performance of an optical communication link between the ground and a geostationary satellite can be impaired by scintillation, beam wandering, and beam spreading caused by propagation through atmospheric turbulence. These effects on link performance can be mitigated by tracking and by error correction codes coupled with interleaving. Precise numerical tools capable of describing the irradiance fluctuations statistically and of creating an irradiance time series are needed to characterize the benefits of these techniques and to optimize them. Wave-optics propagation methods have proven capable of modeling the effects of atmospheric turbulence on a beam, but they are known to be computationally intensive. We present an analytical-numerical model that provides good results for the probability density functions of irradiance fluctuations, as well as a time series, with a substantial saving of time and computational resources.

  1. Competitive and cooperative arm rehabilitation games played by a patient and unimpaired person: effects on motivation and exercise intensity.

    PubMed

    Goršič, Maja; Cikajlo, Imre; Novak, Domen

    2017-03-23

    People with chronic arm impairment should exercise intensely to regain their abilities, but frequently lack motivation, leading to poor rehabilitation outcome. One promising way to increase motivation is through interpersonal rehabilitation games, which allow patients to compete or cooperate together with other people. However, such games have mainly been evaluated with unimpaired subjects, and little is known about how they affect motivation and exercise intensity in people with chronic arm impairment. We designed four different arm rehabilitation games that are played by a person with arm impairment and their unimpaired friend, relative or occupational therapist. One is a competitive game (both people compete against each other), two are cooperative games (both people work together against the computer) and one is a single-player game (played only by the impaired person against the computer). The games were played by 29 participants with chronic arm impairment, of which 19 were accompanied by their friend or relative and 10 were accompanied by their occupational therapist. Each participant played all four games within a single session. Participants' subjective experience was quantified using the Intrinsic Motivation Inventory questionnaire after each game, as well as a final questionnaire about game preferences. Their exercise intensity was quantified using wearable inertial sensors that measured hand velocity in each game. Of the 29 impaired participants, 12 chose the competitive game as their favorite, 12 chose a cooperative game, and 5 preferred to exercise alone. Participants who chose the competitive game as their favorite showed increased motivation and exercise intensity in that game compared to other games. Participants who chose a cooperative game as their favorite also showed increased motivation in cooperative games, but not increased exercise intensity. Since both motivation and intensity are positively correlated with rehabilitation outcome, competitive games have high potential to lead to functional improvement and increased quality of life for patients compared to conventional rehabilitation exercises. Cooperative games do not increase exercise intensity, but could still increase motivation of patients who do not enjoy competition. However, such games need to be tested in longer, multisession studies to determine whether the observed increases in motivation and exercise intensity persist over a longer period of time and whether they positively affect rehabilitation outcome. The study is not a clinical trial. While human subjects are involved, they participate in a single-session evaluation of a rehabilitation game rather than a full rehabilitation intervention, and no health outcomes are examined.

  2. Calculation of stress intensity factors in an isotropic multicracked plate: Part 2: Symbolic/numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Binienda, W. K.; Tan, H. Q.; Xu, M. H.

    1992-01-01

    Analytical derivations of stress intensity factors (SIF's) of a multicracked plate can be complex and tedious. Recent advances, however, in intelligent application of symbolic computation can overcome these difficulties and provide the means to rigorously and efficiently analyze this class of problems. Here, the symbolic algorithm required to implement the methodology described in Part 1 is presented. The special problem-oriented symbolic functions to derive the fundamental kernels are described, and the associated automatically generated FORTRAN subroutines are given. As a result, a symbolic/FORTRAN package named SYMFRAC, capable of providing accurate SIF's at each crack tip, was developed and validated. Simple illustrative examples using SYMFRAC show the potential of the present approach for predicting the macrocrack propagation path due to existing microcracks in the vicinity of a macrocrack tip, when the influence of the microcrack's location, orientation, size, and interaction are taken into account.

  3. Simultaneous detection of iodine and iodide on boron doped diamond electrodes.

    PubMed

    Fierro, Stéphane; Comninellis, Christos; Einaga, Yasuaki

    2013-01-15

    Individual and simultaneous electrochemical detection of iodide and iodine has been performed via cyclic voltammetry on boron doped diamond (BDD) electrodes in a 1M NaClO(4) (pH 8) solution, representative of typical environmental water conditions. It is feasible to compute accurate calibration curves for both compounds using cyclic voltammetry measurements by determining the peak current intensities as a function of concentration. A lower detection limit of about 20 μM was obtained for iodide and 10 μM for iodine. Based on the comparison between the peak current intensities reported during the oxidation of KI, it is probable that iodide (I(-)) is first oxidized in a single step to yield iodine (I(2)). The latter is further oxidized to obtain IO(3)(-). This technique, however, did not allow for a reasonably accurate detection of iodate (IO(3)(-)) on a BDD electrode. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Social, Organizational, and Contextual Characteristics of Clinical Decision Support Systems for Intensive Insulin Therapy: A Literature Review and Case Study

    PubMed Central

    Campion, Thomas R.; Waitman, Lemuel R.; May, Addison K.; Ozdas, Asli; Lorenzi, Nancy M.; Gadd, Cynthia S.

    2009-01-01

    Introduction: Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. Results: This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Discussion: Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. Conclusion: This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. PMID:19815452

  5. Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; García, Sebastián Gimeno

    2013-05-01

    Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
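
    As an illustration of the combine-and-integrate steps described above (not the Py4CAtS interface itself), a minimal Python sketch of a layer-by-layer transmission and radiance computation might look as follows; the cross sections, column densities, and temperatures are placeholder values.

        import numpy as np

        # Minimal layer-by-layer transmission/radiance sketch (illustrative only,
        # not the Py4CAtS API). Layer index 0 is the top of the atmosphere.
        H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

        def planck(nu_hz, temp_k):
            """Planck radiance B_nu(T) in W m^-2 sr^-1 Hz^-1."""
            return 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (KB * temp_k))

        def upwelling_radiance(nu_hz, xsec, columns, temps, t_surface):
            """xsec: (layers, nu) cross sections [m^2]; columns: molecules per m^2 per layer."""
            dtau = xsec * columns[:, None]                 # per-layer optical depth
            tau_above = np.cumsum(dtau, axis=0) - dtau     # opacity of the layers above layer i
            surface_term = planck(nu_hz, t_surface) * np.exp(-dtau.sum(axis=0))
            layer_term = planck(nu_hz, temps[:, None]) * (1 - np.exp(-dtau)) * np.exp(-tau_above)
            return surface_term + layer_term.sum(axis=0)

        # toy spectrum: 10 layers, 100 spectral points, one Gaussian "line" in the cross section
        nu = np.linspace(2.0e13, 3.0e13, 100)
        xs = 1e-25 * np.exp(-((nu - 2.5e13) / 2e12) ** 2)[None, :] * np.ones((10, 1))
        cols = np.full(10, 1e29)                           # column densities [m^-2]
        temps = np.linspace(220.0, 285.0, 10)              # top-to-bottom layer temperatures [K]
        print(upwelling_radiance(nu, xs, cols, temps, t_surface=290.0).max())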

  6. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity

    PubMed Central

    2013-01-01

    Background Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. Methods A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Results Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). Conclusions The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users. PMID:23915209

  7. Mn K-Edge X-ray Absorption Studies of Oxo- and Hydroxo-manganese(IV) Complexes: Experimental and Theoretical Insights into Pre-Edge Properties

    PubMed Central

    2015-01-01

    Mn K-edge X-ray absorption spectroscopy (XAS) was used to gain insights into the geometric and electronic structures of [MnII(Cl)2(Me2EBC)], [MnIV(OH)2(Me2EBC)]2+, and [MnIV(O)(OH)(Me2EBC)]+, which are all supported by the tetradentate, macrocyclic Me2EBC ligand (Me2EBC = 4,11-dimethyl-1,4,8,11-tetraazabicyclo[6.6.2]hexadecane). Analysis of extended X-ray absorption fine structure (EXAFS) data for [MnIV(O)(OH)(Me2EBC)]+ revealed Mn–O scatterers at 1.71 and 1.84 Å and Mn–N scatterers at 2.11 Å, providing the first unambiguous support for the formulation of this species as an oxohydroxomanganese(IV) adduct. EXAFS-determined structural parameters for [MnII(Cl)2(Me2EBC)] and [MnIV(OH)2(Me2EBC)]2+ are consistent with previously reported crystal structures. The Mn pre-edge energies and intensities of these complexes were examined within the context of data for other oxo- and hydroxomanganese(IV) adducts, and time-dependent density functional theory (TD-DFT) computations were used to predict pre-edge properties for all compounds considered. This combined experimental and computational analysis revealed a correlation between the Mn–O(H) distances and pre-edge peak areas of MnIV=O and MnIV–OH complexes, but this trend was strongly modulated by the MnIV coordination geometry. Mn 3d-4p mixing, which primarily accounts for the pre-edge intensities, is not solely a function of the Mn–O(H) bond length; the coordination geometry also has a large effect on the distribution of pre-edge intensity. For tetragonal MnIV=O centers, more than 90% of the pre-edge intensity comes from excitations to the Mn=O σ* MO. Trigonal bipyramidal oxomanganese(IV) centers likewise feature excitations to the Mn=O σ* molecular orbital (MO) but also show intense transitions to 3dx2–y2 and 3dxy MOs because of enhanced 3d-4px,y mixing. This gives rise to a broader pre-edge feature for trigonal MnIV=O adducts. These results underscore the importance of reporting experimental pre-edge areas rather than peak heights. Finally, the TD-DFT method was applied to understand the pre-edge properties of a recently reported S = 1 MnV=O adduct; these findings are discussed within the context of previous examinations of oxomanganese(V) complexes. PMID:24901026

  8. Mn K-edge X-ray absorption studies of oxo- and hydroxo-manganese(IV) complexes: experimental and theoretical insights into pre-edge properties.

    PubMed

    Leto, Domenick F; Jackson, Timothy A

    2014-06-16

    Mn K-edge X-ray absorption spectroscopy (XAS) was used to gain insights into the geometric and electronic structures of [Mn(II)(Cl)2(Me2EBC)], [Mn(IV)(OH)2(Me2EBC)](2+), and [Mn(IV)(O)(OH)(Me2EBC)](+), which are all supported by the tetradentate, macrocyclic Me2EBC ligand (Me2EBC = 4,11-dimethyl-1,4,8,11-tetraazabicyclo[6.6.2]hexadecane). Analysis of extended X-ray absorption fine structure (EXAFS) data for [Mn(IV)(O)(OH)(Me2EBC)](+) revealed Mn-O scatterers at 1.71 and 1.84 Å and Mn-N scatterers at 2.11 Å, providing the first unambiguous support for the formulation of this species as an oxohydroxomanganese(IV) adduct. EXAFS-determined structural parameters for [Mn(II)(Cl)2(Me2EBC)] and [Mn(IV)(OH)2(Me2EBC)](2+) are consistent with previously reported crystal structures. The Mn pre-edge energies and intensities of these complexes were examined within the context of data for other oxo- and hydroxomanganese(IV) adducts, and time-dependent density functional theory (TD-DFT) computations were used to predict pre-edge properties for all compounds considered. This combined experimental and computational analysis revealed a correlation between the Mn-O(H) distances and pre-edge peak areas of Mn(IV)═O and Mn(IV)-OH complexes, but this trend was strongly modulated by the Mn(IV) coordination geometry. Mn 3d-4p mixing, which primarily accounts for the pre-edge intensities, is not solely a function of the Mn-O(H) bond length; the coordination geometry also has a large effect on the distribution of pre-edge intensity. For tetragonal Mn(IV)═O centers, more than 90% of the pre-edge intensity comes from excitations to the Mn═O σ* MO. Trigonal bipyramidal oxomanganese(IV) centers likewise feature excitations to the Mn═O σ* molecular orbital (MO) but also show intense transitions to 3dx(2)-y(2) and 3dxy MOs because of enhanced 3d-4px,y mixing. This gives rise to a broader pre-edge feature for trigonal Mn(IV)═O adducts. These results underscore the importance of reporting experimental pre-edge areas rather than peak heights. Finally, the TD-DFT method was applied to understand the pre-edge properties of a recently reported S = 1 Mn(V)═O adduct; these findings are discussed within the context of previous examinations of oxomanganese(V) complexes.

  9. Race, gender, and information technology use: the new digital divide.

    PubMed

    Jackson, Linda A; Zhao, Yong; Kolenic, Anthony; Fitzgerald, Hiram E; Harold, Rena; Von Eye, Alexander

    2008-08-01

    This research examined race and gender differences in the intensity and nature of IT use and whether IT use predicted academic performance. A sample of 515 children (172 African Americans and 343 Caucasian Americans), average age 12 years old, completed surveys as part of their participation in the Children and Technology Project. Findings indicated race and gender differences in the intensity of IT use; African American males were the least intense users of computers and the Internet, and African American females were the most intense users of the Internet. Males, regardless of race, were the most intense videogame players, and females, regardless of race, were the most intense cell phone users. IT use predicted children's academic performance. Length of time using computers and the Internet was a positive predictor of academic performance, whereas amount of time spent playing videogames was a negative predictor. Implications of the findings for bringing IT to African American males and bringing African American males to IT are discussed.

  10. IR Spectra of Different O2-Content Hemoglobin from Computational Study: Promising Detector of Hemoglobin Variant in Medical Diagnosis.

    PubMed

    Zhou, Su-Qin; Chen, Tu-Nan; Ji, Guang-Fu; Wang, En-Ren

    2017-06-01

    IR spectra of heme and of hemoglobin with different O2 content were studied by quantum computation at the molecular level and quantitatively characterized from 0 to 100 THz. The IR spectra of oxy-heme and de-oxy-heme differ clearly in the frequency regions of 9.08-9.48, 38.38-39.78, 50.46-50.82, and 89.04-91.00 THz. At 24.72 THz there is an absorption peak for oxy-heme but none for de-oxy-heme. Whether or not the heme contains an Fe-O-O bond has a strong influence on its IR spectrum and on the vibration intensities of functional groups in the mid-infrared region. The shape of the IR absorption peaks hardly changes with O2 content; however, three frequency regions show large changes in IR absorption intensity for O2-containing hemoglobin compared with de-oxy-hemoglobin: 11.08-15.93, 44.70-50.22, and 88.00-96.68 THz, respectively. The largest differences in IR intensity between hemoglobins of different O2 content all exceed 1.0 × 10⁴ L mol⁻¹ cm⁻¹. With increasing oxygen content, an absorption peak appears in the high-frequency region for O2-containing hemoglobin compared with de-oxy-hemoglobin; the higher the O2 content, the stronger this high-frequency absorption peak. The mid-infrared spectra of hemoglobin with different O2 content differ strongly enough that hemoglobin variants could readily be distinguished with an IR spectral detector. IR spectra of hemoglobin from quantum computation can thus provide a scientific basis for the specific identification of hemoglobin variants arising from different O2 contents in medical diagnosis.

  11. Information Technology in Critical Care: Review of Monitoring and Data Acquisition Systems for Patient Care and Research

    PubMed Central

    De Georgia, Michael A.; Kaffashi, Farhad; Jacono, Frank J.; Loparo, Kenneth A.

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes. PMID:25734185

  12. Information technology in critical care: review of monitoring and data acquisition systems for patient care and research.

    PubMed

    De Georgia, Michael A; Kaffashi, Farhad; Jacono, Frank J; Loparo, Kenneth A

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes.

  13. Effective equilibrium states in mixtures of active particles driven by colored noise

    NASA Astrophysics Data System (ADS)

    Wittmann, René; Brader, J. M.; Sharma, A.; Marconi, U. Marini Bettolo

    2018-01-01

    We consider the steady-state behavior of pairs of active particles having different persistence times and diffusivities. To this purpose we employ the active Ornstein-Uhlenbeck model, where the particles are driven by colored noises with exponential correlation functions whose intensities and correlation times vary from species to species. By extending Fox's theory to many components, we derive by functional calculus an approximate Fokker-Planck equation for the configurational distribution function of the system. After illustrating the predicted distribution in the solvable case of two particles interacting via a harmonic potential, we consider systems of particles repelling through inverse power-law potentials. We compare the analytic predictions to computer simulations for such soft-repulsive interactions in one dimension and show that at linear order in the persistence times the theory is satisfactory. This work provides the toolbox to qualitatively describe many-body phenomena, such as demixing and depletion, by means of effective pair potentials.
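
    As a rough illustration of the model class (not the authors' calculation), the Python sketch below integrates two one-dimensional active Ornstein-Uhlenbeck particles with species-dependent persistence times and noise intensities, coupled by a harmonic potential; all parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n_steps = 1e-3, 100_000
        tau = np.array([0.1, 1.0])        # persistence times of the two species
        D = np.array([1.0, 0.5])          # noise intensities (diffusivities)
        k = 1.0                           # harmonic coupling stiffness (assumed)

        x = np.zeros(2)                   # particle positions (overdamped, mobility = 1)
        eta = np.zeros(2)                 # colored Ornstein-Uhlenbeck noise variables
        traj = []

        for step in range(n_steps):
            force = -k * (x - x[::-1])    # harmonic interaction between the pair
            x = x + (force + eta) * dt
            # OU update giving correlation <eta_i(t) eta_i(0)> = (D_i / tau_i) exp(-|t| / tau_i)
            eta = eta - eta / tau * dt + np.sqrt(2.0 * D) / tau * np.sqrt(dt) * rng.standard_normal(2)
            if step % 100 == 0:
                traj.append(x.copy())

        traj = np.array(traj)
        print("variance of pair separation:", np.var(traj[:, 0] - traj[:, 1]))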

  14. Permutation methods for the structured exploratory data analysis (SEDA) of familial trait values.

    PubMed

    Karlin, S; Williams, P T

    1984-07-01

    A collection of functions that contrast familial trait values between and across generations is proposed for studying transmission effects and other collateral influences in nuclear families. Two classes of structured exploratory data analysis (SEDA) statistics are derived from ratios of these functions. SEDA-functionals are the empirical cumulative distributions of the ratio of the two contrasts computed within each family. SEDA-indices are formed by first averaging the numerator and denominator contrasts separately over the population and then forming their ratio. The significance of SEDA results is determined by a spectrum of permutation techniques that selectively shuffle the trait values across families. The process systematically alters certain family structure relationships while keeping other familial relationships intact. The methodology is applied to five data examples of plasma total cholesterol concentrations, reported height values, dermatoglyphic pattern intensity index scores, measurements of dopamine-beta-hydroxylase activity, and psychometric cognitive test results.

  15. Crystal structure optimisation using an auxiliary equation of state

    NASA Astrophysics Data System (ADS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.

  16. [Landscape classification: research progress and development trend].

    PubMed

    Liang, Fa-Chao; Liu, Li-Ming

    2011-06-01

    Landscape classification is the basis of research on landscape structure, process, and function, and also the prerequisite for landscape evaluation, planning, protection, and management, directly affecting the precision and practicability of landscape research. This paper reviews research progress on landscape classification systems, theory, and methodology, and summarizes the key problems and deficiencies of current research. Some major landscape classification systems, e.g., LANMAP and MUFIC, are introduced and discussed. It is suggested that a combined qualitative and quantitative classification, based on the concept of function, structure and form and on integrated consideration of classification utility, landscape function, landscape structure, physiogeographical factors, and human disturbance intensity, should be a major research direction in the future. The integration of mapping, 3S technology, quantitative mathematical modeling, computer artificial intelligence, and expert knowledge to enhance the precision of landscape classification would be the key issue and development trend in landscape classification research.

  17. Iterative pixelwise approach applied to computer-generated holograms and diffractive optical elements.

    PubMed

    Hsu, Wei-Feng; Lin, Shih-Chih

    2018-01-01

    This paper presents a novel approach to optimizing the design of phase-only computer-generated holograms (CGH) for the creation of binary images in an optical Fourier transform system. Optimization begins by selecting an image pixel with a temporal change in amplitude. The modulated image function undergoes an inverse Fourier transform followed by the imposition of a CGH constraint and the Fourier transform to yield an image function associated with the change in amplitude of the selected pixel. In iterations where the quality of the image is improved, that image function is adopted as the input for the next iteration. In cases where the image quality is not improved, the image function before the pixel changed is used as the input. Thus, the proposed approach is referred to as the pixelwise hybrid input-output (PHIO) algorithm. The PHIO algorithm was shown to achieve image quality far exceeding that of the Gerchberg-Saxton (GS) algorithm. The benefits were particularly evident when the PHIO algorithm was equipped with a dynamic range of image intensities equivalent to the amplitude freedom of the image signal. The signal variation of images reconstructed from the GS algorithm was 1.0223, but only 0.2537 when using PHIO, i.e., a 75% improvement. Nonetheless, the proposed scheme resulted in a 10% degradation in diffraction efficiency and signal-to-noise ratio.
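
    A highly simplified sketch of the pixelwise accept/reject idea, layered on a Gerchberg-Saxton-style projection, is given below; the perturbation size, quality metric, and target pattern are illustrative assumptions rather than the exact PHIO formulation.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 64
        target = np.zeros((N, N)); target[24:40, 24:40] = 1.0    # binary target image
        signal = target > 0

        def project(field_img):
            """Impose the phase-only CGH constraint and return the reconstructed image field."""
            hologram = np.exp(1j * np.angle(np.fft.ifft2(field_img)))   # keep phase only
            return np.fft.fft2(hologram)

        def quality(field_img):
            """Negative mean squared error over the signal window (higher is better)."""
            inten = np.abs(field_img) ** 2
            inten = inten / inten[signal].mean()
            return -np.mean((inten[signal] - 1.0) ** 2)

        field = target * np.exp(2j * np.pi * rng.random((N, N)))  # target with random phase
        best = project(field)
        best_q = quality(best)

        for it in range(2000):
            i, j = rng.integers(0, N, size=2)                    # select one image pixel
            trial = best.copy()
            trial[i, j] *= 1.0 + 0.2 * rng.standard_normal()     # perturb its amplitude
            cand = project(trial)                                # IFT -> constraint -> FT
            q = quality(cand)
            if q > best_q:                                       # keep only improving changes
                best, best_q = cand, q

        print("final signal-window error:", -best_q)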

  18. Receive Mode Analysis and Design of Microstrip Reflectarrays

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam

    2011-01-01

    Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. In the "receive mode design" technique, the reflection phase is computed for a plane wave incident on the reflectarray from the direction of the beam peak. In antenna applications with a single collimated beam, this method is extremely simple since all printed elements see the same angles of incidence. Thus the number of parameters is reduced by two when compared to the transmit mode design. The reflection phase computation as a function of five parameters in the rectangular patch array discussed previously is reduced to a computational problem with three parameters in the receive mode. Furthermore, if the beam peak is in the broadside direction, the receive mode design is polarization independent and the reflection phase computation is a function of two parameters only. For a square patch array, it is a function of the size, one parameter only, thus making it extremely simple.

  19. Ray tracing on the MPP

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.

  20. Functional Image-Guided Radiotherapy Planning in Respiratory-Gated Intensity-Modulated Radiotherapy for Lung Cancer Patients With Chronic Obstructive Pulmonary Disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Tomoki, E-mail: tkkimura@hiroshima-u.ac.jp; Nishibuchi, Ikuno; Murakami, Yuji

    2012-03-15

    Purpose: To investigate the incorporation of the functional lung image-derived low attenuation area (LAA), based on four-dimensional computed tomography (4D-CT), into respiratory-gated intensity-modulated radiotherapy (IMRT) or volumetric modulated arc therapy (VMAT) treatment planning for lung cancer patients with chronic obstructive pulmonary disease (COPD). Methods and Materials: Eight lung cancer patients with COPD were the subjects of this study. LAA was generated from 4D-CT data sets using a threshold of CT values below -860 Hounsfield units (HU). The functional lung image was defined as the area of the total lung with LAA excluded. Two respiratory-gated radiotherapy plans (70 Gy/35 fractions) were designed and compared in each patient as follows: Plan A was an anatomical IMRT or VMAT plan based on the total lung; Plan F was a functional IMRT or VMAT plan based on the functional lung. Dosimetric parameters (percentage of total lung volume irradiated with ≥20 Gy [V20], and mean dose of total lung [MLD]) of the two plans were compared. Results: V20 was lower in Plan F than in Plan A (mean 1.5%, p = 0.025 in IMRT; mean 1.6%, p = 0.044 in VMAT), achieved by a reduction in MLD (mean 0.23 Gy, p = 0.083 in IMRT; mean 0.5 Gy, p = 0.042 in VMAT). No differences were noted in target volume coverage or organ-at-risk doses. Conclusions: Functional IGRT planning based on LAA in respiratory-gated IMRT or VMAT appears to be effective in preserving functional lung in lung cancer patients with COPD.
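
    The LAA-based definition of the functional lung reduces to a simple threshold operation on the CT volume; a minimal Python sketch is shown below (the array names, toy volume, and the pre-existing lung mask are assumptions for illustration).

        import numpy as np

        def functional_lung_mask(ct_hu, lung_mask, threshold_hu=-860):
            """LAA = lung voxels below the HU threshold; functional lung = lung minus LAA."""
            laa = lung_mask & (ct_hu < threshold_hu)
            functional = lung_mask & ~laa
            return laa, functional

        # toy volume: 0 HU soft tissue, lung at about -800 HU, emphysematous pockets at -950 HU
        ct = np.zeros((8, 64, 64))
        lung = np.zeros_like(ct, dtype=bool); lung[:, 16:48, 16:48] = True
        ct[lung] = -800.0
        ct[:, 20:28, 20:28] = -950.0
        laa, func = functional_lung_mask(ct, lung)
        print("LAA fraction of lung:", laa.sum() / lung.sum())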

  1. Deformable registration of CT and cone-beam CT with local intensity matching.

    PubMed

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-07

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.
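
    The slice-wise intensity correction can be sketched with a plain CDF-based histogram matching in Python; this is a generic illustration of the technique under simplified assumptions, not the authors' GPU implementation or their registration loop.

        import numpy as np

        def match_histogram(source, reference):
            """Map source intensities so that their CDF matches the reference CDF."""
            s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
            r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
            s_cdf = np.cumsum(s_counts) / source.size
            r_cdf = np.cumsum(r_counts) / reference.size
            mapped = np.interp(s_cdf, r_cdf, r_vals)   # reference-space value for each source value
            return np.interp(source, s_vals, mapped)

        def correct_cbct(cbct_vol, ct_vol):
            """Apply the matching slice by slice along the axial direction."""
            return np.stack([match_histogram(cb, ct) for cb, ct in zip(cbct_vol, ct_vol)])

        # toy example: CBCT intensities shifted and scaled relative to CT
        rng = np.random.default_rng(0)
        ct = rng.normal(0.0, 1.0, (4, 32, 32))
        cbct = 0.7 * rng.normal(0.5, 1.3, (4, 32, 32))
        corrected = correct_cbct(cbct, ct)
        print(ct.mean(), cbct.mean(), corrected.mean())

    In the full method this per-slice correction would alternate with a deformable registration pass until the registered image converges.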

  2. Deformable registration of CT and cone-beam CT with local intensity matching

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-01

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.

  3. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…

  4. An optical flow-based method for velocity field of fluid flow estimation

    NASA Astrophysics Data System (ADS)

    Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz

    2017-06-01

    The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on the computation of partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low because an exact estimation of spatial derivatives is very difficult in the presence of rapid intensity changes in the PIV images, caused by particles having small diameters. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation and, more importantly, allows the derivatives to be evaluated at intermediate points between pixels. Numerical analysis shows that the method is able to estimate a separate vector for each particle with a 5 × 5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
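
    For reference, a minimal windowed Lucas-Kanade estimate (finite-difference gradients only; the Gaussian radial basis function interpolation of the paper is not reproduced) can be written as follows; the synthetic blob pair is an assumption used only to exercise the code.

        import numpy as np

        def lucas_kanade_window(frame0, frame1):
            """Return the least-squares (u, v) displacement for one interrogation window."""
            Ix = np.gradient(frame0, axis=1)          # spatial derivative along x (columns)
            Iy = np.gradient(frame0, axis=0)          # spatial derivative along y (rows)
            It = frame1 - frame0                      # temporal derivative
            A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
            b = -It.ravel()                           # brightness constancy: Ix*u + Iy*v + It = 0
            (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
            return u, v

        # toy test: shift a smooth blob by (0.5, 0.3) pixels and recover the flow
        y, x = np.mgrid[0:16, 0:16]
        blob = lambda dx, dy: np.exp(-((x - 8 - dx) ** 2 + (y - 8 - dy) ** 2) / 8.0)
        print(lucas_kanade_window(blob(0, 0), blob(0.5, 0.3)))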

  5. Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.

    PubMed

    Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R

    2013-11-01

    A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.

  6. Twisting Anderson pseudospins with light: Quench dynamics in terahertz-pumped BCS superconductors

    NASA Astrophysics Data System (ADS)

    Chou, Yang-Zhi; Liao, Yunxiang; Foster, Matthew S.

    2017-03-01

    We study the preparation (pump) and the detection (probe) of far-from-equilibrium BCS superconductor dynamics in THz pump-probe experiments. In a recent experiment [R. Matsunaga, Y. I. Hamada, K. Makise, Y. Uzawa, H. Terai, Z. Wang, and R. Shimano, Phys. Rev. Lett. 111, 057002 (2013), 10.1103/PhysRevLett.111.057002], an intense monocycle THz pulse with center frequency ω ≃Δ was injected into a superconductor with BCS gap Δ ; the subsequent postpump evolution was detected via the optical conductivity. It was argued that nonlinear coupling of the pump to the Anderson pseudospins of the superconductor induces coherent dynamics of the Higgs (amplitude) mode Δ (t ) . We validate this picture in a two-dimensional BCS model with a combination of exact numerics and the Lax reduction method, and we compute the nonequilibrium phase diagram as a function of the pump intensity. The main effect of the pump is to scramble the orientations of Anderson pseudospins along the Fermi surface by twisting them in the x y plane. We show that more intense pump pulses can induce a far-from-equilibrium phase of gapless superconductivity ("phase I"), originally predicted in the context of interaction quenches in ultracold atoms. We show that the THz pump method can reach phase I at much lower energy densities than an interaction quench, and we demonstrate that Lax reduction (tied to the integrability of the BCS Hamiltonian) provides a general quantitative tool for computing coherent BCS dynamics. We also calculate the Mattis-Bardeen optical conductivity for the nonequilibrium states discussed here.

  7. Objective straylight assessment of the human eye with a novel device

    NASA Astrophysics Data System (ADS)

    Schramm, Stefan; Schikowski, Patrick; Lerm, Elena; Kaeding, André; Klemm, Matthias; Haueisen, Jens; Baumgarten, Daniel

    2016-03-01

    Forward scattered light from the anterior segment of the human eye can be measured by Shack-Hartmann (SH) wavefront aberrometers with limited visual angle. We propose a novel Point Spread Function (PSF) reconstruction algorithm based on SH measurements with a novel measurement device to overcome these limitations. In our optical setup, we use a Digital Mirror Device as a variable field stop, which is conventionally a pinhole suppressing scatter and reflections. Images with 21 different stop diameters were captured, and from each image the average subaperture image intensity and the average intensity of the pupil were computed. The 21 intensities represent integral values of the PSF, which is consequently reconstructed by differentiation with respect to the visual angle. A generalized form of the Stiles-Holladay approximation is fitted to the PSF, resulting in a stray light parameter Log(IS). Additionally, the transmission loss of the eye is computed. For the proof of principle, a study on 13 healthy young volunteers was carried out. Scatter filters were positioned in front of the volunteer's eye during C-Quant and scatter measurements to generate straylight emulating scatter in the lens. The straylight parameter is compared to the C-Quant measurement parameter Log(ISC) and the scatter density of the filters SDF with a partial correlation. Log(IS) shows significant correlation with the SDF and Log(ISC). The correlation is more prominent between Log(IS) combined with the transmission loss and the SDF and Log(ISC). Our novel measurement and reconstruction technique allows for objective stray light analysis of visual angles up to 4 degrees.
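
    The reconstruction step, recovering a radially symmetric PSF by differentiating the stop-integrated intensities with respect to visual angle, can be sketched as below; the synthetic encircled-intensity curve and the wing-averaging used for the straylight parameter are assumptions for illustration.

        import numpy as np

        theta = np.linspace(0.1, 4.0, 21)                    # stop radii (visual angle, degrees)
        s_true = 0.05                                         # coefficient of a toy wing PSF ~ s / theta^2
        encircled = 2.0 * np.pi * s_true * np.log(theta / theta[0])   # synthetic integrated intensities

        dE_dtheta = np.gradient(encircled, theta)             # derivative w.r.t. visual angle
        psf = dE_dtheta / (2.0 * np.pi * theta)                # radial PSF, since dE = 2*pi*theta*PSF*dtheta

        # Stiles-Holladay-style parameter: log10 of theta^2 * PSF averaged over the wing
        log_is = np.log10(np.mean(theta[5:] ** 2 * psf[5:]))
        print("recovered Log(IS):", round(log_is, 3), "expected:", round(np.log10(s_true), 3))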

  8. An animal-to-human scaling law for blast-induced traumatic brain injury risk assessment.

    PubMed

    Jean, Aurélie; Nyein, Michelle K; Zheng, James Q; Moore, David F; Joannopoulos, John D; Radovitzky, Raúl

    2014-10-28

    Despite recent efforts to understand blast effects on the human brain, there are still no widely accepted injury criteria for humans. Recent animal studies have resulted in important advances in the understanding of brain injury due to intense dynamic loads. However, the applicability of animal brain injury results to humans remains uncertain. Here, we use advanced computational models to derive a scaling law relating blast wave intensity to the mechanical response of brain tissue across species. Detailed simulations of blast effects on the brain are conducted for different mammals using image-based biofidelic models. The intensity of the stress waves computed for different external blast conditions is compared across species. It is found that mass scaling, which successfully estimates blast tolerance of the thorax, fails to capture the brain mechanical response to blast across mammals. Instead, we show that an appropriate scaling variable must account for the mass of protective tissues relative to the brain, as well as their acoustic impedance. Peak stresses transmitted to the brain tissue by the blast are then shown to be a power function of the scaling parameter for a range of blast conditions relevant to TBI. In particular, it is found that human brain vulnerability to blast is higher than for any other mammalian species, which is in distinct contrast to previously proposed scaling laws based on body or brain mass. An application of the scaling law to recent experiments on rabbits furnishes the first physics-based injury estimate for blast-induced TBI in humans.

  9. The Application Design of Solar Radio Spectrometer Based on FPGA

    NASA Astrophysics Data System (ADS)

    Du, Q. F.; Chen, R. J.; Zhao, Y. C.; Feng, S. W.; Chen, Y.; Song, Y.

    2017-10-01

    The solar radio spectrometer is a key instrument for observing solar radio emission. Computer software controls an FPGA-based A/D signal acquisition card to acquire large volumes of data, which are transferred over a PCI-E interface. The program implements timed data collection, retrieval of data from specific time intervals, and real-time control of the acquisition hardware; it can also plot the solar radio power intensity. Experiments verify the reliability of the solar radio spectrometer and show that the instrument simplifies the operation of observing the Sun.

  10. Virtual reality applications for motor rehabilitation after stroke.

    PubMed

    Sisto, Sue Ann; Forrest, Gail F; Glendinning, Diana

    2002-01-01

    Hemiparesis is the primary physical impairment underlying functional disability after stroke. A goal of rehabilitation is to enhance motor skill acquisition, which is a direct result of practice. However, frequency and duration of practice are limited in rehabilitation. Virtual reality (VR) is a computer technology that simulates real-life learning while providing augmented feedback and increased frequency, duration, and intensity of practiced tasks. The rate and extent of relearning of motor tasks could affect the duration, effectiveness, and cost of patient care. The purpose of this article is to review the use of VR training for motor rehabilitation after stroke.

  11. Optimum satellite relay positions with application to a TDRS-1 Indian Ocean relay

    NASA Technical Reports Server (NTRS)

    Jackson, A. H.; Christopher, P.

    1994-01-01

    An Indian Ocean satellite relay is examined. The relay satellite position is optimized by minimizing the sum of downlink and satellite-to-satellite link losses. Osculating orbital elements are used for fast intensive orbital computation. Integrated Van Vleck gaseous attenuation and a Crane rain model are used for downlink attenuation. Circular polarization losses on the satellite-to-satellite link are found dynamically. Space-to-ground link antenna pointing losses are included as a function of yaw and spacecraft limits. Relay satellite positions between 90 and 100 degrees East are found attractive for further study.

  12. Chaotic oscillations and noise transformations in a simple dissipative system with delayed feedback

    NASA Astrophysics Data System (ADS)

    Zverev, V. V.; Rubinstein, B. Ya.

    1991-04-01

    We analyze the statistical behavior of signals in nonlinear circuits with delayed feedback in the presence of external Markovian noise. For the special class of circuits with intense phase mixing we develop an approach for the computation of the probability distributions and multitime correlation functions based on the random phase approximation. Both Gaussian and Kubo-Andersen models of external noise statistics are analyzed and the existence of the stationary (asymptotic) random process in the long-time limit is shown. We demonstrate that a nonlinear system with chaotic behavior becomes a noise amplifier with specific statistical transformation properties.

  13. A multi-channel coronal spectrophotometer.

    NASA Technical Reports Server (NTRS)

    Landman, D. A.; Orrall, F. Q.; Zane, R.

    1973-01-01

    We describe a new multi-channel coronal spectrophotometer system, presently being installed at Mees Solar Observatory, Mount Haleakala, Maui. The apparatus is designed to record and interpret intensities from many sections of the visible and near-visible spectral regions simultaneously, with relatively high spatial and temporal resolution. The detector, a thermoelectrically cooled silicon vidicon camera tube, has its central target area divided into a rectangular array of about 100,000 pixels and is read out in a slow-scan (about 2 sec/frame) mode. Instrument functioning is entirely under PDP 11/45 computer control, and interfacing is via the CAMAC system.

  14. Image Edge Extraction via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)

    2008-01-01

    A computer-based technique for detecting edges in gray level digital images employs fuzzy reasoning to analyze whether each pixel in an image likely lies on an edge. The image is analyzed pixel by pixel by examining the gradient levels of pixels in a square window surrounding the pixel under analysis. An edge path passing through the pixel having the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray level value to the pixel that is related to the pixel's edginess degree.
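
    A toy version of the gradient-plus-membership idea is sketched below; the sigmoid membership function and its thresholds are assumptions standing in for the patented fuzzy singletons and inference rules.

        import numpy as np

        def edginess(image, low=5.0, high=30.0):
            """Map each pixel's gradient magnitude to a fuzzy 'edge' membership in [0, 1]."""
            gy, gx = np.gradient(image.astype(float))
            grad = np.hypot(gx, gy)
            mid, width = 0.5 * (low + high), 0.25 * (high - low)
            return 1.0 / (1.0 + np.exp(-(grad - mid) / width))   # assumed sigmoid membership

        img = np.zeros((32, 32))
        img[:, 16:] = 100.0                     # vertical step edge
        print(edginess(img)[16, 14:19].round(2))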

  15. Rehabilitation of hand in subacute tetraplegic patients based on brain computer interface and functional electrical stimulation: a randomised pilot study

    NASA Astrophysics Data System (ADS)

    Osuagwu, Bethel C. A.; Wallace, Leslie; Fraser, Mathew; Vuckovic, Aleksandra

    2016-12-01

    Objective. To compare neurological and functional outcomes between two groups of hospitalised patients with subacute tetraplegia. Approach. Seven patients received 20 sessions of brain computer interface (BCI) controlled functional electrical stimulation (FES), while five patients received the same number of sessions of passive FES for both hands. The neurological assessment measures were event-related desynchronization (ERD) during attempted movement and somatosensory evoked potentials (SSEP) of the ulnar and median nerves; assessment of hand function involved the range of motion (ROM) of the wrist and a manual muscle test. Main results. Patients in both groups initially had intense ERD during attempted movement that was not restricted to the sensory-motor cortex. Following the treatment, ERD cortical activity was restored towards that of able-bodied people in the BCI-FES group only, remaining widespread in the FES group. Likewise, SSEP returned in 3 patients in the BCI-FES group, with no changes in the FES group. The ROM of the wrist improved in both groups. Muscle strength significantly improved for both hands in the BCI-FES group; in the FES group, a significant improvement was noticed for the right hand flexor muscles only. Significance. Combined BCI-FES therapy results in better neurological recovery and greater improvement in muscle strength than FES alone. For spinal cord injured patients, BCI-FES should be considered a therapeutic tool rather than solely a long-term assistive device for the restoration of lost function.

  16. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
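
    The convolve-then-deconvolve photometry test can be illustrated with a compact Fourier-domain sketch; the Gaussian transfer functions, the regularized division, and the synthetic "granulation" image are assumptions, not the speckle transfer function models used in the paper.

        import numpy as np

        def gaussian_otf(n, width):
            """Toy rotationally symmetric transfer function on the FFT frequency grid."""
            f = np.fft.fftfreq(n)
            fx, fy = np.meshgrid(f, f)
            return np.exp(-(fx ** 2 + fy ** 2) / (2.0 * width ** 2))

        rng = np.random.default_rng(2)
        truth = 1.0 + 0.05 * rng.standard_normal((128, 128))   # stand-in for granulation contrast

        otf_true = gaussian_otf(128, 0.08)                      # assumed atmosphere + AO transfer function
        otf_model = gaussian_otf(128, 0.085)                    # slightly misestimated model STF

        blurred = np.fft.ifft2(np.fft.fft2(truth) * otf_true).real
        eps = 1e-3                                              # regularization against division by ~0
        recon = np.fft.ifft2(np.fft.fft2(blurred) * otf_model.conj()
                             / (np.abs(otf_model) ** 2 + eps)).real

        photometric_error = np.std(recon - truth) / truth.mean()
        print("rms photometric error: {:.2%}".format(photometric_error))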

  17. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

    We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10⁵ total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
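
    A minimal NumPy sketch of the dimension-reduction idea described above: exponential modes spanning an assumed physiological range are compressed with the SVD, convolved with a measured input function, and fitted by the Moore-Penrose pseudoinverse. All grids, ranges and the synthetic curves are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    t = np.linspace(0, 60, 121)                 # minutes, frame mid-times (assumed)
    k_grid = np.logspace(-3, 0, 200)            # assumed physiological range of rate constants (1/min)
    modes = np.exp(-np.outer(k_grid, t))        # exponential modes of first-order compartment systems

    # SVD: a handful of right singular vectors span all modes to high accuracy
    U, s, Vt = np.linalg.svd(modes, full_matrices=False)
    n_basis = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
    ortho = Vt[:n_basis]                        # orthogonal temporal basis functions

    # reduced basis = convolution of the orthogonal set with a measured input function
    Cp = np.exp(-0.1 * t) * t                   # placeholder input function
    dt = t[1] - t[0]
    basis = np.array([np.convolve(b, Cp)[:t.size] * dt for b in ortho])

    # least-squares coefficients for a (synthetic) tissue curve via the pseudoinverse
    rng = np.random.default_rng(0)
    true_coeffs = rng.normal(size=n_basis)
    tissue = basis.T @ true_coeffs + 0.01 * rng.normal(size=t.size)
    coeffs = np.linalg.pinv(basis.T) @ tissue
    ```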

  18. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are other routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey-level histogram of an image; and display of the variation of grey-level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1 Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  19. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jun

    During this project period our group has been working with ANL collaborators on bridging the gap between parallel file systems and local file systems. We visited Dr. Robert Ross's group at Argonne National Lab for one week in summer 2007, reviewed our project progress, and planned activities for the coming years 2008-09. The PI met Dr. Robert Ross several times, for example at the HEC FSIO workshop 2008, SC08, and SC10. We explored opportunities to develop a production system by evolving our current prototype (SOGP+PVFS) into a new PVFS version, and we delivered the SOGP+PVFS codes to the ANL PVFS2 group in 2008. We also discussed a potential project on developing new parallel programming models and runtime systems for data-intensive scalable computing (DISC); the methodology is to evolve MPI towards DISC by incorporating some functions of Google's MapReduce parallel programming model. More recently, we have been jointly exploring how to leverage existing work to perform (1) coordination/aggregation of local I/O operations prior to movement over the WAN, (2) efficient bulk data movement over the WAN, and (3) latency-hiding techniques for latency-intensive operations. Since 2009 we have been applying Hadoop/MapReduce to some HEC applications with LANL scientists John Bent and Salman Habib. Another ongoing effort is improving checkpoint performance at the I/O forwarding layer for the Roadrunner supercomputer with James Nunez and Gary Grider at LANL. Two senior undergraduates from our research group did summer internships on high-performance file and storage system projects at LANL for three consecutive years starting in 2008. Both are now pursuing Ph.D. degrees in our group, will be in their fourth year of the PhD program in Fall 2011, and will go to LANL to advance the two above-mentioned efforts during the winter break. Since 2009 we have been collaborating with several computer scientists (Gary Grider, John Bent, Parks Fields, James Nunez, Hsing-Bung Chen, etc.) from HPC-5 and with James Ahrens from the Advanced Computing Laboratory at Los Alamos National Laboratory. We hold weekly conference and/or video meetings to advance work on two fronts: the hardware/software infrastructure for building large-scale data-intensive clusters, and research publications. Our group members assist in constructing several onsite LANL data-intensive clusters, and the two parties have been developing software codes and research papers together using both sides' resources.

  1. Optimization of equivalent uniform dose using the L-curve criterion.

    PubMed

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-10-07

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
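
    A toy sketch of the regularization machinery described above, applied to a linear least-squares surrogate rather than the actual EUD-based objective: a smoothness (first-difference) penalty on the beam intensities is added and the regularization parameter is picked at the corner of the log-log L-curve. Matrix sizes and noise levels are illustrative assumptions.

    ```python
    import numpy as np

    def tikhonov_solution(A, d, L, lam):
        """Minimize ||A x - d||^2 + lam * ||L x||^2 (smoothness-regularized least squares)."""
        return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ d)

    def l_curve_corner(A, d, L, lams):
        """Pick lambda at the corner (maximum curvature) of the log-log L-curve."""
        rho, eta = [], []
        for lam in lams:
            x = tikhonov_solution(A, d, L, lam)
            rho.append(np.log(np.linalg.norm(A @ x - d)))    # data-misfit branch
            eta.append(np.log(np.linalg.norm(L @ x)))        # smoothness branch
        rho, eta = np.array(rho), np.array(eta)
        drho, deta = np.gradient(rho), np.gradient(eta)
        d2rho, d2eta = np.gradient(drho), np.gradient(deta)
        kappa = (drho * d2eta - d2rho * deta) / (drho**2 + deta**2) ** 1.5
        return lams[np.argmax(kappa)]

    # toy usage: smooth beamlet intensities x mapped to voxel doses by a random matrix A
    rng = np.random.default_rng(0)
    A = rng.random((80, 40))
    x_true = np.sin(np.linspace(0, np.pi, 40))
    d = A @ x_true + 0.05 * rng.standard_normal(80)
    L = np.diff(np.eye(40), axis=0)                          # first-difference smoothness operator
    lam_best = l_curve_corner(A, d, L, np.logspace(-4, 2, 30))
    ```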

  2. Effectiveness of speech language therapy either alone or with add-on computer-based language therapy software (Malayalam version) for early post stroke aphasia: A feasibility study.

    PubMed

    Kesav, Praveen; Vrinda, S L; Sukumaran, Sajith; Sarma, P S; Sylaja, P N

    2017-09-15

    This study aimed to assess the feasibility of professional-based conventional speech language therapy (SLT) either alone (Group A/less intensive) or assisted by novel computer-based local language software (Group B/more intensive) for rehabilitation in early post stroke aphasia. The setting was the Comprehensive Stroke Care Center of a tertiary health care institute in South India; the study design was a prospective, open, randomised controlled trial with blinded endpoint evaluation. This study recruited 24 right-handed patients above 15 years of age with a first-ever acute ischemic stroke affecting the middle cerebral artery territory, within 90 days of stroke onset and with a baseline Western Aphasia Battery (WAB) Aphasia Quotient (AQ) score of <93.8, between September 2013 and January 2016. The recruited subjects were block randomised into either the Group A/less intensive or the Group B/more intensive therapy arm, in order to receive 12 therapy sessions of conventional professional-based SLT of 1 h each in both groups, with an additional 12 h of computer-based language therapy in Group B over 4 weeks on a thrice weekly basis, with a follow-up WAB performed at four and twelve weeks after baseline assessment. The trial was registered with the Clinical trials registry India [2016/08/0120121]. All the statistical analysis was carried out with IBM SPSS Statistics for Windows version 21. 20 subjects [14 (70%) males; mean age: 52.8 years ± SD 12.04] completed the study (9 in the less intensive and 11 in the more intensive arm). The mean four-week follow-up AQ showed a significant improvement from the baseline in the total group (p value: 0.01). The rate of rise of AQ from the baseline to four weeks follow-up (ΔAQ %) showed a significantly greater value for the less intensive treatment group as against the more intensive treatment group [155% (SD: 150; 95% CI: 34-275) versus 52% (SD: 42%; 95% CI: 24-80) respectively: p value: 0.053]. Even though the more intensive treatment arm incorporating combined professional-based SLT and computer software based training fared poorer than the less intensive therapy group, this study nevertheless reinforces the feasibility of SLT in augmenting recovery of early post stroke aphasia. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A new spherical model for computing the radiation field available for photolysis and heating at twilight

    NASA Technical Reports Server (NTRS)

    Dahlback, Arne; Stamnes, Knut

    1991-01-01

    Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution, based on a perturbation technique combined with the discrete ordinate method, is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane-parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane-parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane-parallel approximation overestimates the mean intensity.
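
    The key geometric point, that the direct-beam attenuation must be evaluated along a spherical-shell path once the solar zenith angle approaches and exceeds 90 deg, can be illustrated with a short numerical sketch; the exponential extinction profile and all constants below are illustrative assumptions, not the paper's atmosphere.

    ```python
    import numpy as np

    R_E = 6371.0     # km, Earth radius
    H = 7.0          # km, scale height of the extinction profile (assumed)
    BETA0 = 1e-2     # 1/km, extinction coefficient at the surface (assumed)
    Z_TOP = 100.0    # km, assumed top of atmosphere

    def tau_plane_parallel(z0, sza_deg):
        """Plane-parallel slant optical depth: vertical column times sec(SZA); fails at SZA >= 90 deg."""
        mu = np.cos(np.radians(sza_deg))
        vertical = BETA0 * H * np.exp(-z0 / H)
        return np.inf if mu <= 0 else vertical / mu

    def tau_spherical(z0, sza_deg, n=4000):
        """Slant optical depth integrated along a straight ray in spherical-shell geometry."""
        theta = np.radians(sza_deg)
        r0 = R_E + z0
        r_min = r0 * np.sin(theta) if sza_deg > 90 else r0   # closest approach for SZA > 90 deg
        if r_min < R_E:
            return np.inf                                    # Earth blocks the direct beam
        s_top = -r0 * np.cos(theta) + np.sqrt((R_E + Z_TOP) ** 2 - (r0 * np.sin(theta)) ** 2)
        s = np.linspace(0.0, s_top, n)
        r = np.sqrt(r0 ** 2 + s ** 2 + 2.0 * r0 * s * np.cos(theta))
        beta = BETA0 * np.exp(-(r - R_E) / H)
        return float(np.sum(0.5 * (beta[1:] + beta[:-1]) * np.diff(s)))   # trapezoidal integration

    for sza in (60, 85, 90, 92, 95):
        print(sza, tau_plane_parallel(20.0, sza), tau_spherical(20.0, sza))
    ```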

  4. Impedance computations and beam-based measurements: A problem of discrepancy

    DOE PAGES

    Smaluk, Victor

    2018-04-21

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  6. Statistical properties of the Strehl ratio as a function of pupil diameter and level of adaptive optics correction following atmospheric propagation.

    PubMed

    Shellan, Jeffrey B

    2004-08-01

    The propagation of an optical beam through atmospheric turbulence produces wave-front aberrations that can reduce the power incident on an illuminated target or degrade the image of a distant target. The purpose of the work described here was to determine by computer simulation the statistical properties of the normalized on-axis intensity, defined as (D/r0)² SR, as a function of D/r0 and the level of adaptive optics (AO) correction, where D is the telescope diameter, r0 is the Fried coherence diameter, and SR is the Strehl ratio. Plots were generated of (D/r0)² ⟨SR⟩ and σ_SR/⟨SR⟩, where ⟨SR⟩ and σ_SR are the mean and standard deviation, respectively, of the SR, versus D/r0 for a wide range of both modal and zonal AO correction. The level of modal correction was characterized by the number of Zernike radial modes that were corrected. The amount of zonal AO correction was quantified by the number of actuators on the deformable mirror and the resolution of the Hartmann wave-front sensor. These results can be used to determine the optimum telescope diameter, in units of r0, as a function of the AO design. For the zonal AO model, we found that maximum on-axis intensity was achieved when the telescope diameter was sized so that the actuator spacing was equal to approximately 2r0. For modal correction, we found that the optimum value of D/r0 (maximum mean on-axis intensity) was equal to 1.79 Nr + 2.86, where Nr is the highest Zernike radial mode corrected.
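
    The two sizing rules quoted in the abstract translate directly into code; the sketch below simply evaluates them (function names are hypothetical).

    ```python
    def optimal_D_over_r0_modal(n_radial):
        """Optimum telescope diameter in units of r0 for modal AO correcting Zernike
        radial orders up to n_radial (empirical fit quoted in the abstract)."""
        return 1.79 * n_radial + 2.86

    def optimal_D_zonal(n_actuators_across, r0):
        """Zonal AO rule of thumb from the abstract: actuator spacing of about 2*r0,
        so D ~ 2 * r0 * (number of actuators across the pupil)."""
        return 2.0 * r0 * n_actuators_across

    print(optimal_D_over_r0_modal(4))        # D/r0 ~ 10 when correcting 4 radial orders
    print(optimal_D_zonal(10, 0.1))          # metres, for r0 = 10 cm and 10 actuators across
    ```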

  7. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves high-accuracy brain tumor segmentation results.
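
    A minimal sketch of an LIF-style level-set iteration as summarized above: Gaussian-window local means define the local fitted image, the resulting data force evolves phi, and Gaussian smoothing of phi replaces re-initialization. This is a simplified illustration, not the authors' implementation; all parameter values are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lif_level_set(image, phi, sigma=3.0, reg_sigma=1.0, dt=0.1, n_iter=200, eps=1.0):
        """Evolve a level set phi with a local-image-fitting (LIF) style force.

        phi can be initialized, e.g., as a signed distance to an initial circle;
        the returned boolean mask (phi > 0) is the segmented region."""
        image = image.astype(float)
        for _ in range(n_iter):
            H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))      # smooth Heaviside
            w_in, w_out = gaussian_filter(H, sigma), gaussian_filter(1.0 - H, sigma)
            m1 = gaussian_filter(H * image, sigma) / (w_in + 1e-8)      # local mean inside
            m2 = gaussian_filter((1.0 - H) * image, sigma) / (w_out + 1e-8)
            lfi = m1 * H + m2 * (1.0 - H)                               # local fitted image
            delta = (eps / np.pi) / (eps ** 2 + phi ** 2)               # smooth Dirac delta
            phi = phi + dt * (image - lfi) * (m1 - m2) * delta          # LIF data force
            phi = gaussian_filter(phi, reg_sigma)                       # keep phi regular (no re-init)
        return phi > 0
    ```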

  8. Designing scalable product families by the radial basis function-high-dimensional model representation metamodelling technique

    NASA Astrophysics Data System (ADS)

    Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary

    2015-10-01

    Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.
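
    A minimal sketch of the metamodelling idea referred to above: an inexpensive RBF surrogate is built from a limited number of calls to an expensive black-box design function and then probed for variable sensitivity. The crude variance-based probe shown here is only a stand-in for the HDMR-based sensitivity analysis used in the paper; the black-box function and sample sizes are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def expensive_black_box(x):                      # placeholder for a costly simulation
        return np.sin(3 * x[:, 0]) + 0.1 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 1]

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(60, 2))             # sampled designs (few true function calls)
    y = expensive_black_box(X)
    surrogate = RBFInterpolator(X, y, kernel='thin_plate_spline')

    # crude sensitivity probe: output variance induced by randomizing one variable at a time,
    # evaluated on the cheap surrogate instead of the expensive function
    grid = rng.uniform(-1, 1, size=(2000, 2))
    base = surrogate(grid)
    for j in range(2):
        perturbed = grid.copy()
        perturbed[:, j] = rng.uniform(-1, 1, 2000)
        print(f"variable {j}: sensitivity ~ {np.var(base - surrogate(perturbed)):.3f}")
    ```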

  9. The Montage architecture for grid-enabled science processing of large, distributed datasets

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S .; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui

    2004-01-01

    Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and Space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive, and are stored in distributed archives that are, in most cases, remote from the available Computational resources. Therefore, state of the art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and Space science communities.

  10. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second-generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  11. [Mobile computing in anaesthesiology and intensive care medicine. The practical relevance of portable digital assistants].

    PubMed

    Pazhur, R J; Kutter, B; Georgieff, M; Schraag, S

    2003-06-01

    Portable digital assistants (PDAs) may be of value to the anaesthesiologist as developments in medical care move towards "bedside computing". Many different portable computers are currently available, and it is now possible for the physician to carry a mobile computer at all times. It is a database, reference book, patient-tracking aid, date planner, calculator, book, magazine, and much more in one mobile device. With the help of a PDA, information that is required for our work may be available at all times and everywhere at the point of care within seconds. In this overview, the possibilities for the use of PDAs in anaesthesia and intensive care medicine are discussed. Developments in other countries and possibilities for use, but also problems such as data security and network technology, are evaluated.

  12. Associations between psychosocial factors and pain intensity, physical functioning, and psychological functioning in patients with chronic pain: a cross-cultural comparison.

    PubMed

    Ferreira-Valente, Maria A; Pais-Ribeiro, José L; Jensen, Mark P

    2014-08-01

    Current models of chronic pain recognize that psychosocial factors influence pain and the effects of pain on daily life. The role of such factors has been widely studied on English-speaking individuals with chronic pain. It is possible that the associations between such factors and adjustment may be influenced by culture. This study sought to evaluate the importance of coping responses, self-efficacy beliefs, and social support to adjust to chronic pain in a sample of Portuguese patients, and discuss the findings with respect to their similarities and differences from findings of studies on English-speaking individuals. Measures of pain intensity and interference, physical and psychological functioning, coping responses, self-efficacy, and satisfaction with social support were administered to a sample of 324 Portuguese patients with chronic musculoskeletal pain. Univariate and multivariate analyses were computed. Findings were interpreted with respect to those from similar studies using English-speaking individuals. Coping responses and perceived social support were significantly associated with pain interference and both physical and psychological functioning; self-efficacy beliefs were significantly associated with all criterion variables. All coping responses, except for task persistence, were positively associated with pain interference and negatively associated with physical and psychological functioning, with the strongest associations found for catastrophizing, praying/hoping, guarding, resting, asking for assistance, and relaxation. The findings provide support for the importance of the psychosocial factors studied in terms of adjustment to chronic pain in Portuguese patients, and also suggest the possibility of some differences in the role of these factors due to culture.

  13. Elucidation of spin echo small angle neutron scattering correlation functions through model studies.

    PubMed

    Shew, Chwen-Yang; Chen, Wei-Ren

    2012-02-14

    Several single-modal Debye correlation functions to approximate part of the overall Debye correlation function of liquids are closely examined for elucidating their behavior in the corresponding spin echo small angle neutron scattering (SESANS) correlation functions. We find that the maximum length scale of a Debye correlation function is identical to that of its SESANS correlation function. For discrete Debye correlation functions, the peak of the SESANS correlation function emerges at their first discrete point, whereas for continuous Debye correlation functions with greater width, the peak position shifts to a greater value. In both cases, the intensity and shape of the peak of the SESANS correlation function are determined by the width of the Debye correlation functions. Furthermore, we mimic the intramolecular and intermolecular Debye correlation functions of liquids composed of interacting particles based on a simple model to elucidate their competition in the SESANS correlation function. Our calculations show that the first local minimum of a SESANS correlation function can be negative and positive. By adjusting the spatial distribution of the intermolecular Debye function in the model, the calculated SESANS spectra exhibit the profile consistent with that of hard-sphere and sticky-hard-sphere liquids predicted by more sophisticated liquid state theory and computer simulation. © 2012 American Institute of Physics.

  14. Liquid Structure of CO2-Reactive Aprotic Heterocyclic Anion Ionic Liquids from X-ray Scattering and Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheridan, Quintin R.; Oh, Seungmin; Morales-Collazo, Oscar

    2016-11-23

    A combination of X-ray scattering experiments and molecular dynamics simulations was used to investigate the structure of ionic liquids (ILs) which chemically bind CO2. The structure functions were measured and computed for four different ILs consisting of two different phosphonium cations, triethyloctylphosphonium ([P2228]+) and trihexyltetradecylphosphonium ([P66614]+), paired with two different aprotic heterocyclic anions which chemically react with CO2, 2-cyanopyrrolide and 1,2,4-triazolide. Simulations were able to reproduce the experimental structure functions, and by deconstructing the simulated structure functions, further information on the liquid structure was obtained. All structure functions of the ILs studied had three primary features which have been seen before in other ILs: a prepeak near 0.3-0.4 Å⁻¹ corresponding to polar/nonpolar domain alternation, a charge alternation peak near 0.8 Å⁻¹, and a peak near 1.5 Å⁻¹ due to interactions of adjacent molecules. The liquid structure functions were only mildly sensitive to the specific anion and to whether or not the anions had reacted with CO2. Upon reacting with CO2, small changes were observed in the structure functions of the [P2228]+ ILs, whereas virtually no change was observed in the corresponding [P66614]+ ILs. When the [P2228]+ cation was replaced with the [P66614]+ cation, there was a significant increase in the intensities of the prepeak and the adjacency interaction peak. While many of the liquid structure functions are similar, the actual liquid structures differ, as demonstrated by computed spatial distribution functions.
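
    For readers who want to relate simulation output to measured curves, a minimal sketch of the standard isotropic transform from a radial distribution function to a structure factor is given below; it ignores atomic form factors and the partial (ion-ion) decomposition used in the paper, so it is an illustration only.

    ```python
    import numpy as np

    def structure_factor(r, g_r, rho, q):
        """Isotropic structure factor from a radial distribution function g(r):
        S(q) = 1 + 4*pi*rho * integral of r^2 (g(r) - 1) sin(qr)/(qr) dr."""
        S = np.empty_like(q, dtype=float)
        for i, qi in enumerate(q):
            integrand = r ** 2 * (g_r - 1.0) * np.sinc(qi * r / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)
            S[i] = 1.0 + 4.0 * np.pi * rho * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
        return S
    ```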

  15. Social, organizational, and contextual characteristics of clinical decision support systems for intensive insulin therapy: a literature review and case study.

    PubMed

    Campion, Thomas R; Waitman, Lemuel R; May, Addison K; Ozdas, Asli; Lorenzi, Nancy M; Gadd, Cynthia S

    2010-01-01

    Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. Copyright (c) 2009. Published by Elsevier Ireland Ltd.

  16. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The exploit involves performing matrix calculations in nVidia graphic cards. The graphical processor unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model is used, called CUDA, to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies, to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphic cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
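
    The computational core that benefits from GPU acceleration is the repeated 2-D FFT inside the phase-retrieval loop. A plain NumPy sketch of the classic Gerchberg-Saxton iteration (the basis that MGS modifies), not the CUDA implementation, is shown below.

    ```python
    import numpy as np

    def gerchberg_saxton(pupil_amp, focal_intensity, n_iter=200, seed=0):
        """Recover the pupil-plane phase from a known pupil amplitude and a measured
        focal-plane intensity image by iterating FFTs between the two planes.

        `focal_intensity` is assumed to be arranged in unshifted FFT order."""
        rng = np.random.default_rng(seed)
        focal_amp = np.sqrt(focal_intensity)
        phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)       # random starting phase
        for _ in range(n_iter):
            focal_field = np.fft.fft2(pupil_amp * np.exp(1j * phase))
            focal_field = focal_amp * np.exp(1j * np.angle(focal_field))   # impose measured amplitude
            pupil_field = np.fft.ifft2(focal_field)
            phase = np.angle(pupil_field)                          # impose known pupil amplitude
        return phase
    ```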

  17. Etch depth mapping of phase binary computer-generated holograms by means of specular spectroscopic scatterometry

    NASA Astrophysics Data System (ADS)

    Korolkov, Victor P.; Konchenko, Alexander S.; Cherkashin, Vadim V.; Mironnikov, Nikolay G.; Poleshchuk, Alexander G.

    2013-09-01

    Detailed analysis of the etch depth map of binary phase computer-generated holograms intended for testing aspheric optics is a very important task. In particular, diffractive Fizeau null lenses need to be carefully tested for uniformity of etch depth. We offer a simplified version of the specular spectroscopic scatterometry method, based on the spectral properties of binary phase multi-order gratings. The intensity of the zero order is a periodic function of the wavenumber of the illuminating light, and the groove depth can be calculated because it is inversely proportional to this period. Measurement in reflection increases the phase depth of the grooves by a factor of 2 and allows shallow phase gratings to be measured more precisely. The measurement uncertainty is mainly defined by the following parameters: shifts of the spectrum maxima caused by tilted groove sidewalls, uncertainty in the measurement of the light incidence angle, and the spectrophotometer wavelength error. It is shown theoretically and experimentally that the method we describe can ensure 1% error. However, fiber spectrometers are more convenient for scanning measurements of large-area computer-generated holograms. Our experimental system for characterization of binary computer-generated holograms was developed using a fiber spectrometer.
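
    A minimal sketch of the depth-from-period idea: under the assumptions of a 50/50 duty cycle, near-normal incidence and measurement in reflection, the zero-order intensity varies as cos^2(2*pi*d*nu), so the dominant oscillation frequency in wavenumber equals twice the groove depth. The FFT-based estimator below is an illustration, not the authors' procedure.

    ```python
    import numpy as np

    def groove_depth_from_zero_order(wavenumber, zero_order_intensity):
        """Estimate binary-grating groove depth d from the zero-order reflectance measured
        versus wavenumber nu = 1/lambda (assumed uniformly sampled).

        Model assumption: I0(nu) ~ cos^2(2*pi*d*nu), i.e. an oscillation frequency of 2*d."""
        signal = zero_order_intensity - zero_order_intensity.mean()
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=wavenumber[1] - wavenumber[0])
        f_dom = freqs[np.argmax(spectrum[1:]) + 1]   # dominant oscillation frequency (units of length)
        return 0.5 * f_dom                           # groove depth
    ```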

  18. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied first to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade algorithm, we draw conclusions on the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we have found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
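
    A minimal sketch of the processing chain discussed above: Gaussian pre-filtering followed by a single-window Lucas-Kanade solve. The fixed sigma used here stands in for the intensity-dependent estimate proposed in the paper; the window size and other parameters are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lucas_kanade_window(prev, curr, y, x, half=7, sigma=1.5):
        """Estimate the (u, v) displacement at pixel (y, x) from two consecutive frames,
        after Gaussian pre-filtering, using a single Lucas-Kanade least-squares window."""
        prev = gaussian_filter(prev.astype(float), sigma)
        curr = gaussian_filter(curr.astype(float), sigma)
        Iy, Ix = np.gradient(prev)                          # spatial derivatives
        It = curr - prev                                    # temporal derivative
        sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
        A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
        b = -It[sl].ravel()
        flow, *_ = np.linalg.lstsq(A, b, rcond=None)        # solve Ix*u + Iy*v = -It in the window
        return flow
    ```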

  19. Light Intensity physical activity and sedentary behavior in relation to body mass index and grip strength in older adults: cross-sectional findings from the Lifestyle Interventions and Independence for Elders (LIFE) study.

    PubMed

    Bann, David; Hire, Don; Manini, Todd; Cooper, Rachel; Botoseneanu, Anda; McDermott, Mary M; Pahor, Marco; Glynn, Nancy W; Fielding, Roger; King, Abby C; Church, Timothy; Ambrosius, Walter T; Gill, Thomas M; Gill, Thomas

    2015-01-01

    Identifying modifiable determinants of fat mass and muscle strength in older adults is important given their impact on physical functioning and health. Light intensity physical activity and sedentary behavior are potential determinants, but their relations to these outcomes are poorly understood. We evaluated associations of light intensity physical activity and sedentary time-assessed both objectively and by self-report-with body mass index (BMI) and grip strength in a large sample of older adults. We used cross-sectional baseline data from 1130 participants of the Lifestyle Interventions and Independence for Elders (LIFE) study, a community-dwelling sample of relatively sedentary older adults (70-89 years) at heightened risk of mobility disability. Time spent sedentary and in light intensity activity were assessed using an accelerometer worn for 3-7 days (Actigraph GT3X) and by self-report. Associations between these exposures and measured BMI and grip strength were evaluated using linear regression. Greater time spent in light intensity activity and lower sedentary times were both associated with lower BMI. This was evident using objective measures of lower-light intensity, and both objective and self-reported measures of higher-light intensity activity. Time spent watching television was positively associated with BMI, while reading and computer use were not. Greater time spent in higher but not lower intensities of light activity (assessed objectively) was associated with greater grip strength in men but not women, while neither objectively assessed nor self-reported sedentary time was associated with grip strength. In this cross-sectional study, greater time spent in light intensity activity and lower sedentary times were associated with lower BMI. These results are consistent with the hypothesis that replacing sedentary activities with light intensity activities could lead to lower BMI levels and obesity prevalence among the population of older adults. However, longitudinal and experimental studies are needed to strengthen causal inferences.

  20. The relationship between loudness intensity functions and the click-ABR wave V latency.

    PubMed

    Serpanos, Y C; O'Malley, H; Gravel, J S

    1997-10-01

    To assess the relationship of loudness growth and the click-evoked auditory brain stem response (ABR) wave V latency-intensity function (LIF) in listeners with normal hearing or cochlear hearing loss. The effect of hearing loss configuration on the intensity functions was also examined. Behavioral and electrophysiological intensity functions were obtained using click stimuli of comparable intensities in listeners with normal hearing (Group I; n = 10), and cochlear hearing loss of flat (Group II; n = 10) or sloping (Group III; n = 10) configurations. Individual intensity functions were obtained from measures of loudness growth using the psychophysical methods of absolute magnitude estimation and production of loudness (geometrically averaged to provide the measured loudness function), and from the wave V latency measures of the ABR. Slope analyses for the behavioral and electrophysiological intensity functions were separately performed by group. The loudness growth functions for the groups with cochlear hearing loss approximated the normal function at high intensities, with overall slope values consistent with those reported from previous psychophysical research. The ABR wave V LIF for the group with a flat configuration of cochlear hearing loss approximated the normal function at high intensities, and was displaced parallel to the normal function for the group with sloping configuration. The relationship between the behavioral and electrophysiological intensity functions was examined at individual intensities across the range of the functions for each subject. A significant relationship was obtained between loudness and the ABR wave V LIFs for the groups with normal hearing and flat configuration of cochlear hearing loss; the association was not significant (p = 0.10) for the group with a sloping configuration of cochlear hearing loss. The results of this study established a relationship between loudness and the ABR wave V latency for listeners with normal hearing, and flat cochlear hearing loss. In listeners with a sloping configuration of cochlear hearing loss, the relationship was not significant. This suggests that the click-evoked ABR may be used to estimate loudness growth at least for individuals with normal hearing and those with a flat configuration of cochlear hearing loss. Predictive equations were derived to estimate loudness growth for these groups. The use of frequency-specific stimuli may provide more precise information on the nature of the relationship between loudness growth and the ABR wave V latency, particularly for listeners with sloping configurations of cochlear hearing loss.

  1. Tomography-guided palisade sacroiliac joint radiofrequency neurotomy versus celecoxib for ankylosing spondylitis: an open-label, randomized, and controlled trial.

    PubMed

    Zheng, Yongjun; Gu, Minghong; Shi, Dongping; Li, Mingli; Ye, Le; Wang, Xiangrui

    2014-09-01

    Sacroiliac joint (SIJ) pain is a common symptom in ankylosing spondylitis (AS). Palisade sacroiliac joint radiofrequency neurotomy (PSRN) is a novel treatment for the SIJ pain. In the current clinical trial, we treated AS patients with significant SIJ pain using PSRN under computed tomography guidance and compared the results with celecoxib treatment. The current study included 155 AS patients. Patients were randomly assigned to receive PSRN or celecoxib treatment (400 mg/day for 24 weeks). The primary endpoint was global pain intensity in visual analog scale, at week 12. Secondary endpoints included pain intensity at week 24, disease activity, functional and mobility capacities, and adverse events at week 24. In comparison with the baseline collected immediately prior to the interventions, global pain intensity was significantly lower at both 12 and 24 weeks after the treatment in both arms. Pain reduction was more robust in the PSRN arm (by more than 1.9 and 2.2 cm at 12 and 24 weeks in comparison with the celecoxib arm, P < 0.0001 for both). The PSRN was also more effective in improving physical function and spinal mobility (P < 0.05 vs. celecoxib for both). Gastrointestinal irritation was more frequent in the celecoxib arm than in the PSRN arm (P < 0.05). No severe complications were noted in either arm. PSRN is both efficacious and safe in managing SIJ pain in patients with AS.

  2. A high-speed, reconfigurable, channel- and time-tagged photon arrival recording system for intensity-interferometry and quantum optics experiments

    NASA Astrophysics Data System (ADS)

    Girish, B. S.; Pandey, Deepak; Ramachandran, Hema

    2017-08-01

    We present a compact, inexpensive multichannel module, APODAS (Avalanche Photodiode Output Data Acquisition System), capable of detecting 0.8 billion photons per second and providing real-time recording, on a computer hard disk, of channel- and time-tagged arrival information for up to 0.4 billion photons per second. Built around a Virtex-5 Field Programmable Gate Array (FPGA) unit, APODAS offers a temporal resolution of 5 nanoseconds with zero deadtime in data acquisition, utilising an efficient scheme for time and channel tagging and employing Gigabit ethernet for the transfer of data. Analysis tools have been developed on a Linux platform for multi-fold coincidence studies and time-delayed intensity interferometry. As an illustrative example, the second-order intensity correlation function (g(2)) of light from two commonly used sources in quantum optics, a coherent laser source and a dilute atomic vapour emitting spontaneously (a thermal source), is presented. With easy reconfigurability and with no restriction on the total record length, APODAS can be readily used for studies over various time scales. This is demonstrated by using APODAS to reveal Rabi oscillations on nanosecond time scales in the emission of ultracold atoms on the one hand, and to measure the second-order correlation function on millisecond time scales from tailored light sources on the other. The efficient and versatile performance of APODAS promises its utility in diverse fields, like quantum optics, quantum communication, nuclear physics, astrophysics and biology.
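
    A minimal NumPy sketch of how g(2)(tau) can be estimated offline from two channels of time-tagged arrivals, normalized so that uncorrelated Poissonian streams give g(2) = 1; this is a generic estimator, not APODAS's firmware or analysis code.

    ```python
    import numpy as np

    def g2_from_timestamps(t1, t2, max_tau, bin_width):
        """Second-order correlation g2(tau) from two channels of photon arrival times,
        built as a histogram of pairwise delays within +/- max_tau."""
        t1, t2 = np.sort(t1), np.sort(t2)
        edges = np.arange(-max_tau, max_tau + bin_width, bin_width)
        counts = np.zeros(edges.size - 1)
        for t in t1:                                        # pair each channel-1 photon with nearby channel-2 photons
            lo, hi = np.searchsorted(t2, (t - max_tau, t + max_tau))
            counts += np.histogram(t2[lo:hi] - t, bins=edges)[0]
        T = max(t1[-1], t2[-1]) - min(t1[0], t2[0])         # total acquisition time
        rate1, rate2 = t1.size / T, t2.size / T
        expected = rate1 * rate2 * T * bin_width            # coincidences expected for uncorrelated beams
        taus = 0.5 * (edges[:-1] + edges[1:])
        return taus, counts / expected
    ```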

  3. Handheld computers in critical care.

    PubMed

    Lapinsky, S E; Weshler, J; Mehta, S; Varkul, M; Hallett, D; Stewart, T E

    2001-08-01

    Computing technology has the potential to improve health care management but is often underutilized. Handheld computers are versatile and relatively inexpensive, bringing the benefits of computers to the bedside. We evaluated the role of this technology for managing patient data and accessing medical reference information, in an academic intensive-care unit (ICU). Palm III series handheld devices were given to the ICU team, each installed with medical reference information, schedules, and contact numbers. Users underwent a 1-hour training session introducing the hardware and software. Various patient data management applications were assessed during the study period. Qualitative assessment of the benefits, drawbacks, and suggestions was performed by an independent company, using focus groups. An objective comparison between a paper and electronic handheld textbook was achieved using clinical scenario tests. During the 6-month study period, the 20 physicians and 6 paramedical staff who used the handheld devices found them convenient and functional but suggested more comprehensive training and improved search facilities. Comparison of the handheld computer with the conventional paper text revealed equivalence. Access to computerized patient information improved communication, particularly with regard to long-stay patients, but changes to the software and the process were suggested. The introduction of this technology was well received despite differences in users' familiarity with the devices. Handheld computers have potential in the ICU, but systems need to be developed specifically for the critical-care environment.

  4. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
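
    A toy Monte Carlo hazard sketch in the spirit of the computational branch described above: magnitudes are drawn from a truncated Gutenberg-Richter law, mapped to site runup by a crude scaling with lognormal scatter, and accumulated into an exceedance curve. Every number and the runup scaling are illustrative assumptions, not results from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    years = 100_000                                      # length of the simulated catalogue
    annual_rate = 0.05                                   # events/yr above the minimum magnitude (assumed)
    b_value, m_min, m_max = 1.0, 7.0, 9.2                # truncated Gutenberg-Richter parameters (assumed)

    # sample event magnitudes by inverting the truncated Gutenberg-Richter CDF
    n_events = rng.poisson(annual_rate * years)
    u = rng.random(n_events)
    mags = m_min - np.log10(1 - u * (1 - 10 ** (-b_value * (m_max - m_min)))) / b_value

    # crude magnitude-to-runup scaling (metres) with lognormal scatter, purely illustrative
    runup = 10 ** (0.5 * (mags - 7.0)) * rng.lognormal(0.0, 0.5, n_events)

    # hazard curve: annual probability of exceeding each runup level
    levels = np.linspace(0.1, 10.0, 50)
    annual_exceedance = [(runup > h).sum() / years for h in levels]
    ```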

  5. Handheld computers in critical care

    PubMed Central

    Lapinsky, Stephen E; Weshler, Jason; Mehta, Sangeeta; Varkul, Mark; Hallett, Dave; Stewart, Thomas E

    2001-01-01

    Background Computing technology has the potential to improve health care management but is often underutilized. Handheld computers are versatile and relatively inexpensive, bringing the benefits of computers to the bedside. We evaluated the role of this technology for managing patient data and accessing medical reference information, in an academic intensive-care unit (ICU). Methods Palm III series handheld devices were given to the ICU team, each installed with medical reference information, schedules, and contact numbers. Users underwent a 1-hour training session introducing the hardware and software. Various patient data management applications were assessed during the study period. Qualitative assessment of the benefits, drawbacks, and suggestions was performed by an independent company, using focus groups. An objective comparison between a paper and electronic handheld textbook was achieved using clinical scenario tests. Results During the 6-month study period, the 20 physicians and 6 paramedical staff who used the handheld devices found them convenient and functional but suggested more comprehensive training and improved search facilities. Comparison of the handheld computer with the conventional paper text revealed equivalence. Access to computerized patient information improved communication, particularly with regard to long-stay patients, but changes to the software and the process were suggested. Conclusions The introduction of this technology was well received despite differences in users' familiarity with the devices. Handheld computers have potential in the ICU, but systems need to be developed specifically for the critical-care environment. PMID:11511337

  6. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    PubMed

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations and no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time-consuming and account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
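
    A minimal sketch (not NONMEM code) of the idea: a slowly varying, expensive part of the system is evaluated on a coarse time grid outside the solver and interpolated inside the right-hand side, and the end-point error against the fully evaluated system is checked. The example system itself is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def slow_expensive_part(t):
        """Placeholder for a slowly varying term that is costly to evaluate."""
        return np.exp(-0.01 * t) * (1.0 + 0.1 * np.sin(0.05 * t))

    t_grid = np.linspace(0.0, 100.0, 21)                 # coarse grid: 21 evaluations in total
    slow_tab = slow_expensive_part(t_grid)

    def rhs_full(t, y):
        return [-0.5 * y[0] * slow_expensive_part(t)]    # exact, evaluated at every internal solver step

    def rhs_tabulated(t, y):
        return [-0.5 * y[0] * np.interp(t, t_grid, slow_tab)]   # looked up from the precomputed table

    sol_full = solve_ivp(rhs_full, (0, 100), [1.0], rtol=1e-8)
    sol_tab = solve_ivp(rhs_tabulated, (0, 100), [1.0], rtol=1e-8)
    print(abs(sol_full.y[0, -1] - sol_tab.y[0, -1]))     # approximation error at t = 100
    ```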

  7. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
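
    A minimal sketch of the radiometric data-reduction step such a tool performs: ratio a wind-off reference image to the wind-on image and apply a Stern-Volmer style calibration, with coefficients either supplied a priori or fitted in situ from pressure taps. This illustrates the standard relation, not Legato's actual code; a paint-specific calibration may use a higher-order fit.

    ```python
    import numpy as np

    def intensity_ratio_to_pressure(I_on, I_ref, A, B, P_ref):
        """Stern-Volmer style conversion: I_ref / I = A + B * (P / P_ref)  ->  P."""
        return P_ref * ((I_ref / I_on) - A) / B

    def in_situ_calibration(ratio_at_taps, tap_pressures, P_ref):
        """Fit the calibration coefficients A and B from pressure-tap data by least squares."""
        X = np.column_stack([np.ones_like(tap_pressures), tap_pressures / P_ref])
        (A, B), *_ = np.linalg.lstsq(X, ratio_at_taps, rcond=None)
        return A, B
    ```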

  8. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation addresses the fundamental problem of isolating the real roots of nonlinear systems of equations by the Monte Carlo approach published by Bush Jones. The algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small number of variables, making it infeasible for large systems of equations. A computational technique was also needed to investigate a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm: the sequential algorithm was corrected and a parallel algorithm is presented. The parallel method has been formally analyzed and is compared with other known methods of root isolation, and its effectiveness, efficiency and enhanced overall performance in comparison to sequential processing are discussed. The message-passing model was used for the parallel processing, which is presented and implemented on the Intel/860 MIMD architecture. The parallel processing proposed in this research has been applied in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.

  9. Brain Plasticity following Intensive Bimanual Therapy in Children with Hemiparesis: Preliminary Evidence

    PubMed Central

    Weinstein, Maya; Myers, Vicki; Green, Dido; Schertz, Mitchell; Shiran, Shelly I.; Geva, Ronny; Artzi, Moran; Gordon, Andrew M.; Fattal-Valevski, Aviva; Ben Bashat, Dafna

    2015-01-01

    Neuroplasticity studies examining children with hemiparesis (CH) have focused predominantly on unilateral interventions. CH also have bimanual coordination impairments with bimanual interventions showing benefits. We explored neuroplasticity following hand-arm bimanual intensive therapy (HABIT) of 60 hours in twelve CH (6 females, mean age 11 ± 3.6 y). Serial behavioral evaluations and MR imaging including diffusion tensor (DTI) and functional (fMRI) imaging were performed before, immediately after, and at 6-week follow-up. Manual skills were assessed repeatedly with the Assisting Hand Assessment, Children's Hand Experience Questionnaire, and Jebsen-Taylor Test of Hand Function. Beta values, indicating the level of activation, and lateralization index (LI), indicating the pattern of brain activation, were computed from fMRI. White matter integrity of major fibers was assessed using DTI. 11/12 children showed improvement after intervention in at least one measure, with 8/12 improving on two or more tests. Changes were retained in 6/8 children at follow-up. Beta activation in the affected hemisphere increased at follow-up, and LI increased both after intervention and at follow-up. Correlations between LI and motor function emerged after intervention. Increased white matter integrity was detected in the corpus callosum and corticospinal tract after intervention in about half of the participants. Results provide first evidence for neuroplasticity changes following bimanual intervention in CH. PMID:26640717

  10. Low-level processing for real-time image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
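    For background on the chain-code representation mentioned above, a minimal sketch of Freeman 8-direction chain coding for an already-ordered edge path is shown below; the pixel path is hypothetical and the hardware/microprocessor implementation described in the abstract is not reproduced.

    ```python
    # Freeman 8-direction chain code: map each step between consecutive
    # edge pixels (dx, dy) to one of eight direction symbols 0..7.
    DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                  (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code(path):
        """path: ordered list of (x, y) pixel coordinates along an 8-connected edge."""
        return [DIRECTIONS[(x1 - x0, y1 - y0)]
                for (x0, y0), (x1, y1) in zip(path, path[1:])]

    # Hypothetical diagonal-then-horizontal edge segment.
    print(chain_code([(0, 0), (1, 1), (2, 2), (3, 2), (4, 2)]))  # -> [1, 1, 0, 0]
    ```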

  11. Computational wavelength resolution for in-line lensless holography: phase-coded diffraction patterns and wavefront group-sparsity

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Shevkunov, Igor; Petrov, Nikolay V.; Egiazarian, Karen

    2017-06-01

    In-line lensless holography is considered with a random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random due to the random phase modulation and noisy with a Poissonian noise distribution. It is shown by computational experiments that high-accuracy reconstructions can be achieved with resolution going up to two-thirds of the wavelength. With respect to the sensor pixel size, this is super-resolution by a factor of 32. The algorithm designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data is based on the general methodology developed for phase retrieval with a pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/ lasip/DDT/index3.html.
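    The forward model named in the abstract, Fourier-domain propagation with the angular spectrum transfer function, is standard; a minimal numpy sketch of that single propagation step, with hypothetical sampling parameters and without the phase-coding or Poissonian-noise aspects of the paper, is:

    ```python
    import numpy as np

    def angular_spectrum_propagate(u0, wavelength, dx, z):
        """Propagate a complex field u0 (N x N samples, pitch dx) over a distance z
        using the angular spectrum transfer function
        H(fx, fy) = exp(i * 2*pi * z * sqrt(1/wavelength**2 - fx**2 - fy**2))."""
        n = u0.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
        H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
        return np.fft.ifft2(np.fft.fft2(u0) * H)   # evanescent components dropped

    # Hypothetical example: a square aperture illuminated by a plane wave.
    n, dx, lam = 256, 2e-6, 532e-9
    u0 = np.zeros((n, n), complex)
    u0[n//2 - 20:n//2 + 20, n//2 - 20:n//2 + 20] = 1.0
    u1 = angular_spectrum_propagate(u0, lam, dx, z=1e-3)
    hologram = np.abs(u1) ** 2                      # intensity a sensor would record
    ```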

  12. Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera.

    PubMed

    Cao, Yanpeng; Tisse, Christel-Loic

    2014-02-01

    In this Letter, we propose an efficient and accurate solution to remove temperature-dependent nonuniformity effects introduced by the imaging optics. This single-image-based approach computes optics-related fixed pattern noise (FPN) by fitting the derivatives of a correction model to the gradient components locally computed on an infrared image. A modified bilateral filtering algorithm is applied to local pixel output variations, so that the refined gradients are most likely caused by the nonuniformity associated with the optics. The estimated bias field is subtracted from the raw infrared imagery to compensate for the intensity variations caused by the optics. The proposed method is fundamentally different from the existing nonuniformity correction (NUC) techniques developed for focal plane arrays (FPAs) and provides an essential image processing functionality to achieve completely shutterless NUC for uncooled long-wave infrared (LWIR) imaging systems.

  13. Molecular structure, vibrational spectral assignments (FT-IR and FT-RAMAN), NMR, NBO, HOMO-LUMO and NLO properties of O-methoxybenzaldehyde based on DFT calculations

    NASA Astrophysics Data System (ADS)

    Vennila, P.; Govindaraju, M.; Venkatesh, G.; Kamal, C.

    2016-05-01

    Fourier transform infrared (FT-IR) and Fourier transform Raman (FT-Raman) spectroscopic techniques have been employed to analyze the O-methoxybenzaldehyde (OMB) molecule. The fundamental vibrational frequencies and the intensities of the vibrational bands were evaluated using density functional theory (DFT). The vibrational analysis of the stable isomer of OMB has been carried out by FT-IR and FT-Raman in combination with the theoretical method. The first-order hyperpolarizability and the anisotropy polarizability invariant were computed by the DFT method. The atomic charges, hardness, softness, ionization potential, electronegativity, HOMO-LUMO energies, and electrophilicity index have been calculated. The 13C and 1H nuclear magnetic resonance (NMR) chemical shifts have also been obtained by the GIAO method. The molecular electrostatic potential (MEP) has been calculated by the DFT method. Electronic excitation energies, oscillator strengths, and excited-state characteristics were computed by the closed-shell singlet calculation method.

  14. Prediction of serine/threonine phosphorylation sites in bacteria proteins.

    PubMed

    Li, Zhengpeng; Wu, Ping; Zhao, Yuanyuan; Liu, Zexian; Zhao, Wei

    2015-01-01

    As a critical post-translational modification, phosphorylation plays important roles in regulating various biological processes, and recent studies suggest that phosphorylation in bacteria is also critical for signaling transduction. Since identification of phosphorylation substrates and sites is fundamental for understanding the phosphorylation-mediated regulatory mechanism, a number of studies have contributed to this area. Because experimental identification of phosphorylation sites is time-consuming and labor-intensive, computational predictions attract much attention for the convenience of providing helpful information. However, although there are a large number of computational studies in eukaryotes, predictions in bacteria are still rare. In this study, we present a new predictor, cPhosBac, to predict phosphorylated serine/threonine sites in bacterial proteins. The predictor is developed with the CKSAAP algorithm combined with motif length selection to optimize the prediction, and it achieves promising performance. The online service of cPhosBac is available at: http://netalign.ustc.edu.cn/cphosbac/ .

  15. A unified account of gloss and lightness perception in terms of gamut relativity.

    PubMed

    Vladusich, Tony

    2013-08-01

    A recently introduced computational theory of visual surface representation, termed gamut relativity, overturns the classical assumption that brightness, lightness, and transparency constitute perceptual dimensions corresponding to the physical dimensions of luminance, diffuse reflectance, and transmittance, respectively. Here I extend the theory to show how surface gloss and lightness can be understood in a unified manner in terms of the vector computation of "layered representations" of surface and illumination properties, rather than as perceptual dimensions corresponding to diffuse and specular reflectance, respectively. The theory simulates the effects of image histogram skewness on surface gloss/lightness and lightness constancy as a function of specular highlight intensity. More generally, gamut relativity clarifies, unifies, and generalizes a wide body of previous theoretical and experimental work aimed at understanding how the visual system parses the retinal image into layered representations of surface and illumination properties.

  16. Reading data stored in the state of metastable defects in silicon using band-band photoluminescence: Proof of concept and physical limits to the data storage density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rougieux, F. E.; Macdonald, D.

    2014-03-24

    The state of bistable defects in crystalline silicon such as iron-boron pairs or the boron-oxygen defect can be changed at room temperature. In this letter, we experimentally demonstrate that the chemical state of a group of defects can be changed to represent a bit of information. The state can then be read without direct contact via the intensity of the emitted band-band photoluminescence signal of the group of defects, via their impact on the carrier lifetime. The theoretical limit of the information density is then computed. The information density is shown to be low for two-dimensional storage but significant for three-dimensional data storage. Finally, we compute the maximum storage capacity as a function of the lower limit of the photoluminescence detector sensitivity.

  17. Parent-child attitude congruence on type and intensity of physical activity: testing multiple mediators of sedentary behavior in older children.

    PubMed

    Anderson, Cheryl B; Hughes, Sheryl O; Fuemmeler, Bernard F

    2009-07-01

    This study examined parent-child attitudes on the value of specific types and intensities of physical activity, which may explain gender differences in child activity, and evaluated physical activity as a mechanism to reduce time spent in sedentary behaviors. A community sample of 681 parents and 433 children (mean age 9.9 years) reported attitudes on the importance of vigorous and moderate intensity team and individually performed sports/activities, as well as household chores. Separate structural models (LISREL 8.7) for girls and boys tested whether parental attitudes were related to child TV and computer use via child attitudes, sport team participation, and physical activity, controlling for demographic factors. Outcome measures were child 7-day physical activity, sport team participation, weekly TV time, and computer time. Parent-child attitude congruence was more prevalent among boys, and attitudes varied by ethnicity, parent education, and number of children. Positive parent-child attitudes toward vigorous team sports were related to increased team participation and physical activity, as well as reduced TV and computer time in boys and girls. Valuing moderate intensity household chores, such as cleaning house and doing laundry, was related to decreased team participation and increased TV in boys. Only organized team sports, not general physical activity, were related to reduced TV and computer time. Results support parents' role in socializing children's achievement task values, affecting child activity by transferring specific attitudes. Valuing vigorous intensity sports provided the most benefits to activity and reduction of sedentary behavior, while valuing household chores had unexpected negative effects.

  18. Microgravity

    NASA Image and Video Library

    2001-06-06

    X-rays diffracted from a well-ordered protein crystal create sharp patterns of scattered light on film. A computer can use these patterns to generate a model of a protein molecule. To analyze the selected crystal, an X-ray crystallographer shines X-rays through the crystal. Unlike a single dental X-ray, which produces a shadow image of a tooth, these X-rays have to be taken many times from different angles to produce a pattern from the scattered light, a map of the intensity of the X-rays after they diffract through the crystal. The X-rays bounce off the electron clouds that form the outer structure of each atom. A flawed crystal will yield a blurry pattern; a well-ordered protein crystal yields a series of sharp diffraction patterns. From these patterns, researchers build an electron density map. With powerful computers and a lot of calculations, scientists can use the electron density patterns to determine the structure of the protein and make a computer-generated model of the structure. The models let researchers improve their understanding of how the protein functions. They also allow scientists to look for receptor sites and active areas that control a protein's function and role in the progress of diseases. From there, pharmaceutical researchers can design molecules that fit the active site, much like a key and lock, so that the protein is locked without affecting the rest of the body. This is called structure-based drug design.

  19. FT-IR, FT-Raman, NMR spectra, density functional computations of the vibrational assignments (for monomer and dimer) and molecular geometry of anticancer drug 7-amino-2-methylchromone

    NASA Astrophysics Data System (ADS)

    Mariappan, G.; Sundaraganesan, N.

    2014-04-01

    Vibrational assignments for the 7-amino-2-methylchromone (abbreviated as 7A2MC) molecule using a combination of experimental vibrational spectroscopic measurements and ab initio computational methods are reported. The optimized geometry, intermolecular hydrogen bonding, first-order hyperpolarizability and harmonic vibrational wavenumbers of 7A2MC have been investigated with the help of the B3LYP density functional theory method. The calculated molecular geometry parameters, the theoretically computed vibrational frequencies for the monomer and dimer, and the relative peak intensities were compared with experimental data. DFT calculations using the B3LYP method and the 6-31+G(d,p) basis set were found to yield results that are very comparable to the experimental IR and Raman spectra. Detailed vibrational assignments were performed with DFT calculations and the potential energy distribution (PED) obtained from the Vibrational Energy Distribution Analysis (VEDA) program. A Natural Bond Orbital (NBO) study revealed the characteristics of the electronic delocalization of the molecular structure. 13C and 1H NMR spectra have been recorded, and the 13C and 1H nuclear magnetic resonance chemical shifts of the molecule have been calculated using the gauge independent atomic orbital (GIAO) method. Furthermore, all of the calculated values were analyzed by linear fitting with correlation coefficients and show strong correlation with the experimental data.

  20. Integrating Computing across the Curriculum: The Impact of Internal Barriers and Training Intensity on Computer Integration in the Elementary School Classroom

    ERIC Educational Resources Information Center

    Coleman, LaToya O.; Gibson, Philip; Cotten, Shelia R.; Howell-Moroney, Michael; Stringer, Kristi

    2016-01-01

    This study examines the relationship between internal barriers, professional development, and computer integration outcomes among a sample of fourth- and fifth-grade teachers in an urban, low-income school district in the Southeastern United States. Specifically, we examine the impact of teachers' computer attitudes, computer anxiety, and computer…

  1. Overview 1993: Computational applications

    NASA Technical Reports Server (NTRS)

    Benek, John A.

    1993-01-01

    Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.

  2. Integrated beam orientation and scanning-spot optimization in intensity-modulated proton therapy for brain and unilateral head and neck tumors.

    PubMed

    Gu, Wenbo; O'Connor, Daniel; Nguyen, Dan; Yu, Victoria Y; Ruan, Dan; Dong, Lei; Sheng, Ke

    2018-04-01

    Intensity-Modulated Proton Therapy (IMPT) is the state-of-the-art method of delivering proton radiotherapy. Previous research has been mainly focused on optimization of scanning spots with manually selected beam angles. Due to the computational complexity, the potential benefit of simultaneously optimizing beam orientations and spot pattern could not be realized. In this study, we developed a novel integrated beam orientation optimization (BOO) and scanning-spot optimization algorithm for intensity-modulated proton therapy (IMPT). A brain chordoma and three unilateral head-and-neck patients with a maximal target size of 112.49 cm³ were included in this study. A total number of 1162 noncoplanar candidate beams evenly distributed across 4π steradians were included in the optimization. For each candidate beam, the pencil-beam doses of all scanning spots covering the PTV and a margin were calculated. The beam angle selection and spot intensity optimization problem was formulated to include three terms: a dose fidelity term to penalize the deviation of PTV and OAR doses from the ideal dose distribution; an L1-norm sparsity term to reduce the number of active spots and improve delivery efficiency; a group sparsity term to control the number of active beams between 2 and 4. For the group sparsity term, the convex L2,1-norm and the nonconvex L2,1/2-norm were tested. For the dose fidelity term, both a quadratic function and a linearized equivalent uniform dose (LEUD) cost function were implemented. The optimization problem was solved using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The IMPT BOO method was tested on three head-and-neck patients and one skull base chordoma patient. The results were compared with IMPT plans created using column generation selected beams or manually selected beams. The L2,1-norm plan selected spatially aggregated beams, indicating potential degeneracy using this norm. The L2,1/2-norm was able to select spatially separated beams and achieve smaller deviation from the ideal dose. In the L2,1/2-norm plans, the [mean dose, maximum dose] of OAR were reduced by an average of [2.38%, 4.24%] and [2.32%, 3.76%] of the prescription dose for the quadratic and LEUD cost function, respectively, compared with the IMPT plan using manual beam selection while maintaining the same PTV coverage. The L2,1/2 group sparsity plans were dosimetrically superior to the column generation plans as well. Besides beam orientation selection, spot sparsification was observed. Generally, with the quadratic cost function, 30%~60% of the spots in the selected beams remained active. With the LEUD cost function, the percentages of active spots were in the range of 35%~85%. The BOO-IMPT run time was approximately 20 min. This work shows the first IMPT approach integrating noncoplanar BOO and scanning-spot optimization in a single mathematical framework. This method is computationally efficient, dosimetrically superior and produces delivery-friendly IMPT plans. © 2018 American Association of Physicists in Medicine.
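    The optimization structure described above (dose fidelity plus L1 spot sparsity plus a group-sparsity term over beams, solved with FISTA) has the form of a sparse group lasso. The following toy sketch, with a random dose-influence matrix, quadratic fidelity only, and the convex L2,1 penalty (not the nonconvex L2,1/2 variant), illustrates that structure; it is not the authors' clinical implementation, and all sizes and weights are hypothetical.

    ```python
    import numpy as np

    def prox_sparse_group(x, groups, l1, l21):
        """Prox of l1*||x||_1 + l21*sum_g ||x_g||_2 (sparse group lasso):
        elementwise soft-thresholding followed by group-wise shrinkage."""
        x = np.sign(x) * np.maximum(np.abs(x) - l1, 0.0)
        for g in groups:
            norm = np.linalg.norm(x[g])
            if norm > 0:
                x[g] = x[g] * max(0.0, 1.0 - l21 / norm)
        return x

    def fista(A, d, groups, l1=0.05, l21=0.5, iters=300):
        """Minimise 0.5*||A x - d||^2 + l1*||x||_1 + l21*sum_g ||x_g||_2."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        z, t = x.copy(), 1.0
        for _ in range(iters):
            grad = A.T @ (A @ z - d)
            x_new = prox_sparse_group(z - grad / L, groups, l1 / L, l21 / L)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    # Toy problem: 6 candidate "beams" of 20 "spots" each.
    rng = np.random.default_rng(0)
    n_beams, spots = 6, 20
    A = rng.random((100, n_beams * spots))       # toy dose-influence matrix
    d = rng.random(100)                          # toy target dose vector
    groups = [np.arange(b * spots, (b + 1) * spots) for b in range(n_beams)]
    x = fista(A, d, groups)
    active = [b for b, g in enumerate(groups) if np.linalg.norm(x[g]) > 1e-8]
    print("active beams:", active)
    ```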

  3. Evaluating virtual hosted desktops for graphics-intensive astronomy

    NASA Astrophysics Data System (ADS)

    Meade, B. F.; Fluke, C. J.

    2018-04-01

    Visualisation of data is critical to understanding astronomical phenomena. Today, many instruments produce datasets that are too big to be downloaded to a local computer, yet many of the visualisation tools used by astronomers are deployed only on desktop computers. Cloud computing is increasingly used to provide a computation and simulation platform in astronomy, but it also offers great potential as a visualisation platform. Virtual hosted desktops, with graphics processing unit (GPU) acceleration, allow interactive, graphics-intensive desktop applications to operate co-located with astronomy datasets stored in remote data centres. By combining benchmarking and user experience testing, with a cohort of 20 astronomers, we investigate the viability of replacing physical desktop computers with virtual hosted desktops. In our work, we compare two Apple MacBook computers (one old and one new, representing opposite ends of the useful hardware lifetime) with two virtual hosted desktops: one commercial (Amazon Web Services) and one in a private research cloud (the Australian NeCTAR Research Cloud). For two-dimensional image-based tasks and graphics-intensive three-dimensional operations - typical of astronomy visualisation workflows - we found that benchmarks do not necessarily provide the best indication of performance. When compared to typical laptop computers, virtual hosted desktops can provide a better user experience, even with lower performing graphics cards. We also found that virtual hosted desktops are equally simple to use, provide greater flexibility in choice of configuration, and may actually be a more cost-effective option for typical usage profiles.

  4. Age-related differences in muscle fatigue vary by contraction type: a meta-analysis.

    PubMed

    Avin, Keith G; Law, Laura A Frey

    2011-08-01

    During senescence, despite the loss of strength (force-generating capability) associated with sarcopenia, muscle endurance may improve for isometric contractions. The purpose of this study was to perform a systematic meta-analysis of young versus older adults, considering likely moderators (ie, contraction type, joint, sex, activity level, and task intensity). A 2-stage systematic review identified potential studies from PubMed, CINAHL, PEDro, EBSCOhost: ERIC, EBSCOhost: Sportdiscus, and The Cochrane Library. Studies reporting fatigue tasks (voluntary activation) performed at a relative intensity in both young (18-45 years of age) and old (≥ 55 years of age) adults who were healthy were considered. Sample size, mean and variance outcome data (ie, fatigue index or endurance time), joint, contraction type, task intensity (percentage of maximum), sex, and activity levels were extracted. Effect sizes were (1) computed for all data points; (2) subgrouped by contraction type, sex, joint or muscle group, intensity, or activity level; and (3) further subgrouped between contraction type and the remaining moderators. Out of 3,457 potential studies, 46 publications (with 78 distinct effect size data points) met all inclusion criteria. A lack of available data limited subgroup analyses (ie, sex, intensity, joint), as did a disproportionate spread of data (most intensities ≥ 50% of maximum voluntary contraction). Overall, older adults were able to sustain relative-intensity tasks significantly longer or with less force decay than younger adults (effect size=0.49). However, this age-related difference was present only for sustained and intermittent isometric contractions, whereas this age-related advantage was lost for dynamic tasks. When controlling for contraction type, the additional modifiers played minor roles. Identifying muscle endurance capabilities in the older adult may provide an avenue to improve functional capabilities, despite a clearly established decrement in peak torque.

  5. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations complemented by mathematical/statistical libraries is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.

  6. Information granules in image histogram analysis.

    PubMed

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this concept in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
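    The granular-computing formulation cannot be reconstructed from the abstract alone; as a generic illustration of a two-parameter, range-limited intensity mapping of the kind the method is compared against (a standard window/level operation on CT-like data), a sketch might be:

    ```python
    import numpy as np

    def window_level(img, center, width):
        """Two-parameter, range-limited enhancement: linearly stretch the intensity
        window [center - width/2, center + width/2] to [0, 1], clipping outside it."""
        lo, hi = center - width / 2.0, center + width / 2.0
        return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

    # Hypothetical CT-like slice in Hounsfield units, viewed with a soft-tissue window.
    rng = np.random.default_rng(0)
    ct_slice = rng.normal(40.0, 200.0, (512, 512))
    enhanced = window_level(ct_slice, center=40, width=400)
    ```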

  7. Potential energy surface, dipole moment surface and the intensity calculations for the 10 μm, 5 μm and 3 μm bands of ozone

    NASA Astrophysics Data System (ADS)

    Polyansky, Oleg L.; Zobov, Nikolai F.; Mizus, Irina I.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan

    2018-05-01

    Monitoring ozone concentrations in the Earth's atmosphere using spectroscopic methods is a major activity undertaken both from the ground and from space. However, there are long-running issues of consistency between measurements made at infrared (IR) and ultraviolet (UV) wavelengths. In addition, key O3 IR bands at 10 μm, 5 μm and 3 μm also yield results which differ by a few percent when used for retrievals. These problems stem from the underlying laboratory measurements of the line intensities. Here we use quantum chemical techniques, first principles electronic structure and variational nuclear-motion calculations, to address this problem. A new high-accuracy ab initio dipole moment surface (DMS) is computed. Several spectroscopically-determined potential energy surfaces (PESs) are constructed by fitting to empirical energy levels in the region below 7000 cm-1 starting from an ab initio PES. Nuclear motion calculations using these new surfaces allow the unambiguous determination of the intensities of 10 μm band transitions, and the computation of the intensities of the 10 μm and 5 μm bands within their experimental error. A decrease in intensities within the 3 μm band is predicted, which appears consistent with atmospheric retrievals. The PES and DMS form a suitable starting point both for the computation of comprehensive ozone line lists and for future calculations of electronic transition intensities.

  8. Real-time non-rigid target tracking for ultrasound-guided clinical interventions

    NASA Astrophysics Data System (ADS)

    Zachiu, C.; Ries, M.; Ramaekers, P.; Guey, J.-L.; Moonen, C. T. W.; de Senneville, B. Denis

    2017-10-01

    Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target anatomy during the intervention would be beneficial for such applications. Since the aforementioned interventions are often conducted under B-mode ultrasound (US) guidance, target tracking can be achieved via image registration, by comparing the acquired US images to a separate image established as positional reference. However, such US images are intrinsically altered by speckle noise, introducing incoherent gray-level intensity variations. This may prove problematic for existing intensity-based registration methods. In the current study we address US-based target tracking by employing the recently proposed EVolution registration algorithm. The method is, by construction, robust to transient gray-level intensities. Instead of directly matching image intensities, EVolution aligns similar contrast patterns in the images. Moreover, the displacement is computed by evaluating a matching criterion for image sub-regions rather than on a point-by-point basis, which typically provides more robust motion estimates. However, unlike similar previously published approaches, which assume rigid displacements in the image sub-regions, the EVolution algorithm integrates the matching criterion in a global functional, allowing the estimation of an elastic dense deformation. The approach was validated for soft tissue tracking under free-breathing conditions on the abdomen of seven healthy volunteers. Contact echography was performed on all volunteers, while three of the volunteers also underwent standoff echography. Each of the two modalities is predominantly specific to a particular type of non- or mini-invasive clinical intervention. The method demonstrated on average an accuracy of  ˜1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second make the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.

  9. Real-time non-rigid target tracking for ultrasound-guided clinical interventions.

    PubMed

    Zachiu, C; Ries, M; Ramaekers, P; Guey, J-L; Moonen, C T W; de Senneville, B Denis

    2017-10-04

    Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target anatomy during the intervention would be beneficial for such applications. Since the aforementioned interventions are often conducted under B-mode ultrasound (US) guidance, target tracking can be achieved via image registration, by comparing the acquired US images to a separate image established as positional reference. However, such US images are intrinsically altered by speckle noise, introducing incoherent gray-level intensity variations. This may prove problematic for existing intensity-based registration methods. In the current study we address US-based target tracking by employing the recently proposed EVolution registration algorithm. The method is, by construction, robust to transient gray-level intensities. Instead of directly matching image intensities, EVolution aligns similar contrast patterns in the images. Moreover, the displacement is computed by evaluating a matching criterion for image sub-regions rather than on a point-by-point basis, which typically provides more robust motion estimates. However, unlike similar previously published approaches, which assume rigid displacements in the image sub-regions, the EVolution algorithm integrates the matching criterion in a global functional, allowing the estimation of an elastic dense deformation. The approach was validated for soft tissue tracking under free-breathing conditions on the abdomen of seven healthy volunteers. Contact echography was performed on all volunteers, while three of the volunteers also underwent standoff echography. Each of the two modalities is predominantly specific to a particular type of non- or mini-invasive clinical intervention. The method demonstrated on average an accuracy of  ∼1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second make the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.

  10. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  11. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    PubMed Central

    Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation. PMID:26681933

  12. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    PubMed

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  13. Computational laser intensity stabilisation for organic molecule concentration estimation in low-resource settings

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander

    2017-03-01

    An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers can allow for the concentration of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended and there can be effects such as mode hopping and thermal drift that can cause time-varying intensity fluctuations. The causes of these effects can be from the surrounding environment, where either an unstable current source is used or the temperature of the surrounding environment is not temporally stable. This intensity fluctuation can cause bias and error in typical organic molecule concentration estimation techniques. In a low-resource setting where cost must be limited and where environmental factors, like unregulated power supplies and temperature, cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations from electrical and thermal instabilities without the use of additional hardware. This method will allow for consistent intensities across all polarization measurements for accurate estimates of organic molecule concentrations.
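    The specific Bayesian state estimator used by the authors is not given in the abstract; a minimal sketch of the general idea, a scalar random-walk Kalman filter tracking a drifting laser intensity from noisy photodetector readings (all noise parameters hypothetical), could be:

    ```python
    import numpy as np

    def kalman_track_intensity(measurements, q=1e-4, r=1e-2):
        """Scalar Kalman filter: the state is the true laser intensity, modelled as a
        random walk (process variance q) observed with sensor variance r."""
        x, p = measurements[0], 1.0          # initial state estimate and variance
        estimates = []
        for z in measurements:
            p = p + q                        # predict (random-walk model)
            k = p / (p + r)                  # Kalman gain
            x = x + k * (z - x)              # update with the new reading
            p = (1 - k) * p
            estimates.append(x)
        return np.array(estimates)

    # Hypothetical data: slow thermal drift, a mode-hop-like jump, and sensor noise.
    rng = np.random.default_rng(1)
    t = np.arange(2000)
    true_intensity = 1.0 + 0.05 * np.sin(t / 400) + 0.02 * (t > 1200)
    readings = true_intensity + rng.normal(0.0, 0.1, t.size)
    smoothed = kalman_track_intensity(readings)
    # 'smoothed' could then be used to normalise polarization measurements.
    ```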

  14. Effect of yoga on musculoskeletal discomfort and motor functions in professional computer users.

    PubMed

    Telles, Shirley; Dash, Manoj; Naveen, K V

    2009-01-01

    The self-rated musculoskeletal discomfort, hand grip strength, tapping speed, and low back and hamstring flexibility (based on a sit and reach task) were assessed in 291 professional computer users. They were then randomized as Yoga (YG; n=146) and Wait-list control (WL; n=145) groups. Follow-up assessments for both groups were after 60 days during which the YG group practiced yoga for 60 minutes daily, for 5 days in a week. The WL group spent the same time in their usual recreational activities. At the end of 60 days, the YG group (n=62) showed a significant decrease in the frequency, intensity and degree of interference due to musculoskeletal discomfort, an increase in bilateral hand grip strength, the right hand tapping speed, and low back and hamstring flexibility (repeated measures ANOVA and post hoc analysis with Bonferroni adjustment). In contrast, the WL group (n=56) showed an increase in musculoskeletal discomfort and a decrease in left hand tapping speed. The results suggest that yoga practice is a useful addition to the routine of professional computer users.

  15. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
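    As a minimal illustration of the baseline scheme the thesis modifies, plain block vector quantization with a nearest-codeword search, a numpy sketch with a random codebook and a hypothetical block size follows; the distributed blocks, weighted distortion functions, and neural-network coder designs are not reproduced here.

    ```python
    import numpy as np

    def vq_encode(image, codebook, block=4):
        """Replace each block x block patch by the index of its nearest codeword
        (squared Euclidean distance)."""
        h, w = image.shape
        h, w = h - h % block, w - w % block              # crop to whole blocks
        patches = (image[:h, :w]
                   .reshape(h // block, block, w // block, block)
                   .swapaxes(1, 2)
                   .reshape(-1, block * block))
        dists = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        return dists.argmin(axis=1)

    # Hypothetical 256-codeword codebook (in practice trained, e.g. with LBG).
    rng = np.random.default_rng(2)
    codebook = rng.random((256, 16))
    image = rng.random((64, 64))
    indices = vq_encode(image, codebook)
    bits_per_pixel = np.log2(len(codebook)) / 16         # 8 bits per 4x4 block = 0.5 bpp
    print(indices.shape, bits_per_pixel)
    ```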

  16. A model of proto-object based saliency

    PubMed Central

    Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph

    2013-01-01

    Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object based, bottom-up attention. The model rivals the performance of state of the art non-biologically plausible feature based algorithms (and outperforms biologically plausible feature based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601

  17. Is Model-Based Development a Favorable Approach for Complex and Safety-Critical Computer Systems on Commercial Aircraft?

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.

  18. Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William

    1986-01-01

    The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.

  19. Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.

    ERIC Educational Resources Information Center

    Scheeline, Alexander; Mork, Brian J.

    1988-01-01

    Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)

  20. Surface-Constrained Volumetric Brain Registration Using Harmonic Mappings

    PubMed Central

    Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.

    2015-01-01

    In order to compare anatomical and functional brain imaging data across subjects, the images must first be registered to a common coordinate system in which anatomical features are aligned. Intensity-based volume registration methods can align subcortical structures well, but the variability in sulcal folding patterns typically results in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe a method for volumetric registration that also produces an accurate one-to-one point correspondence between cortical surfaces. This is achieved by first parameterizing and aligning the cortical surfaces using sulcal landmarks. We then use a constrained harmonic mapping to extend this surface correspondence to the entire cortical volume. Finally, this mapping is refined using an intensity-based warp. We demonstrate the utility of the method by applying it to T1-weighted magnetic resonance images (MRI). We evaluate the performance of our proposed method relative to existing methods that use only intensity information; for this comparison we compute the inter-subject alignment of expert-labeled sub-cortical structures after registration. PMID:18092736

  1. Geometry and Raman spectra of P.R. 255 and its furo-furanone analogue

    NASA Astrophysics Data System (ADS)

    Luňák, Stanislav, Jr.; Frumarová, Božena; Vyňuchal, Jan; Hrdina, Radim

    2009-05-01

    Fourier transform Raman spectra of two π-isoelectronic compounds, 3,6-diphenyl-2,5-dihydro-pyrrolo-[3,4-c]pyrrole-1,4-dione (BPPB, C.I. Pigment Red 255) and 3,6-diphenyl-2,5-dihydro-furo-[3,4-c]furanone (BFFB), which share the same 1,4-diphenyl-buta-1,3-diene (DPB) backbone, were measured for the first time in the polycrystalline phase. The ground-state geometry and vibrational frequencies, together with Raman intensities, were computed by density functional theory (DFT: B3LYP/6-311++G(d,p)). All intense observed Raman frequencies were identified as totally symmetric. The difference in the carbon-carbon bond lengths of BPPB and BFFB compared to DPB, which correlates very well with the shifts of the C=C and C-C stretching mode frequencies, was explained by aromatization of the central butadiene unit bound in the diketo-pyrrolo-pyrrole and furo-furanone heterocycles. A strong coupling of modes was observed for BFFB, selectively enhancing the intensity of one peak at 1593 cm-1 in the C=C stretching region and one peak at 1372 cm-1 in the C-C stretching region. The C=O stretching and N-H bending modes of BPPB are significantly affected by intermolecular hydrogen bonding.

  2. Effects of cross correlation on the relaxation time of a bistable system driven by cross-correlated noise

    NASA Astrophysics Data System (ADS)

    Mei, Dongcheng; Xie, Chongwei; Zhang, Li

    2003-11-01

    We study the effects of correlations between additive and multiplicative noise on the relaxation time of a bistable system driven by cross-correlated noise. Using the projection-operator method, we derive an analytic expression for the relaxation time Tc of the system as a function of the additive (α) and multiplicative (D) noise intensities, the noise correlation intensity λ, and the noise correlation time τ. After introducing the dimensionless noise-intensity ratio R = D/α and performing numerical computations, we find the following: (i) For R<1, the relaxation time Tc increases as R increases. (ii) For R⩾1, there is a one-peak structure in the Tc-R plot and the effects of the cross-correlated noise on the relaxation time are very notable. (iii) For R<1, Tc hardly changes with either λ or τ, whereas for R⩾1, Tc decreases as λ increases but increases as τ increases. Thus λ and τ play opposite roles in Tc: λ enhances the fluctuation decay of the dynamical variable, while τ slows it down.
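    For reference, studies of this kind are commonly based on a Langevin equation for a bistable potential driven by correlated multiplicative and additive Gaussian noises. A typical form, stated here only as an assumption since the abstract does not reproduce the model, is:

    ```latex
    \begin{aligned}
    \dot{x} &= x - x^{3} + x\,\xi(t) + \eta(t),\\
    \langle \xi(t)\,\xi(t') \rangle &= 2D\,\delta(t-t'), \qquad
    \langle \eta(t)\,\eta(t') \rangle = 2\alpha\,\delta(t-t'),\\
    \langle \xi(t)\,\eta(t') \rangle &= \frac{\lambda\sqrt{D\alpha}}{\tau}\,
    \exp\!\left(-\frac{|t-t'|}{\tau}\right),
    \end{aligned}
    ```

    where D and α are the multiplicative and additive noise intensities, λ is the cross-correlation intensity, and τ is the correlation time, matching the parameters named in the abstract.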

  3. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor imagery has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently co-registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to LiDAR point clouds. The LiDAR DSM and intensity information are utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation in higher dimensions and the computational cost. The reliability of registration is thereby improved through the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results are discussed.
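    For context on the similarity measure used above, a minimal numpy sketch of estimating mutual information between two aligned images from their joint intensity histogram is given below; the bin count and images are hypothetical, and the paper's combined and local multivariable extensions are not shown.

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Estimate MI(A; B) from the joint intensity histogram of two aligned images."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image A
        p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image B
        nz = p_ab > 0
        return float((p_ab[nz] * np.log((p_ab / (p_a * p_b))[nz])).sum())

    # Hypothetical check: an image is more informative about itself than about noise.
    rng = np.random.default_rng(3)
    optical = rng.random((128, 128))
    print(mutual_information(optical, optical) >= mutual_information(optical, rng.random((128, 128))))
    ```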

  4. Intrinsic Functional Connectivity in the Adult Brain and Success in Second-Language Learning.

    PubMed

    Chai, Xiaoqian J; Berken, Jonathan A; Barbeau, Elise B; Soles, Jennika; Callahan, Megan; Chen, Jen-Kai; Klein, Denise

    2016-01-20

    There is considerable variability in an individual's ability to acquire a second language (L2) during adulthood. Using resting-state fMRI data acquired before training in English speakers who underwent a 12 week intensive French immersion training course, we investigated whether individual differences in intrinsic resting-state functional connectivity relate to a person's ability to acquire an L2. We focused on two key aspects of language processing--lexical retrieval in spontaneous speech and reading speed--and computed whole-brain functional connectivity from two regions of interest in the language network, namely the left anterior insula/frontal operculum (AI/FO) and the visual word form area (VWFA). Connectivity between the left AI/FO and left posterior superior temporal gyrus (STG) and between the left AI/FO and dorsal anterior cingulate cortex correlated positively with improvement in L2 lexical retrieval in spontaneous speech. Connectivity between the VWFA and left mid-STG correlated positively with improvement in L2 reading speed. These findings are consistent with the different language functions subserved by subcomponents of the language network and suggest that the human capacity to learn an L2 can be predicted by an individual's intrinsic functional connectivity within the language network. Significance statement: There is considerable variability in second-language learning abilities during adulthood. We investigated whether individual differences in intrinsic functional connectivity in the adult brain relate to success in second-language learning, using resting-state functional magnetic resonance imaging in English speakers who underwent a 12 week intensive French immersion training course. We found that pretraining functional connectivity within two different language subnetworks correlated strongly with learning outcome in two different language skills: lexical retrieval in spontaneous speech and reading speed. Our results suggest that the human capacity to learn a second language can be predicted by an individual's intrinsic functional connectivity within the language network. Copyright © 2016 the authors 0270-6474/16/360755-07$15.00/0.

  5. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  6. The Effects of Computer Assisted English Instruction on High School Preparatory Students' Attitudes towards Computers and English

    ERIC Educational Resources Information Center

    Ates, Alev; Altunay, Ugur; Altun, Eralp

    2006-01-01

    The aim of this research was to discern the effects of computer assisted English instruction on English language preparatory students' attitudes towards computers and English in a Turkish-medium high school with an intensive English program. A quasi-experimental time series research design, also called "before-after" or "repeated…

  7. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters in the computational sciences, including computational physics, is vital because it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid configuration, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  8. Molecular conformational analysis, vibrational spectra and normal coordinate analysis of trans-1,2-bis(3,5-dimethoxy phenyl)-ethene based on density functional theory calculations.

    PubMed

    Joseph, Lynnette; Sajan, D; Chaitanya, K; Isac, Jayakumary

    2014-03-25

    The conformational behavior and structural stability of trans-1,2-bis(3,5-dimethoxy phenyl)-ethene (TDBE) were investigated by using density functional theory (DFT) method with the B3LYP/6-311++G(d,p) basis set combination. The vibrational wavenumbers of TDBE were computed at DFT level and complete vibrational assignments were made on the basis of normal coordinate analysis calculations (NCA). The DFT force field transformed to natural internal coordinates was corrected by a well-established set of scale factors that were found to be transferable to the title compound. The infrared and Raman spectra were also predicted from the calculated intensities. The observed Fourier transform infrared (FTIR) and Fourier transform (FT) Raman vibrational wavenumbers were analyzed and compared with the theoretically predicted vibrational spectra. Comparison of the simulated spectra with the experimental spectra provides important information about the ability of the computational method to describe the vibrational modes. Information about the size, shape, charge density distribution and site of chemical reactivity of the molecules has been obtained by mapping electron density isosurface with electrostatic potential surfaces (ESP). Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Adding a solar-radiance function to the Hošek-Wilkie skylight model.

    PubMed

    Hošek, Lukáš; Wilkie, Alexander

    2013-01-01

    One prerequisite for realistic renderings of outdoor scenes is the proper capturing of the sky's appearance. Currently, an explicit simulation of light scattering in the atmosphere isn't computationally feasible, and won't be in the foreseeable future. Captured luminance patterns have proven their usefulness in practice but can't meet all user needs. To fill this capability gap, computer graphics technology has employed analytical models of sky-dome luminance patterns for more than two decades. For technical reasons, such models deal with only the sky dome's appearance, though, and exclude the solar disc. The widely used model proposed by Arcot Preetham and colleagues employed a separately derived analytical formula for adding a solar emitter of suitable radiant intensity. Although this yields reasonable results, the formula is derived in a manner that doesn't exactly match the conditions in their sky-dome model. But the more sophisticated a skylight model is and the more subtly it can represent different conditions, the more the solar radiance should exactly match the skylight's conditions. Toward that end, researchers propose a solar-radiance function that exactly matches a recently published high-quality analytical skylight model.

  10. A computational protocol for the study of circularly polarized phosphorescence and circular dichroism in spin-forbidden absorption.

    PubMed

    Kamiński, Maciej; Cukras, Janusz; Pecul, Magdalena; Rizzo, Antonio; Coriani, Sonia

    2015-07-15

    We present a computational methodology to calculate the intensity of circular dichroism (CD) in spin-forbidden absorption and of circularly polarized phosphorescence (CPP) signals, a manifestation of the optical activity of the triplet-singlet transitions in chiral compounds. The protocol is based on the response function formalism and is implemented at the level of time-dependent density functional theory. It has been employed to calculate the spin-forbidden circular dichroism and circularly polarized phosphorescence signals of valence n → π* and n ← π* transitions, respectively, in several chiral enones and diketones. Basis set effects in the length and velocity gauge formulations have been explored, and the accuracy achieved when employing approximate (mean-field and effective nuclear charge) spin-orbit operators has been investigated. CPP is shown to be a sensitive probe of the triplet excited state structure. In many cases the sign of the spin-forbidden CD and CPP signals are opposite. For the β,γ-enones under investigation, where there are two minima on the lowest triplet excited state potential energy surface, each minimum exhibits a CPP signal of a different sign.

  11. Vibrational spectroscopy and theoretical studies on 2,4-dinitrophenylhydrazine

    NASA Astrophysics Data System (ADS)

    Chiş, V.; Filip, S.; Miclăuş, V.; Pîrnău, A.; Tănăselia, C.; Almăşan, V.; Vasilescu, M.

    2005-06-01

    In this work we report a combined experimental and theoretical study on the molecular and vibrational structure of 2,4-dinitrophenylhydrazine (DNPH). FT-IR, FT-IR/ATR and Raman spectra of normal and deuterated DNPH have been recorded and analyzed in order to get new insights into the molecular structure and properties of this molecule, with particular emphasis on its intra- and intermolecular hydrogen bonds (HBs). For computational purposes we used density functional theory (DFT) methods, with B3LYP and BLYP exchange-correlation functionals, in conjunction with the 6-31G(d) basis set. All experimental vibrational bands have been discussed and assigned to normal modes on the basis of DFT calculations and isotopic shifts and by comparison to other dinitro-substituted compounds [V. Chiş, Chem. Phys., 300 (2004) 1]. To aid the mode assignments, we relied on a direct comparison between experimental and calculated spectra, considering both the frequency sequence and the intensity pattern of the observed and computed vibrational bands. It is also shown that the semiempirical AM1 method predicts geometrical parameters and vibrational frequencies related to the HBs in surprisingly good agreement with experiment.

  12. Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei

    2013-03-01

    With the development of international wireless communication standards, the computational requirements for baseband signal processors are increasing. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software-defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, the large amounts of data processed in parallel in computation-intensive functions foster the development of single instruction multiple data (SIMD) architectures in SDR platforms. A new way must therefore be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in the machine description language LISA. LISA is an instruction set architecture description language that enables rapid modeling at the architectural level. To evaluate the proposed processor, three common baseband functions (FFT, FIR digital filtering and matrix multiplication) were mapped onto the SDR platform. Analysis showed that the SDR processor achieved a performance boost of up to 47.1% relative to the competing processor.

  13. The Prediction and Analysis of Jet Flows and Scattered Turbulent Mixing Noise about Flight Vehicle Airframes

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2014-01-01

    Jet flows interacting with nearby surfaces exhibit a complex behavior in which acoustic and aerodynamic characteristics are altered. The physical understanding and prediction of these characteristics are essential to designing future low noise aircraft. A new approach is created for predicting scattered jet mixing noise that utilizes an acoustic analogy and steady Reynolds-averaged Navier-Stokes solutions. A tailored Green's function accounts for the propagation of mixing noise about the airframe and is calculated numerically using a newly developed ray tracing method. The steady aerodynamic statistics, associated unsteady sound source, and acoustic intensity are examined as jet conditions are varied about a large flat plate. A non-dimensional number is proposed to estimate the effect of the aerodynamic noise source relative to jet operating condition and airframe position. The steady Reynolds-averaged Navier-Stokes solutions, acoustic analogy, tailored Green's function, non-dimensional number, and predicted noise are validated with a wide variety of measurements. The combination of the developed theory, ray tracing method, and careful implementation in a stand-alone computer program results in an approach that is more first-principles oriented than alternatives, computationally efficient, and captures the relevant physics of fluid-structure interaction.

  14. The Prediction and Analysis of Jet Flows and Scattered Turbulent Mixing Noise About Flight Vehicle Airframes

    NASA Technical Reports Server (NTRS)

    Miller, Steven A.

    2014-01-01

    Jet flows interacting with nearby surfaces exhibit a complex behavior in which acoustic and aerodynamic characteristics are altered. The physical understanding and prediction of these characteristics are essential to designing future low noise aircraft. A new approach is created for predicting scattered jet mixing noise that utilizes an acoustic analogy and steady Reynolds-averaged Navier-Stokes solutions. A tailored Green's function accounts for the propagation of mixing noise about the airframe and is calculated numerically using a newly developed ray tracing method. The steady aerodynamic statistics, associated unsteady sound source, and acoustic intensity are examined as jet conditions are varied about a large flat plate. A non-dimensional number is proposed to estimate the effect of the aerodynamic noise source relative to jet operating condition and airframe position. The steady Reynolds-averaged Navier-Stokes solutions, acoustic analogy, tailored Green's function, non-dimensional number, and predicted noise are validated with a wide variety of measurements. The combination of the developed theory, ray tracing method, and careful implementation in a stand-alone computer program results in an approach that is more first-principles oriented than alternatives, computationally efficient, and captures the relevant physics of fluid-structure interaction.

  15. pH during non-synaptic epileptiform activity-computational simulations.

    PubMed

    Rodrigues, Antônio Márcio; Santos, Luiz Eduardo Canton; Covolan, Luciene; Hamani, Clement; de Almeida, Antônio-Carlos Guimarães

    2015-09-02

    The excitability of neuronal networks is strongly modulated by changes in pH. The origin of these changes, however, is still under debate. The high complexity of neural systems justifies the use of computational simulation to investigate mechanisms that are possibly involved. Simulated neuronal activity includes non-synaptic epileptiform events (NEA) induced in hippocampal slices perfused with high-K(+) and zero-Ca(2+), therefore in the absence of the synaptic circuitry. A network of functional units composes the NEA model. Each functional unit represents one interface of neuronal/extracellular space/glial segments. Each interface contains transmembrane ionic transports, such as ionic channels, cotransporters, exchangers and pumps. Neuronal interconnections are mediated by gap-junctions, electric field effects and extracellular ionic fluctuations modulated by extracellular electrodiffusion. Mechanisms investigated are those that change intracellular and extracellular ionic concentrations and are able to affect [H(+)]. Our simulations suggest that the intense fluctuations in intra and extracellular concentrations of Na(+), K(+) and Cl(-) that accompany NEA are able to affect the combined action of the Na(+)/H(+) exchanger (NHE), [HCO(-)(3)]/Cl(-) exchanger (HCE), H(+) pump and the catalytic activity of intra and extracellular carbonic anhydrase. Cellular volume changes and extracellular electrodiffusion are responsible for modulating pH.

  16. Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data

    NASA Astrophysics Data System (ADS)

    Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.

    2007-03-01

    Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor-intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics, which capture the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans lie in the range [-1024, 1024]. Calculation of second order statistics on this full range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
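
    As a point of reference for the method described above, here is a minimal sketch of optimal nonlinear histogram binning by dynamic programming (this is not the authors' code; the cost function, a histogram-weighted within-bin squared error, is an assumption chosen for illustration):

      import numpy as np

      def optimal_bins(hist, K):
          """hist: counts per gray level; returns the K-1 interior bin boundaries."""
          hist = np.asarray(hist, dtype=float)
          L = len(hist)
          levels = np.arange(L, dtype=float)
          # prefix sums give O(1) within-bin squared-error queries
          w = np.concatenate(([0.0], np.cumsum(hist)))
          wx = np.concatenate(([0.0], np.cumsum(hist * levels)))
          wx2 = np.concatenate(([0.0], np.cumsum(hist * levels ** 2)))

          def cost(i, j):  # weighted SSE of grouping levels i..j-1 into one bin
              n = w[j] - w[i]
              if n == 0.0:
                  return 0.0
              s, s2 = wx[j] - wx[i], wx2[j] - wx2[i]
              return s2 - s * s / n

          D = np.full((K + 1, L + 1), np.inf)  # D[k, j]: best cost of first j levels in k bins
          D[0, 0] = 0.0
          back = np.zeros((K + 1, L + 1), dtype=int)
          for k in range(1, K + 1):
              for j in range(k, L + 1):
                  for i in range(k - 1, j):
                      c = D[k - 1, i] + cost(i, j)
                      if c < D[k, j]:
                          D[k, j], back[k, j] = c, i
          bounds, j = [], L
          for k in range(K, 0, -1):  # walk the back-pointers to recover the cut points
              j = back[k, j]
              bounds.append(j)
          return sorted(bounds)[1:]

    The dynamic program runs in O(K L^2) time, which is cheap for the roughly 2000 gray levels involved and negligible next to the texture-feature computation it enables.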

  17. Fermilab | Science at Fermilab | Experiments & Projects | Intensity

    Science.gov Websites

  18. A memory efficient implementation scheme of Gauss error function in a Laguerre-Volterra network for neuroprosthetic devices

    NASA Astrophysics Data System (ADS)

    Li, Will X. Y.; Cui, Ke; Zhang, Wei

    2017-04-01

    A cognitive neural prosthesis is a man-made device which can be used to restore or compensate for lost human cognitive modalities. The generalized Laguerre-Volterra (GLV) network serves as a robust mathematical underpinning for the development of such prosthetic instruments. In this paper, a hardware implementation scheme of the Gauss error function for the GLV network targeting reconfigurable platforms is reported. Numerical approximations are formulated which transform the computation of this nonelementary function into combinational operations on elementary functions, so that memory-intensive look-up table (LUT) based approaches can be circumvented. The computational precision is made adjustable through an error compensation scheme, which is proposed based on experimental observation of the mathematical characteristics of the error trajectory. The precision can be further customized by exploiting the run-time characteristics of the reconfigurable system. Compared to a polynomial-expansion-based implementation scheme, the utilization of slice LUTs, occupied slices, and DSP48E1s on a Xilinx XC6VLX240T field-programmable gate array decreased by 94.2%, 94.1%, and 90.0%, respectively. Compared to the look-up-table-based scheme, 1.0 × 10^17 bits of storage can be spared under a maximum allowable error of 1.0 × 10^-3. The proposed implementation scheme can be employed in the study of large-scale neural ensemble activity and in the design and development of neural prosthetic devices.
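
    For context, one generic way to replace a look-up table by elementary operations (a textbook route, not necessarily the exact scheme of the paper) is the Abramowitz and Stegun rational-times-exponential approximation of erf, whose absolute error stays below about 1.5e-7:

      import math

      def erf_approx(x):
          sign = -1.0 if x < 0 else 1.0  # erf is an odd function
          x = abs(x)
          t = 1.0 / (1.0 + 0.3275911 * x)
          poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
                     t * (-1.453152027 + t * 1.061405429))))
          return sign * (1.0 - poly * math.exp(-x * x))

      for v in (-2.0, -0.5, 0.0, 0.7, 1.5, 3.0):  # quick check against math.erf
          assert abs(erf_approx(v) - math.erf(v)) < 2e-7

    On an FPGA the same idea maps to multiply-add logic plus one exponential evaluation instead of a large memory block, which is the kind of trade-off the abstract quantifies.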

  19. Quantitative fluorescence correlation spectroscopy on DNA in living cells

    NASA Astrophysics Data System (ADS)

    Hodges, Cameron; Kafle, Rudra P.; Meiners, Jens-Christian

    2017-02-01

    FCS is a fluorescence technique conventionally used to study the kinetics of fluorescent molecules in a dilute solution. Being a non-invasive technique, it is now drawing increasing interest for the study of more complex systems like the dynamics of DNA or proteins in living cells. Unlike in an ordinary dye solution, the dynamics of macromolecules like proteins or entangled DNA in crowded environments is often slow and subdiffusive in nature. This in turn leads to longer residence times of the attached fluorophores in the excitation volume of the microscope, and artifacts from photobleaching abound that can easily obscure the signature of the molecular dynamics of interest and make quantitative analysis challenging. We discuss methods and procedures to make FCS applicable to quantitative studies of the dynamics of DNA in live prokaryotic and eukaryotic cells. The intensity autocorrelation function is computed from weighted arrival times of the photons on the detector in a way that maximizes the information content while simultaneously correcting for the effect of photobleaching, yielding an autocorrelation function that reflects only the underlying dynamics of the sample. This autocorrelation function in turn is used to calculate the mean square displacement of the fluorophores attached to DNA. The displacement data are more amenable to further quantitative analysis than the raw correlation functions. By using a suitable integral transform of the mean square displacement, we can then determine the viscoelastic moduli of the DNA in its cellular environment. The entire analysis procedure is extensively calibrated and validated using model systems and computational simulations.
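
    For reference, a minimal sketch of the quantity being discussed, stripped of the photon weighting and photobleaching correction that are the actual contribution of this work, is the normalized intensity autocorrelation computed from binned photon arrival times:

      import numpy as np

      def intensity_autocorrelation(arrival_times, bin_width, max_lag_bins):
          """arrival_times: photon time stamps in seconds; returns (lag times, G(tau))."""
          t = np.asarray(arrival_times, dtype=float)
          edges = np.arange(0.0, t.max() + bin_width, bin_width)
          counts = np.histogram(t, bins=edges)[0].astype(float)  # intensity trace I(t)
          fluct = counts - counts.mean()
          lags = np.arange(1, max_lag_bins + 1)
          g = np.array([np.mean(fluct[:-k] * fluct[k:]) for k in lags]) / counts.mean() ** 2
          return lags * bin_width, g

    The mean square displacement and the viscoelastic moduli mentioned above are then obtained from G(tau) in further processing steps not shown here.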

  20. On Functional Module Detection in Metabolic Networks

    PubMed Central

    Koch, Ina; Ackermann, Jörg

    2013-01-01

    Functional modules of metabolic networks are essential for understanding the metabolism of an organism as a whole. With the vast amount of experimental data and the construction of complex and large-scale, often genome-wide, models, the computer-aided identification of functional modules becomes more and more important. Since steady states play a key role in biology, many methods have been developed in that context, for example, elementary flux modes, extreme pathways, transition invariants and place invariants. Metabolic networks can also be studied from the point of view of graph theory, and algorithms for graph decomposition have been applied for the identification of functional modules. A prominent and currently intensively discussed class of graph-theoretical methods addresses Q-modularity. In this paper, we recall known concepts of module detection based on the steady-state assumption, focusing on transition invariants (elementary modes) and their computation as minimal solutions of systems of Diophantine equations. We present the Fourier-Motzkin algorithm in detail. Afterwards, we introduce Q-modularity as an example of a useful non-steady-state method and describe its application to metabolic networks. To illustrate and discuss the concepts of invariants and Q-modularity, we use part of the central carbon metabolism in potato tubers (Solanum tuberosum) as a running example. The intention of the paper is to give a compact presentation of known steady-state concepts from a graph-theoretical viewpoint in the context of network decomposition and reduction and to introduce the application of Q-modularity to metabolic Petri net models. PMID:24958145
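
    For orientation, the Q-modularity of a given partition of an undirected graph is Q = sum_c [ L_c/m - (d_c/(2m))^2 ], where m is the total number of edges, L_c the number of edges inside community c and d_c the total degree of the nodes in c. A small self-contained illustration (the toy graph and partition are arbitrary):

      from collections import defaultdict

      def modularity(edges, community_of):
          m = len(edges)
          intra = defaultdict(int)       # edges with both endpoints in the same community
          degree_sum = defaultdict(int)  # total degree per community
          for u, v in edges:
              degree_sum[community_of[u]] += 1
              degree_sum[community_of[v]] += 1
              if community_of[u] == community_of[v]:
                  intra[community_of[u]] += 1
          return sum(intra[c] / m - (degree_sum[c] / (2 * m)) ** 2 for c in degree_sum)

      # two triangles joined by a single bridge edge, grouped into communities A and B
      edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
      print(modularity(edges, {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}))  # about 0.36

    Module-detection methods of the kind discussed in the paper search over partitions to maximize this quantity.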

  1. Automatic evaluation of interferograms

    NASA Technical Reports Server (NTRS)

    Becker, F.

    1982-01-01

    A system for the evaluation of interference patterns was developed. For digitizing and processing the interferograms from classical and holographic interferometers, a picture analysis system based upon a computer with a television digitizer was installed. Depending on the quality of the interferograms, four different picture enhancement operations may be used: signal averaging, spatial smoothing, subtraction of the overlaid intensity function, and removal of distortion patterns using a spatial filtering technique in the frequency spectrum of the interferograms. The extraction of fringe loci from the digitized interferograms is performed by a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections, which appeared where there was insufficient contrast in the holograms. The reconstruction of the object function from the fringe field uses a least-squares approximation with a spline fit. Applications are given.
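
    A rough sketch of a floating-threshold fringe extraction (assuming a sliding-window local mean as the threshold, which may differ in detail from the method used in the paper): a pixel is assigned to a dark fringe when its intensity falls below the local mean of its neighborhood.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fringe_mask(interferogram, window=31, offset=0.0):
          """interferogram: 2-D intensity array; returns a boolean mask of dark-fringe loci."""
          img = np.asarray(interferogram, dtype=float)
          local_mean = uniform_filter(img, size=window)   # sliding-window background estimate
          return img < (local_mean - offset)

    Because the threshold follows the local background, the extraction tolerates the slowly varying overlaid intensity function that the enhancement steps above try to remove.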

  2. Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1996-03-01

    Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference of Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
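
    To make the QMF analysis/synthesis split concrete, here is a toy one-level decomposition with the Haar pair, the simplest quadrature mirror filters; the processors above perform the analogous two-dimensional split with Gaussian and difference-of-Gaussian filters.

      import numpy as np

      def haar_analysis(x):  # x must have even length
          x = np.asarray(x, dtype=float)
          approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass branch
          detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (mirror) branch
          return approx, detail

      def haar_synthesis(approx, detail):
          x = np.empty(2 * len(approx))
          x[0::2] = (approx + detail) / np.sqrt(2.0)
          x[1::2] = (approx - detail) / np.sqrt(2.0)
          return x

      x = np.array([4.0, 6.0, 10.0, 12.0, 1.0, 3.0, 0.0, 2.0])
      a, d = haar_analysis(x)
      assert np.allclose(haar_synthesis(a, d), x)  # perfect reconstruction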

  3. QPROT: Statistical method for testing differential expression using protein-level intensity data in label-free quantitative proteomics.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I

    2015-11-03

    We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for analysis of continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Differential expression of each protein is determined from a standardized Z-statistic derived from the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the clinical proteomic technology assessment for cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework with an accompanying software tool for comparative quantitative proteomics analysis. It features various extensions of the QSPEC method, originally built for spectral count data analysis, including a probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics data, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
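
    A loose sketch of the general recipe, not the QPROT model itself (the shrinkage rule and the normal approximation below are simplifications chosen for illustration): each protein's log fold change is standardized by a variance shrunk toward the across-protein average, and the resulting p-values are turned into an FDR estimate.

      import numpy as np
      from scipy import stats

      def simple_differential_expression(log_int_a, log_int_b, prior_weight=5.0):
          """log_int_a, log_int_b: proteins x replicates arrays of log intensities (NaN allowed)."""
          lfc = np.nanmean(log_int_b, axis=1) - np.nanmean(log_int_a, axis=1)
          var = np.nanvar(log_int_a, axis=1, ddof=1) + np.nanvar(log_int_b, axis=1, ddof=1)
          n = np.sum(~np.isnan(log_int_a), axis=1) + np.sum(~np.isnan(log_int_b), axis=1)
          var_shrunk = (prior_weight * np.nanmean(var) + n * var) / (prior_weight + n)
          z = lfc / np.sqrt(var_shrunk / n)
          p = 2.0 * stats.norm.sf(np.abs(z))
          order = np.argsort(p)                      # Benjamini-Hochberg adjustment
          ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
          fdr = np.empty_like(p)
          fdr[order] = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
          return lfc, z, fdr

    QPROT itself replaces the ad hoc shrinkage with a full posterior for the log fold change and an empirical Bayes FDR, as described above.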

  4. EMPIRICAL DETERMINATION OF EINSTEIN A-COEFFICIENT RATIOS OF BRIGHT [Fe II] LINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giannini, T.; Antoniucci, S.; Nisini, B.

    The Einstein spontaneous rates (A-coefficients) of Fe+ lines have been computed by several authors with results that differ from each other by up to 40%. Consequently, models for line emissivities suffer from uncertainties that in turn affect the determination of the physical conditions at the base of line excitation. We provide an empirical determination of the A-coefficient ratios of bright [Fe II] lines that would represent both a valid benchmark for theoretical computations and a reference for the physical interpretation of the observed lines. With the ESO-Very Large Telescope X-shooter instrument between 3000 Å and 24700 Å, we obtained a spectrum of the bright Herbig-Haro object HH 1. We detect around 100 [Fe II] lines, some of which have signal-to-noise ratios ≥100. Among these latter lines, we selected those emitted by the same level, whose dereddened intensity ratios are direct functions of the Einstein A-coefficient ratios. From the same X-shooter spectrum, we obtained an accurate estimate of the extinction toward HH 1 through intensity ratios of atomic species, H I recombination lines and H2 ro-vibrational transitions. We provide seven reliable A-coefficient ratios between bright [Fe II] lines, which are compared with the literature determinations. In particular, the A-coefficient ratios involving the brightest near-infrared lines (λ12570/λ16440 and λ13209/λ16440) are in better agreement with the predictions of the Quinet et al. relativistic Hartree-Fock model. However, none of the theoretical models predicts A-coefficient ratios in agreement with all of our determinations. We also show that literature data of near-infrared intensity ratios agree better with our determinations than with theoretical expectations.
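
    The relation being exploited is worth spelling out: for two optically thin lines that share the same upper level, the emitted energy fluxes are proportional to the A-coefficient times the photon energy, so the dereddened intensity ratio is I(λ1)/I(λ2) = (A1 ν1)/(A2 ν2) = (A1/A2)(λ2/λ1), and therefore A1/A2 = [I(λ1)/I(λ2)] × (λ1/λ2), independently of the level population and of the gas conditions. For the λ12570/λ16440 pair quoted above, for example, a measured energy-intensity ratio R translates into A(12570)/A(16440) = R × (12570/16440), i.e. about 0.765 R.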

  5. Studies Of Oxidation And Thermal Reduction Of The Cu(100) Surface Using Positron Annihilation Induced Auger Electron Spectroscopy

    NASA Astrophysics Data System (ADS)

    Fazleev, N. G.; Nadesalingam, M. P.; Maddox, W.; Weiss, A. H.

    2011-06-01

    Positron annihilation induced Auger electron spectroscopy (PAES) measurements from the surface of an oxidized Cu(100) single crystal show a large increase in the intensity of the annihilation induced Cu M2,3VV Auger peak as the sample is subjected to a series of isochronal anneals in vacuum up to annealing temperature 300 °C. The PAES intensity then decreases monotonically as the annealing temperature is increased to ˜550 °C. Experimental positron annihilation probabilities with Cu 3p and O 1s core electrons are estimated from the measured intensities of the positron annihilation induced Cu M2,3VV and O KLL Auger transitions. PAES results are analyzed by performing calculations of positron surface states and annihilation probabilities of the surface-trapped positrons with relevant core electrons taking into account the charge redistribution at the surface and various surface structures associated with low and high oxygen coverages. The variations in atomic structure and chemical composition of the topmost layers of the oxidized Cu(100) surface are found to affect localization and spatial extent of the positron surface state wave function. The computed positron binding energy and annihilation characteristics reveal their sensitivity to charge transfer effects, atomic structure and chemical composition of the topmost layers of the oxidized Cu(100) surface. Theoretical positron annihilation probabilities with Cu 3p and O 1s core electrons computed for the oxidized Cu(100) surface are compared with experimental ones. The obtained results provide a demonstration of thermal reduction of the copper oxide surface after annealing at 300 °C followed by re-oxidation of the Cu(100) surface at higher annealing temperatures presumably due to diffusion of subsurface oxygen to the surface.

  6. Association between increased serum d-serine and cognitive gains induced by intensive cognitive training in schizophrenia.

    PubMed

    Panizzutti, Rogerio; Fisher, Melissa; Garrett, Coleman; Man, Wai Hong; Sena, Walter; Madeira, Caroline; Vinogradov, Sophia

    2018-04-23

    Neuroscience-guided cognitive training induces significant improvement in cognition in schizophrenia subjects, but the biological mechanisms associated with these changes are unknown. In animals, intensive cognitive activity induces increased brain levels of the NMDA-receptor co-agonist d-serine, a molecular system that plays a role in learning-induced neuroplasticity and that may be hypoactive in schizophrenia. Here, we investigated whether training-induced gains in cognition were associated with increases in serum d-serine in outpatients with schizophrenia. Ninety patients with schizophrenia and 53 healthy controls were assessed on baseline serum d-serine, l-serine, and glycine. Schizophrenia subjects performed neurocognitive tests and were assigned to 50 h of either cognitive training of auditory processing systems (N = 47) or a computer games control condition (N = 43), followed by reassessment of cognition and serum amino acids. At study entry, the mean serum d-serine level was significantly lower in schizophrenia subjects vs. healthy subjects, while the glycine levels were significantly higher. There were no significant changes in these measures at a group level after the intervention. However, in the active training group, increased d-serine was significantly and positively correlated with improvements in global cognition and in Verbal Learning. No such associations were observed in the computer games control subjects, and no such associations were found for glycine. d-Serine may be involved in the neurophysiologic changes induced by cognitive training in schizophrenia. Pharmacologic strategies that target d-serine co-agonism of NMDA-receptor functioning may provide a mechanism for enhancing the behavioral effects of intensive cognitive training. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Studies Of Oxidation And Thermal Reduction Of The Cu(100) Surface Using Positron Annihilation Induced Auger Electron Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fazleev, N. G.; Department of Physics, Kazan State University, Kazan 420008; Nadesalingam, M. P.

    2011-06-01

    Positron annihilation induced Auger electron spectroscopy (PAES) measurements from the surface of an oxidized Cu(100) single crystal show a large increase in the intensity of the annihilation induced Cu M2,3VV Auger peak as the sample is subjected to a series of isochronal anneals in vacuum up to annealing temperature 300 °C. The PAES intensity then decreases monotonically as the annealing temperature is increased to ~550 °C. Experimental positron annihilation probabilities with Cu 3p and O 1s core electrons are estimated from the measured intensities of the positron annihilation induced Cu M2,3VV and O KLL Auger transitions. PAES results are analyzed by performing calculations of positron surface states and annihilation probabilities of the surface-trapped positrons with relevant core electrons taking into account the charge redistribution at the surface and various surface structures associated with low and high oxygen coverages. The variations in atomic structure and chemical composition of the topmost layers of the oxidized Cu(100) surface are found to affect localization and spatial extent of the positron surface state wave function. The computed positron binding energy and annihilation characteristics reveal their sensitivity to charge transfer effects, atomic structure and chemical composition of the topmost layers of the oxidized Cu(100) surface. Theoretical positron annihilation probabilities with Cu 3p and O 1s core electrons computed for the oxidized Cu(100) surface are compared with experimental ones. The obtained results provide a demonstration of thermal reduction of the copper oxide surface after annealing at 300 °C followed by re-oxidation of the Cu(100) surface at higher annealing temperatures presumably due to diffusion of subsurface oxygen to the surface.

  8. Computer Activities for Persons With Dementia.

    PubMed

    Tak, Sunghee H; Zhang, Hongmei; Patel, Hetal; Hong, Song Hee

    2015-06-01

    The study examined participant's experience and individual characteristics during a 7-week computer activity program for persons with dementia. The descriptive study with mixed methods design collected 612 observational logs of computer sessions from 27 study participants, including individual interviews before and after the program. Quantitative data analysis included descriptive statistics, correlational coefficients, t-test, and chi-square. Content analysis was used to analyze qualitative data. Each participant averaged 23 sessions and 591min for 7 weeks. Computer activities included slide shows with music, games, internet use, and emailing. On average, they had a high score of intensity in engagement per session. Women attended significantly more sessions than men. Higher education level was associated with a higher number of different activities used per session and more time spent on online games. Older participants felt more tired. Feeling tired was significantly correlated with a higher number of weeks with only one session attendance per week. More anticholinergic medications taken by participants were significantly associated with a higher percentage of sessions with disengagement. The findings were significant at p < .05. Qualitative content analysis indicated tailoring computer activities appropriate to individual's needs and functioning is critical. All participants needed technical assistance. A framework for tailoring computer activities may provide guidance on developing and maintaining treatment fidelity of tailored computer activity interventions among persons with dementia. Practice guidelines and education protocols may assist caregivers and service providers to integrate computer activities into homes and aging services settings. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Design of Web-based Management Information System for Academic Degree & Graduate Education

    NASA Astrophysics Data System (ADS)

    Duan, Rui; Zhang, Mingsheng

    For any organization, the management information system is not only a computer-based human-machine system that supports the administrative supervisor, but also an open technological system facing society. Beyond gathering, transmitting and storing information, it should provide functions for interaction between the organization and its environment. Starting from the ideas of contingency theory, the authors design a web-based management information system for academic degree & graduate education, based on an analysis of the workflow of the domestic academic degree and graduate education system. The application of the system is also briefly introduced in this paper.

  10. Light scattering properties of spheroidal particles

    NASA Technical Reports Server (NTRS)

    Asano, S.

    1979-01-01

    In the present paper, the light scattering characteristics of spheroidal particles are evaluated within the framework of a scattering theory developed for a homogeneous isotropic spheroid. This approach is shown to be well suited for computing the scattering quantities of spheroidal particles of fairly large sizes (up to a size parameter of 30). The effects of particle size, shape, index of refraction, and orientation on the scattering efficiency factors and the scattering intensity functions are studied and interpreted physically. It is shown that, in the case of oblique incidence, the scattering properties of a long slender prolate spheroid resemble those of an infinitely long circular cylinder.

  11. Dynamical structure factor of the J1-J2 Heisenberg model in one dimension: The variational Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Ferrari, Francesco; Parola, Alberto; Sorella, Sandro; Becca, Federico

    2018-06-01

    The dynamical spin structure factor is computed within a variational framework to study the one-dimensional J1-J2 Heisenberg model. Starting from Gutzwiller-projected fermionic wave functions, the low-energy spectrum is constructed from two-spinon excitations. The direct comparison with Lanczos calculations on small clusters demonstrates the excellent description of both gapless and gapped (dimerized) phases, including incommensurate structures for J2/J1 > 0.5. Calculations on large clusters show how the intensity evolves as the frustration ratio is increased and give an unprecedentedly accurate characterization of the dynamical properties of (nonintegrable) frustrated spin models.

  12. Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.

    PubMed

    Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing

    2013-11-01

    The recent idea of logical stochastic resonance, here induced by non-Gaussian noise, is verified in synthetic gene networks. We realize switching between two kinds of logic gates at an optimal, moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, and thus additional information-processing capacity, we obtain two complementary logic gates in a two-dimensional toggle-switch model and realize the transformation between the two logic gates by changing different parameters. These simulation results contribute to improving the computational power and functionality of the networks.
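
    A bare-bones illustration of the logical-stochastic-resonance idea (simplified to a single overdamped bistable element driven by Gaussian noise, whereas the paper works with gene-network models and non-Gaussian noise): two binary inputs tilt a double-well potential, and at a suitable noise intensity the well in which the system settles reproduces an OR gate.

      import numpy as np

      def lsr_output(i1, i2, noise_sigma=0.4, bias=-0.3, dt=0.01, steps=20000, seed=0):
          rng = np.random.default_rng(seed)
          drive = bias + 0.4 * i1 + 0.4 * i2   # logic inputs 0/1 encoded as a constant drive
          x, trace = 0.0, np.empty(steps)
          for t in range(steps):
              x += (x - x ** 3 + drive) * dt + noise_sigma * np.sqrt(dt) * rng.standard_normal()
              trace[t] = x
          return int(trace[steps // 2:].mean() > 0.0)  # 1 if the system sits in the +x well

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "->", lsr_output(a, b))   # expected to follow the OR truth table

    Making the bias more negative, so that both inputs are needed to tip the element into the positive well, turns the same circuit into an AND gate; this is the kind of parameter-controlled gate switching the abstract describes.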

  13. UK-5 Van Allen belt radiation exposure: A special study to determine the trapped particle intensities on the UK-5 satellite with spatial mapping of the ambient flux environment

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1972-01-01

    Vehicle encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  14. XaNSoNS: GPU-accelerated simulator of diffraction patterns of nanoparticles

    NASA Astrophysics Data System (ADS)

    Neverov, V. S.

    XaNSoNS is open-source software with GPU support, which simulates X-ray and neutron 1D (or 2D) diffraction patterns and pair-distribution functions (PDF) for amorphous or crystalline nanoparticles (up to ∼10^7 atoms) of heterogeneous structural content. Among the multiple parameters of the structure the user may specify atomic displacements, site occupancies, molecular displacements and molecular rotations. The software uses general equations nonspecific to crystalline structures to calculate the scattering intensity. It supports four major standards of parallel computing: MPI, OpenMP, Nvidia CUDA and OpenCL, enabling it to run on various architectures, from CPU-based HPCs to consumer-level GPUs.
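
    The classic structure-agnostic formula of this kind is the Debye scattering equation, I(q) = sum_i sum_j f_i f_j sin(q r_ij)/(q r_ij); whether or not it is the exact form used by XaNSoNS, a direct O(N^2) evaluation (with constant atomic scattering factors assumed for brevity, and without the displacement and occupancy options the program offers) looks like this:

      import numpy as np

      def debye_intensity(positions, f, q_values):
          """positions: (N, 3) atom coordinates; f: (N,) scattering factors; q_values: (M,)."""
          diff = positions[:, None, :] - positions[None, :, :]
          r = np.linalg.norm(diff, axis=-1)       # all pair distances, shape (N, N)
          ff = np.outer(f, f)
          intensity = np.empty(len(q_values))
          for k, q in enumerate(q_values):
              qr = q * r
              sinc = np.where(qr > 0.0, np.sin(qr) / np.where(qr > 0.0, qr, 1.0), 1.0)
              intensity[k] = np.sum(ff * sinc)
          return intensity

    The quadratic cost in the number of atoms is what makes GPU acceleration attractive for particles approaching 10^7 atoms.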

  15. Data Intensive Computing on Amazon Web Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magana-Zook, S. A.

    The Geophysical Monitoring Program (GMP) has spent the past few years building up the capability to perform data intensive computing using what have been referred to as “big data” tools. These big data tools would be used against massive archives of seismic signals (>300 TB) to conduct research not previously possible. Examples of such tools include Hadoop (HDFS, MapReduce), HBase, Hive, Storm, Spark, Solr, and many more by the day. These tools are useful for performing data analytics on datasets that exceed the resources of traditional analytic approaches. To this end, a research big data cluster (“Cluster A”) was set up as a collaboration between GMP and Livermore Computing (LC).

  16. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reshaping of computing styles, for instance how a bioinformatics program running on a supercomputer can read/write data from the federated storage.

  17. On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.

    PubMed

    Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça

    2010-01-01

    This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the security and efficacy of thermal therapies. These points would lead to a broader acceptance of this kind of therapies. Several approaches based on medical imaging technologies were proposed, magnetic resonance imaging (MRI) being appointed as the only one to achieve the acceptable temperature resolutions for hyperthermia purposes. However, MRI intrinsic characteristics (e.g., high instrumentation cost) lead us to use backscattered ultrasound (BSU). Among the different BSU features, temporal echo-shifts have received a major attention. These shifts are due to changes of speed-of-sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods) and the application to temperature estimation in a three-layer phantom is also not reported in literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder in the amount of 2% of the water weight to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and the past temperature values were then considered as possible estimators inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis functions neural networks (RBFNN) with structure optimized by the multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0W/cm(2). Models were trained and selected to estimate temperature at only four intensities, then during the validation phase, the best-fitted models were analyzed in data collected at the eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were achieved. The majority of them present an average maximum absolute error (MAE) inferior to 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C for both the 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) pointed for hyperthermia was fulfilled independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. As the majority of the preferable estimator models, the best one presents 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models present also a reduced computational complexity and then the possibility to be applied in real-time. A non-invasive temperature estimation model, based on soft-computing technique, was proposed for a three-layered phantom. 
The best-achieved estimator models presented an appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far, estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the necessity of a-priory knowledge of the temperature behavior. Data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
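
    To make the estimator's form concrete, a Gaussian radial-basis-function network maps an input vector (here, the cumulative echo-shifts and past temperatures described above) to a temperature through a weighted sum of Gaussian bumps; the centers, widths and weights below are placeholders, whereas in the study they result from training together with the MOGA structure search.

      import numpy as np

      def rbf_predict(x, centers, widths, weights, bias=0.0):
          """x: (n_inputs,) features; centers: (n_neurons, n_inputs); widths, weights: (n_neurons,)."""
          phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
          return float(weights @ phi + bias)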

  18. Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Shafer, Mary Morello

    Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…
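
    A small example of the computationally intensive resampling the digest refers to, a bootstrap confidence interval for a mean (the data and replicate count are arbitrary toy values):

      import numpy as np

      def bootstrap_ci(sample, n_boot=10000, alpha=0.05, seed=1):
          rng = np.random.default_rng(seed)
          sample = np.asarray(sample, dtype=float)
          means = np.array([rng.choice(sample, size=len(sample), replace=True).mean()
                            for _ in range(n_boot)])
          return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

      scores = [72, 85, 90, 68, 77, 95, 81, 73, 88, 79]
      print(bootstrap_ci(scores))  # roughly (76, 86) for this toy sample

    The same few lines, with the statistic swapped out, cover permutation tests and jackknife estimates, which is why resampling pairs so naturally with cheap computing power.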

  19. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  20. BASIC Language Flow Charting Program (BASCHART). Technical Note 3-82.

    ERIC Educational Resources Information Center

    Johnson, Charles C.; And Others

    This document describes BASCHART, a computer aid designed to decipher and automatically flow chart computer program logic; it also provides the computer code necessary for this process. Developed to reduce the labor intensive manual process of producing a flow chart for an undocumented or inadequately documented program, BASCHART will…
