Sample records for distribution function technique

  1. 3D ion velocity distribution function measurement in an electric thruster using laser induced fluorescence tomography

    NASA Astrophysics Data System (ADS)

    Elias, P. Q.; Jarrige, J.; Cucchetti, E.; Cannat, F.; Packan, D.

    2017-09-01

    Measuring the full ion velocity distribution function (IVDF) by non-intrusive techniques can improve our understanding of the ionization processes and beam dynamics at work in electric thrusters. In this paper, a Laser-Induced Fluorescence (LIF) tomographic reconstruction technique is applied to the measurement of the IVDF in the plume of a miniature Hall effect thruster. A setup is developed to move the laser axis along two rotation axes around the measurement volume. The fluorescence spectra taken from different viewing angles are combined using a tomographic reconstruction algorithm to build the complete 3D (in phase space) time-averaged distribution function. For the first time, this technique is used in the plume of a miniature Hall effect thruster to measure the full distribution function of the xenon ions. Two examples of reconstructions are provided, in front of the thruster nose-cone and in front of the anode channel. The reconstruction reveals the features of the ion beam, in particular on the thruster axis where a toroidal distribution function is observed. These findings are consistent with the thruster shape and operation. This technique, which can be used with other LIF schemes, could be helpful in revealing the details of the ion production regions and the beam dynamics. Using a more powerful laser source, the current implementation of the technique could be improved to reduce the measurement time and also to reconstruct the temporal evolution of the distribution function.
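    A sketch of the reconstruction principle may help here: each LIF spectrum recorded along one laser axis is a 1D projection of the velocity distribution, and projections gathered over many angles can be inverted by filtered back projection. The snippet below demonstrates this in 2D on a synthetic distribution; scikit-image's radon/iradon pair stands in for the authors' (unspecified) tomographic algorithm, and all sizes are illustrative.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic 2D slice of an IVDF: an off-axis Gaussian beam component.
v = np.linspace(-1.0, 1.0, 128)
vx, vy = np.meshgrid(v, v)
ivdf = np.exp(-((vx - 0.3)**2 + vy**2) / 0.02)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)  # viewing angles (deg)
sinogram = radon(ivdf, theta=angles)                  # stands in for LIF spectra
recon = iradon(sinogram, theta=angles, filter_name='ramp')

print('max reconstruction error:', np.abs(recon - ivdf).max())
```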

  2. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
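    For concreteness, the three monomodal distribution functions named above can be written out and sanity-checked numerically. The sketch below uses one common parameterization of each; exact forms and parameter symbols vary between papers, so treat these as assumptions rather than the authors' definitions.

```python
import numpy as np

def rosin_rammler(d, d_bar, k):
    """R-R frequency function: characteristic size d_bar, spread parameter k."""
    return (k / d_bar) * (d / d_bar)**(k - 1) * np.exp(-(d / d_bar)**k)

def normal(d, mu, sigma):
    """N-N frequency function."""
    return np.exp(-0.5 * ((d - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

def log_normal(d, mu_g, sigma_g):
    """L-N frequency function: geometric mean mu_g, geometric spread sigma_g."""
    s = np.log(sigma_g)
    return np.exp(-0.5 * (np.log(d / mu_g) / s)**2) / (d * s * np.sqrt(2 * np.pi))

d = np.linspace(0.05, 10.0, 2000)          # particle size grid (micrometres)
for f, args in [(rosin_rammler, (2.0, 3.0)),
                (normal, (2.0, 0.5)),
                (log_normal, (2.0, 1.5))]:
    pdf = f(d, *args)
    total = pdf.sum() * (d[1] - d[0])      # ~1 when the grid covers the support
    print(f"{f.__name__}: integral ~ {total:.3f}")
```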

  3. Covariant extension of the GPD overlap representation at low Fock states

    DOE PAGES

    Chouika, N.; Mezrag, C.; Moutarde, H.; ...

    2017-12-26

    Here, we present a novel approach to compute generalized parton distributions within the lightfront wave function overlap framework. We show how to systematically extend generalized parton distributions computed within the DGLAP region to the ERBL one, fulfilling at the same time both the polynomiality and positivity conditions. We exemplify our method using pion lightfront wave functions inspired by recent results of non-perturbative continuum techniques and algebraic nucleon lightfront wave functions. We also test the robustness of our algorithm on reggeized phenomenological parameterizations. This approach paves the way to a better understanding of the nucleon structure from non-perturbative techniques and to a unification of generalized parton distributions and transverse momentum dependent parton distribution functions phenomenology through lightfront wave functions.

  4. Optimal startup control of a jacketed tubular reactor.

    NASA Technical Reports Server (NTRS)

    Hahn, D. R.; Fan, L. T.; Hwang, C. L.

    1971-01-01

    The optimal startup policy of a jacketed tubular reactor, in which a first-order, reversible, exothermic reaction takes place, is presented. A distributed maximum principle is presented for determining weak necessary conditions for optimality of a diffusional distributed parameter system. A numerical technique is developed for practical implementation of the distributed maximum principle. This involves the sequential solution of the state and adjoint equations, in conjunction with a functional gradient technique for iteratively improving the control function.

  5. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult than for centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions through the use of channelization.

  6. Electron distribution functions in electric field environments

    NASA Technical Reports Server (NTRS)

    Rudolph, Terence H.

    1991-01-01

    The amount of current carried by an electric discharge in its early stages of growth is strongly dependent on its geometrical shape. Discharges with a large number of branches, each funnelling current to a common stem, tend to carry more current than those with fewer branches. The fractal character of typical discharges was simulated using stochastic models based on solutions of the Laplace equation. Extension of these models requires the use of electron distribution functions to describe the behavior of electrons in the undisturbed medium ahead of the discharge. These electrons, interacting with the electric field, determine the propagation of branches in the discharge and the way in which further branching occurs. The first phase in the extension of the referenced models, the calculation of simple electron distribution functions in an air/electric field medium, is discussed. Two techniques are investigated: (1) the solution of the Boltzmann equation in homogeneous, steady state environments, and (2) the use of Monte Carlo simulations. Distribution functions calculated from both techniques are illustrated. Advantages and disadvantages of each technique are discussed.

  7. Detecting background changes in environments with dynamic foreground by separating probability distribution function mixtures using Pearson's method of moments

    NASA Astrophysics Data System (ADS)

    Jenkins, Colleen; Jordan, Jay; Carlson, Jeff

    2007-02-01

    This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed-mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and the background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
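    As a sketch of the separation step: Pearson's method matches the first five sample moments of the intensity histogram to those of a two-Gaussian mixture. Pearson's 1894 reduction to a ninth-degree polynomial is not reproduced here; the moment equations are instead solved numerically, which is the same idea but not necessarily the authors' exact procedure, and all starting values below are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def gauss_raw_moments(mu, s):
    """First five raw moments of a normal distribution N(mu, s^2)."""
    return np.array([
        mu,
        mu**2 + s**2,
        mu**3 + 3*mu*s**2,
        mu**4 + 6*mu**2*s**2 + 3*s**4,
        mu**5 + 10*mu**3*s**2 + 15*mu*s**4,
    ])

def residuals(p, sample_moments):
    """Relative mismatch between mixture moments and sample moments."""
    w, mu1, s1, mu2, s2 = p
    model = w * gauss_raw_moments(mu1, s1) + (1 - w) * gauss_raw_moments(mu2, s2)
    return (model - sample_moments) / sample_moments

rng = np.random.default_rng(0)
# Synthetic pixel intensities: 70% foreground clutter, 30% static background.
x = np.concatenate([rng.normal(100, 15, 7000), rng.normal(180, 10, 3000)])
m = np.array([np.mean(x**k) for k in range(1, 6)])

fit = least_squares(residuals, x0=[0.5, 80, 20, 200, 20], args=(m,),
                    bounds=([0, 0, 1, 0, 1], [1, 255, 100, 255, 100]))
print('w, mu1, s1, mu2, s2 =', fit.x.round(2))
```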

  8. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    The techniques used to derive several statistical wind models are presented. The techniques are based on the properties of the multivariate normal probability distribution function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal wind distribution. By further assuming that the winds at two altitudes are quadrivariate normally distributed, then the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
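    Property (2) is easy to check numerically: the modulus of a zero-mean, equal-variance bivariate normal vector is Rayleigh distributed (non-zero means or unequal variances move the speed distribution away from Rayleigh). A minimal sketch with an assumed component standard deviation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 8.0                               # assumed component std dev (m/s)
u = rng.normal(0.0, sigma, 100_000)       # zonal component
w = rng.normal(0.0, sigma, 100_000)       # meridional component
speed = np.hypot(u, w)                    # modulus of the wind vector

# Compare the empirical speed distribution with Rayleigh(scale=sigma).
print(stats.kstest(speed, 'rayleigh', args=(0.0, sigma)))
```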

  9. An Estimation of the Gamma-Ray Burst Afterglow Apparent Optical Brightness Distribution Function

    NASA Astrophysics Data System (ADS)

    Akerlof, Carl W.; Swan, Heather F.

    2007-12-01

    By using recent publicly available observational data obtained in conjunction with the NASA Swift gamma-ray burst (GRB) mission and a novel data analysis technique, we have been able to make some rough estimates of the GRB afterglow apparent optical brightness distribution function. The results suggest that 71% of all burst afterglows have optical magnitudes with m_R < 22.1 at 1000 s after the burst onset, the dimmest detected object in the data sample. There is a strong indication that the apparent optical magnitude distribution function peaks at m_R ~ 19.5. Such estimates may prove useful in guiding future plans to improve GRB counterpart observation programs. The employed numerical techniques might find application in a variety of other data analysis problems in which the intrinsic distributions must be inferred from a heterogeneous sample.

  10. Improved Tandem Measurement Techniques for Aerosol Particle Analysis

    NASA Astrophysics Data System (ADS)

    Rawat, Vivek Kumar

    Non-spherical, chemically inhomogeneous (complex) nanoparticles are encountered in a number of natural and engineered environments, including combustion systems (which produce highly non-spherical aggregates), reactors used in gas-phase materials synthesis of doped or multicomponent materials, and ambient air. These nanoparticles are often highly diverse in size, composition and shape, and hence require determination of property distribution functions for accurate characterization. This thesis focuses on the development of tandem mobility-mass measurement techniques, coupled with appropriate data inversion routines, to facilitate measurement of two-dimensional size-mass distribution functions while correcting for the non-idealities of the instruments. Chapter 1 provides the detailed background and motivation for the studies performed in this thesis. In Chapter 2, the development of an inversion routine is described which is employed to determine two-dimensional size-mass distribution functions from Differential Mobility Analyzer-Aerosol Particle Mass analyzer tandem measurements. Chapter 3 demonstrates the application of the two-dimensional distribution function to compute the cumulative mass distribution function and also evaluates the validity of this technique by comparing the calculated total mass concentrations to measured values for a variety of aerosols. In Chapter 4, this tandem measurement technique with the inversion routine is employed to analyze colloidal suspensions. Chapter 5 focuses on the application of a transverse modulation ion mobility spectrometer coupled with a mass spectrometer to study the effect of vapor dopants on the mobility shifts of sub-2 nm peptide ion clusters. These mobility shifts are then compared to models based on vapor uptake theories. Finally, Chapter 6 provides a conclusion of all the studies performed in this thesis and discusses future avenues of research.

  11. Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Edmonds, Larry

    2004-01-01

    Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.

  12. A grid spacing control technique for algebraic grid generation methods

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Kudlinski, R. A.; Everton, E. L.

    1982-01-01

    A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.

  13. Analyzing coastal environments by means of functional data analysis

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.

    2017-07-01

    Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transformation to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.

  14. Ozone data and mission sampling analysis

    NASA Technical Reports Server (NTRS)

    Robbins, J. L.

    1980-01-01

    A methodology was developed to analyze discrete data obtained from the global distribution of ozone. Statistical analysis techniques were applied to describe the distribution of data variance in terms of empirical orthogonal functions and components of spherical harmonic models. The effects of uneven data distribution and missing data were considered. Data fill based on the autocorrelation structure of the data is described. Computer coding of the analysis techniques is included.

  15. Reconstruction of fiber grating period profiles by use of Wigner-Ville distributions and spectrograms.

    PubMed

    Azaña, J; Muriel, M A

    2000-12-01

    The grating-period profile and length of an arbitrary fiber Bragg grating structure can be reconstructed from the structure's reflection response by use of a time-frequency signal representation based on the well-known Wigner-Ville distribution and spectrogram. We present a detailed description of this synthesis technique. By means of numerical simulations, the technique is tested with several fiber grating structures. In general, our results show good agreement between exact and reconstructed functions. The technique's advantages and limitations are discussed. We propose and demonstrate the application of the proposed synthesis technique to distributed mechanical strain or temperature sensing.
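    As a minimal illustration of the time-frequency representation involved, the sketch below computes a discrete Wigner-Ville distribution of a test chirp via its analytic signal. It is a textbook implementation with the simplest zero-padded edge handling, not the authors' code.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a real signal x, computed from
    its analytic signal; rows are time samples, columns are frequency bins
    (spaced at half the usual FFT bin width)."""
    z = hilbert(x)
    n = len(z)
    wvd = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)          # zero-padded edge handling
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = z[t + tau] * np.conj(z[t - tau])
        wvd[t] = np.fft.fft(kernel).real
    return wvd

# A linear chirp shows up as a straight ridge in the time-frequency plane.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
chirp = np.cos(2 * np.pi * (20 * t + 40 * t**2))
tfd = wigner_ville(chirp)
print(tfd.shape, tfd.max().round(2))
```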

  16. Functional requirements for an intelligent RPC. [remote power controller for spaceborne electrical distribution system]

    NASA Technical Reports Server (NTRS)

    Aucoin, B. M.; Heller, R. P.

    1990-01-01

    An intelligent remote power controller (RPC) based on microcomputer technology can implement advanced functions for the accurate and secure detection of all types of faults on a spaceborne electrical distribution system. The intelligent RPC will implement conventional protection functions such as overcurrent, under-voltage, and ground fault protection. Advanced functions for the detection of soft faults, which cannot presently be detected, can also be implemented. Adaptive overcurrent protection changes overcurrent settings based on connected load. Incipient and high-impedance fault detection provides early detection of arcing conditions to prevent fires, and to clear and reconfigure circuits before soft faults progress to a hard-fault condition. Power electronics techniques can be used to implement fault current limiting to prevent voltage dips during hard faults. It is concluded that these techniques will enhance the overall safety and reliability of the distribution system.

  17. An effective inversion algorithm for retrieving bimodal aerosol particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming

    2014-12-01

    The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ≫1 and |m-1|≪1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved under the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can serve as versatile distribution functions for retrieving the bimodal aerosol PSD when no a priori information about the PSD is available.

  18. High-pressure pair distribution function (PDF) measurement using high-energy focused x-ray beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Xinguo, E-mail: xhong@bnl.gov; Weidner, Donald J.; Ehm, Lars

    In this paper, we report recent development of the high-pressure pair distribution function (HP-PDF) measurement technique using a focused high-energy X-ray beam coupled with a diamond anvil cell (DAC). The focusing optics consist of a sagittally bent Laue monochromator and Kirkpatrick-Baez (K–B) mirrors. This combination provides a clean high-energy X-ray beam suitable for HP-PDF research. Demonstration of the HP-PDF technique for nanocrystalline platinum under quasi-hydrostatic condition above 30 GPa is presented.

  19. Electron energy distribution function in the divertor region of the COMPASS tokamak during neutral beam injection heating

    NASA Astrophysics Data System (ADS)

    Hasan, E.; Dimitrova, M.; Havlicek, J.; Mitošinková, K.; Stöckel, J.; Varju, J.; Popov, Tsv K.; Komm, M.; Dejarnac, R.; Hacek, P.; Panek, R.; the COMPASS Team

    2018-02-01

    This paper presents the results from swept probe measurements in the divertor region of the COMPASS tokamak in D-shaped, L-mode discharges, with toroidal magnetic field BT = 1.15 T, plasma current Ip = 180 kA and line-average electron densities varying from 2 to 8×10^19 m^-3. Using neutral beam injection heating, the electron energy distribution function is studied before and during the application of the beam. The current-voltage characteristics data are processed using the first-derivative probe technique. This technique allows one to evaluate the plasma potential and the real electron energy distribution function (and hence the electron temperatures and densities). At the low average electron density of 2×10^19 m^-3, the electron energy distribution function is bi-Maxwellian, with a low-energy electron population at temperatures of 4-6 eV and a high-energy electron group at 12-25 eV. As the line-average electron density is increased, the electron temperatures decrease. At line-average electron densities above 7×10^19 m^-3, the electron energy distribution function is found to be Maxwellian with a temperature of 6-8.5 eV. The effect of the neutral beam injection heating power in the divertor region is also studied.

  20. Application of Image Analysis for Characterization of Spatial Arrangements of Features in Microstructure

    NASA Technical Reports Server (NTRS)

    Louis, Pascal; Gokhale, Arun M.

    1995-01-01

    A number of microstructural processes are sensitive to the spatial arrangements of features in microstructure. However, very little attention has been given in the past to experimental measurements of the descriptors of microstructural distance distributions, due to the lack of practically feasible methods. We present a digital image analysis procedure to estimate the microstructural distance distributions. The application of the technique is demonstrated via estimation of the K function, the radial distribution function, and the nearest-neighbor distribution function of hollow spherical carbon particulates in a polymer matrix composite, observed in a metallographic section.

  21. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
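    The representation referred to here is commonly written in Pickands' form. The following display is a standard illustration, with unit Frechet margins and the Gumbel (logistic) family as the worked example; it is not necessarily the parameterization used in the report.

```latex
% Pickands representation with unit Frechet margins F_i(s) = e^{-1/s}:
\[
  F(x,y) = \exp\!\left[-\left(\frac{1}{x}+\frac{1}{y}\right)
                        A\!\left(\frac{y}{x+y}\right)\right],
\]
% where the dependence function A is convex on [0,1] and satisfies
% max(w, 1-w) <= A(w) <= 1. The Gumbel (logistic) family
\[
  A(w) = \left(w^{1/\alpha} + (1-w)^{1/\alpha}\right)^{\alpha},
  \qquad 0 < \alpha \le 1,
\]
% recovers independence at alpha = 1 and gives
\[
  F(x,y) = \exp\!\left[-\left(x^{-1/\alpha} + y^{-1/\alpha}\right)^{\alpha}\right].
\]
```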

  22. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
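    A minimal sketch in the spirit of the report (the original program is not reproduced): the CDF is accumulated in log space to avoid underflow and overflow, and the inverse problem exploits the fact that the CDF decreases monotonically in the Poisson parameter for a fixed count, so bisection suffices.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated in log space to avoid
    underflow/overflow for large lam or k."""
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1)
                 for i in range(k + 1)]
    m = max(log_terms)
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))

def lambda_for_cdf(k, p, lo=1e-9, hi=1e6):
    """Find lam such that P(X <= k; lam) = p, by bisection (the CDF is
    monotonically decreasing in lam for fixed k)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(k, mid) > p:
            lo = mid          # CDF still above target: lam must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(poisson_cdf(600, 550.0))    # stable where a naive sum would overflow
print(lambda_for_cdf(10, 0.95))   # lam giving P(X <= 10) = 0.95
```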

  23. Active distribution network planning considering linearized system loss

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Wang, Mingqiang; Xu, Hao

    2018-02-01

    In this paper, various distribution network planning techniques with distributed generators (DGs) are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of DGs and the topology of the network are fixed. The proposed model optimizes the DG capacities and the distribution line capacities simultaneously through a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is explicitly analyzed. For simplicity, the network loss is approximated as a quadratic function of the difference of voltage phase angles, which is then piecewise linearized; a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with the elaborated linearization technique is tested on the IEEE 33-bus distribution network system.
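    A sketch of the linearization step: because the loss term is convex in the angle difference, the piecewise-linear interpolant through a set of breakpoints equals the upper envelope of the chord lines, and the segment lengths can be chosen freely (here finer near zero, where the operating point usually sits). The breakpoint placement is illustrative, not the paper's scheme.

```python
import numpy as np

# Unequal breakpoints for the angle difference (radians), finer near zero.
breaks = np.array([0.0, 0.02, 0.05, 0.10, 0.20, 0.35])
slopes = breaks[:-1] + breaks[1:]        # chord slope of x**2 on each segment
intercepts = -breaks[:-1] * breaks[1:]   # chord intercept, so chord: s*x + b

def loss_pwl(delta):
    """Piecewise-linear approximation of delta**2 as the max of the chords
    (valid because x**2 is convex)."""
    d = np.abs(np.asarray(delta, dtype=float))
    return np.max(d[..., None] * slopes + intercepts, axis=-1)

d = np.linspace(0.0, 0.35, 8)
print(np.c_[d**2, loss_pwl(d)].round(4))   # exact loss vs. PWL approximation
```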

  24. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider an iterative design of independently operating local quantizers at nodes that should cooperate without interaction to achieve application objectives for distributed estimation systems. We suggest as a new cost function a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first present the analysis that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing, on average, the logarithm of the quantized posterior distribution, which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize the simplified version of the quantized posterior distribution and discuss that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance as compared with typical designs and novel design techniques previously published.

  25. Skin dose mapping for non-uniform x-ray fields using a backscatter point spread function

    NASA Astrophysics Data System (ADS)

    Vijayan, Sarath; Xiong, Zhenyu; Shankar, Alok; Rudin, Stephen; Bednarek, Daniel R.

    2017-03-01

    Beam shaping devices like ROI attenuators and compensation filters modulate the intensity distribution of the x-ray beam incident on the patient. This results in a spatial variation of skin dose due to the variation of primary radiation and also a variation in backscattered radiation from the patient. To determine the backscatter component, backscatter point spread functions (PSFs) are generated using EGS Monte-Carlo software. For this study, PSFs were determined by simulating a 1 mm beam incident on the lateral surface of an anthropomorphic head phantom and a 20 cm thick PMMA block phantom. The backscatter PSFs for the head phantom and PMMA phantom are curve-fitted with a Lorentzian function after being normalized to the primary dose intensity (PSFn). PSFn is convolved with the primary dose distribution to generate the scatter dose distribution, which is added to the primary to obtain the total dose distribution. The backscatter convolution technique is incorporated in the dose tracking system (DTS), which tracks skin dose during fluoroscopic procedures and provides a color map of the dose distribution on a 3D patient graphic model. A convolution technique is developed for the backscatter dose determination for the non-uniformly spaced graphic-model surface vertices. A Gafchromic film validation was performed for shaped x-ray beams generated with an ROI attenuator and with two compensation filters inserted into the field. The total dose distribution calculated by the backscatter convolution technique closely agreed with that measured with the film.
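    A sketch of the convolution step on a regular grid (the paper's extension to non-uniformly spaced surface vertices is not reproduced): a Lorentzian PSF normalized to the primary dose is convolved with the primary dose map, and the result is added back to give the total dose. Grid spacing, PSF width, and the backscatter fraction are assumed values.

```python
import numpy as np
from scipy.signal import fftconvolve

px = 1.0                                   # mm per pixel, assumed
x = np.arange(-50, 51) * px
X, Y = np.meshgrid(x, x)
gamma = 8.0                                # Lorentzian half-width (mm), assumed
psf = 1.0 / (1.0 + (X**2 + Y**2) / gamma**2)
psf *= 0.35 / psf.sum()   # normalize: total backscatter = 35% of primary (assumed)

primary = np.zeros((200, 200))
primary[60:140, 60:140] = 1.0              # ROI-shaped primary field (a.u.)
total = primary + fftconvolve(primary, psf, mode='same')
print('center dose with backscatter:', total[100, 100].round(3))
```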

  26. Method and device for landing aircraft dependent on runway occupancy time

    NASA Technical Reports Server (NTRS)

    Ghalebsaz Jeddi, Babak (Inventor)

    2012-01-01

    A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.

  27. An Overview Of Wideband Signal Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Speiser, Jeffrey M.; Whitehouse, Harper J.

    1989-11-01

    This paper provides a unifying perspective for several narrowband and wideband signal processing techniques. It considers narrowband ambiguity functions and Wigner-Ville distributions, together with the wideband ambiguity function and several proposed approaches to a wideband version of the Wigner-Ville distribution (WVD). A unifying perspective is provided by the methodology of unitary representations and ray representations of transformation groups.

  28. Planar Laser Imaging of Sprays for Liquid Rocket Studies

    NASA Technical Reports Server (NTRS)

    Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.

    1990-01-01

    A planar laser imaging technique which incorporates an optical polarization ratio technique for droplet size measurement was studied. A series of pressure-atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.

  29. Fixed and Data Adaptive Kernels in Cohen’s Class of Time-Frequency Distributions

    DTIC Science & Technology

    1992-09-01

    [Excerpt from the report: signals are translated into their associated analytic form using the techniques discussed in Chapter Four; a MATLAB routine, PS = wvd(data, winlen, step, begin, theend), returns the Wigner-Ville time-frequency distribution of the input data; Chapter IV covers fixed-kernel distributions, including the Wigner-Ville distribution.]

  30. Simulation of particle size distributions in Polar Mesospheric Clouds from Microphysical Models

    NASA Astrophysics Data System (ADS)

    Thomas, G. E.; Merkel, A.; Bardeen, C.; Rusch, D. W.; Lumpe, J. D.

    2009-12-01

    The size distribution of ice particles is perhaps the most important observable aspect of microphysical processes in Polar Mesospheric Cloud (PMC) formation and evolution. A conventional technique to derive such information is from optical observation of scattering, either passive solar scattering from photometric or spectrometric techniques, or active backscattering by lidar. We present simulated size distributions from two state-of-the-art models using CARMA sectional microphysics: WACCM/CARMA, in which CARMA is interactively coupled with WACCM3 (Bardeen et al, 2009), and stand-alone CARMA forced by WACCM3 meteorology (Merkel et al, this meeting). Both models provide well-resolved size distributions of ice particles as a function of height, location and time for realistic high-latitude summertime conditions. In this paper we present calculations of the UV scattered brightness at multiple scattering angles as viewed by the AIM Cloud Imaging and Particle Size (CIPS) satellite experiment. These simulations are then considered discretely-sampled “data” for the scattering phase function, which are inverted using a technique (Lumpe et al, this meeting) to retrieve particle size information. We employ a T-matrix scattering code which applies to a wide range of non-sphericity of the ice particles, using the conventional idealized prolate/oblate spheroidal shape. This end-to-end test of the relatively new scattering phase function technique provides insight into both the retrieval accuracy and the information content in passive remote sensing of PMC.

  31. Mathematical Methods for Optical Physics and Engineering

    NASA Astrophysics Data System (ADS)

    Gbur, Gregory J.

    2011-01-01

    1. Vector algebra; 2. Vector calculus; 3. Vector calculus in curvilinear coordinate systems; 4. Matrices and linear algebra; 5. Advanced matrix techniques and tensors; 6. Distributions; 7. Infinite series; 8. Fourier series; 9. Complex analysis; 10. Advanced complex analysis; 11. Fourier transforms; 12. Other integral transforms; 13. Discrete transforms; 14. Ordinary differential equations; 15. Partial differential equations; 16. Bessel functions; 17. Legendre functions and spherical harmonics; 18. Orthogonal functions; 19. Green's functions; 20. The calculus of variations; 21. Asymptotic techniques; Appendices; References; Index.

  32. Calculations of the Electron Energy Distribution Function in a Uranium Plasma by Analytic and Monte Carlo Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bathke, C. G.

    1976-01-01

    Electron energy distribution functions were calculated in a U-235 plasma at 1 atmosphere for various plasma temperatures and neutron fluxes. The distributions are assumed to be a summation of a high energy tail and a Maxwellian distribution. The sources of energetic electrons considered are the fission-fragment induced ionization of uranium and the electron induced ionization of uranium. The calculation of the high energy tail is reduced to an electron slowing down calculation, from the most energetic source to the energy where the electron is assumed to be incorporated into the Maxwellian distribution. The pertinent collisional processes are electron-electron scattering and electron induced ionization and excitation of uranium. Two distinct methods were employed in the calculation of the distributions. One method is based upon the assumption of continuous slowing and yields a distribution inversely proportional to the stopping power. An iteration scheme is utilized to include the secondary electron avalanche. In the other method, a governing equation is derived without assuming continuous electron slowing. This equation is solved by a Monte Carlo technique.

  33. Determination of Distance Distribution Functions by Singlet-Singlet Energy Transfer

    PubMed Central

    Cantor, Charles R.; Pechukas, Philip

    1971-01-01

    The efficiency of energy transfer between two chromophores can be used to define an apparent donor-acceptor distance, which in flexible systems will depend on the R0 of the chromophores. If efficiency is measured as a function of R0, it will be possible to determine the actual distribution function of donor-acceptor distances. Numerical procedures are described for extracting this information from experimental data. They should be most useful for distribution functions with mean values from 20-30 Å (2-3 nm). This technique should provide considerably more detailed information on end-to-end distributions of oligomers than has hitherto been available. It should also be useful for describing, in detail, conformational flexibility in other large molecules. PMID:16591942
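    The relation underlying the method is the Förster dependence of transfer efficiency on distance. Written out explicitly (a standard relation consistent with, but not quoted from, the paper):

```latex
% Transfer efficiency for a fixed donor-acceptor distance r:
\[
  E(r) \;=\; \frac{R_0^{6}}{R_0^{6} + r^{6}}
\]
% For a flexible system the measured efficiency averages over the distance
% distribution P(r); measuring at several R_0 values gives a one-parameter
% family of integral constraints that can be inverted for P(r):
\[
  \langle E \rangle(R_0) \;=\; \int_0^{\infty} P(r)\,
      \frac{R_0^{6}}{R_0^{6} + r^{6}}\,\mathrm{d}r
\]
```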

  34. Green's Functions in Space and Time.

    ERIC Educational Resources Information Center

    Rowe, E. G. Peter

    1979-01-01

    Gives a sketch of some topics in distribution theory that is technically simple, yet provides techniques for handling the partial differential equations satisfied by the most important Green's functions in physics. (Author/GA)

  35. Hawaiian Electric Advanced Inverter Test Plan - Result Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson; Nelson, Austin; Prabakar, Kumaraguru

    This presentation is intended to share the results of lab testing of five PV inverters with the Hawaiian Electric Companies and other stakeholders and interested parties. The tests included baseline testing of advanced inverter grid support functions, as well as distribution circuit-level tests to examine the impact of the PV inverters on simulated distribution feeders using power hardware-in-the-loop (PHIL) techniques.

  36. Plasma potential and electron temperature evaluated by ball-pen and Langmuir probes in the COMPASS tokamak

    NASA Astrophysics Data System (ADS)

    Dimitrova, M.; Popov, Tsv K.; Adamek, J.; Kovačič, J.; Ivanova, P.; Hasan, E.; López-Bruna, D.; Seidl, J.; Vondráček, P.; Dejarnac, R.; Stöckel, J.; Imríšek, M.; Panek, R.; the COMPASS Team

    2017-12-01

    The radial distributions of the main plasma parameters in the scrape-off layer of the COMPASS tokamak are measured during L-mode and H-mode regimes by using both Langmuir and ball-pen probes mounted on a horizontal reciprocating manipulator. The radial profile of the plasma potential derived previously from Langmuir probe data by using the first-derivative probe technique is compared with data derived using ball-pen probes. A good agreement can be seen between the data acquired by the two techniques during the L-mode discharge and during the H-mode regime within the inter-ELM periods. In contrast with the first-derivative probe technique, the ball-pen probe technique does not require a swept voltage and, therefore, the temporal resolution is only limited by the data acquisition system. In the electron temperature evaluation, in the far scrape-off layer and in the limiter shadow, where the electron energy distribution is Maxwellian, the results from both techniques match well. In the vicinity of the last closed flux surface, where the electron energy distribution function is bi-Maxwellian, the ball-pen probe technique results are in agreement with the high-temperature components of the electron distribution only. We also discuss the application of relatively large Langmuir probes placed parallel and perpendicular to the magnetic field lines to study the main plasma parameters. The results obtained by the two types of large probes agree well. They are compared with Thomson scattering data for electron temperatures and densities. The results for the electron densities are also compared with ASTRA code calculations of the electron source due to the ionization of neutrals by fast electrons, and the origin of the bi-Maxwellian electron energy distribution function is briefly discussed.

  37. Nuclear risk analysis of the Ulysses mission

    NASA Astrophysics Data System (ADS)

    Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W.

    An account is given of the method used to quantify the risks associated with the use of a radioisotope thermoelectric generator fueled by Pu-238 dioxide aboard the Space Shuttle-launched Ulysses mission. A Monte Carlo technique is first used to develop probability distributions for the radiological consequences of a range of accident scenarios throughout the mission; the factors affecting those consequences are then identified, together with their probability distributions. The functional relationship among all the factors is then established, and the probability distributions for all factor effects are combined by means of a Monte Carlo technique.

  38. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  39. Solution of QCD⊗QED coupled DGLAP equations at NLO

    NASA Astrophysics Data System (ADS)

    Zarrin, S.; Boroun, G. R.

    2017-09-01

    In this work, we present an analytical solution for the QCD⊗QED coupled Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations at leading order (LO) accuracy in QED and next-to-leading order (NLO) accuracy in perturbative QCD using a double Laplace transform. This technique is applied to obtain the singlet, gluon and photon distribution functions and also the proton structure function. We also obtain the contribution of the photon in the proton at LO and NLO at high energy and successfully compare the proton structure function with HERA data [1] and APFEL results [2]. Comparisons are also made for the singlet and gluon distribution functions with the MSTW results [3]. In addition, the contribution of the photon distribution function inside the proton is compared with the results of MRST [4] and with the contribution of the sea quark distribution functions obtained by MSTW [3] and CTEQ6M [5].

  40. Modelling and validation of particle size distributions of supported nanoparticles using the pair distribution function technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.

    The particle size of supported catalysts is a key characteristic for determining structure-property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.

  41. Differential subcellular distribution of ion channels and the diversity of neuronal function.

    PubMed

    Nusser, Zoltan

    2012-06-01

    Following the astonishing molecular diversity of voltage-gated ion channels that was revealed in the past few decades, the ion channel repertoire expressed by neurons has been implicated as the major factor governing their functional heterogeneity. Although the molecular structure of ion channels is a key determinant of their biophysical properties, their subcellular distribution and densities on the surface of nerve cells are just as important for fulfilling functional requirements. Recent results obtained with high resolution quantitative localization techniques revealed complex, subcellular compartment-specific distribution patterns of distinct ion channels. Here I suggest that within a given neuron type every ion channel has a unique cell surface distribution pattern, with the functional consequence that this dramatically increases the computational power of nerve cells.

  42. Opacity probability distribution functions for electronic systems of CN and C2 molecules including their stellar isotopic forms.

    NASA Technical Reports Server (NTRS)

    Querci, F.; Kunde, V. G.; Querci, M.

    1971-01-01

    The basis and techniques are presented for generating opacity probability distribution functions for the CN molecule (red and violet systems) and the C2 molecule (Swan, Phillips, Ballik-Ramsay systems), two of the more important diatomic molecules in the spectra of carbon stars, with a view to including these distribution functions in equilibrium model atmosphere calculations. Comparisons to the CO molecule are also shown. The computation of the monochromatic absorption coefficient uses the most recent molecular data with revision of the oscillator strengths for some of the band systems. The total molecular stellar mass absorption coefficient is established through fifteen equations of molecular dissociation equilibrium to relate the distribution functions to each other on a per gram of stellar material basis.

  43. A Method for Eliminating Beam Steering Error for the Modulated Absorption-Emission Thermometry Technique

    DTIC Science & Technology

    2015-01-01

    emissivity and the radiative intensity of the gas over a spectral band. The temperature is then calculated from the Planck function. The technique does not...

  44. Three-dimensional autoradiographic localization of quench-corrected glycine receptor specific activity in the mouse brain using 3H-strychnine as the ligand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, W.F.; O'Gorman, S.; Roe, A.W.

    1990-03-01

    The autoradiographic analysis of neurotransmitter receptor distribution is a powerful technique that provides extensive information on the localization of neurotransmitter systems. Computer methodologies are described for the analysis of autoradiographic material which include quench correction, 3-dimensional display, and quantification based on anatomical boundaries determined from the tissue sections. These methodologies are applied to the problem of the distribution of glycine receptors measured by 3H-strychnine binding in the mouse CNS. The most distinctive feature of this distribution is its marked caudorostral gradient. The highest densities of binding sites within this gradient were seen in somatic motor and sensory areas; high densities of binding were seen in branchial efferent and special sensory areas. Moderate levels were seen in nuclei related to visceral function. Densities within the reticular formation paralleled the overall gradient with high to moderate levels of binding. The colliculi had low and the diencephalon had very low levels of binding. No binding was seen in the cerebellum or the telencephalon with the exception of the amygdala, which had very low levels of specific binding. This distribution of glycine receptors correlates well with the known functional distribution of glycine synaptic function. These data are illustrated in 3 dimensions and discussed in terms of the significance of the analysis techniques on this type of data as well as the functional significance of the distribution of glycine receptors.

  45. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
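    A sketch of the Kendall's-τ variant described above: estimate pairwise Kendall's τ, privatize it with Laplace noise (the sensitivity constant used here is an assumption, not the paper's derivation), map τ to a Gaussian-copula correlation via ρ = sin(πτ/2), then sample synthetic records through the copula and the empirical marginals. A full DP pipeline would also privatize the marginals and repair the correlation matrix.

```python
import numpy as np
from scipy import stats

def dp_copula_synth(data, epsilon, n_synth, rng):
    """Synthesize data via a Gaussian copula with noisy Kendall's tau."""
    n, d = data.shape
    rho = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            tau, _ = stats.kendalltau(data[:, i], data[:, j])
            tau += rng.laplace(0.0, 4.0 / (n * epsilon))  # assumed sensitivity
            rho[i, j] = rho[j, i] = np.clip(np.sin(np.pi * tau / 2), -0.99, 0.99)
    # (A full implementation would project rho to the nearest positive
    # semidefinite matrix and use privatized marginals.)
    z = rng.multivariate_normal(np.zeros(d), rho, size=n_synth)
    u = stats.norm.cdf(z)
    return np.column_stack([np.quantile(data[:, k], u[:, k]) for k in range(d)])

rng = np.random.default_rng(2)
raw = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=2000)
print(dp_copula_synth(raw, epsilon=1.0, n_synth=5, rng=rng).round(2))
```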

  46. Vibration monitoring of a helicopter blade model using the optical fiber distributed strain sensing technique.

    PubMed

    Wada, Daichi; Igawa, Hirotaka; Kasai, Tokio

    2016-09-01

    We demonstrate a dynamic distributed monitoring technique using a long-length fiber Bragg grating (FBG) interrogated by optical frequency domain reflectometry (OFDR) that measures strain at a speed of 150 Hz, spatial resolution of 1 mm, and measurement range of 20 m. A 5 m FBG is bonded to a 5.5 m helicopter blade model, and vibration is applied by the step relaxation method. The time domain responses of the strain distributions are measured, and the blade deflections are calculated based on the strain distributions. Frequency response functions are obtained using the time domain responses of the calculated deflection induced by the preload release, and the modal parameters are retrieved. Experimental results demonstrated the dynamic monitoring performances and the applicability to the modal analysis of the OFDR-FBG technique.

  47. A common distributed language approach to software integration

    NASA Technical Reports Server (NTRS)

    Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.

    1989-01-01

    An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.

  48. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high angular-, energy-, and time-resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produces a robust physical representation of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained, from which the moments (up to the heat flux), anisotropies, and asymmetries of the velocity distribution function were calculated. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficient integration, and the singular value decomposition (SVD) on the spherical harmonics method. A comparison among the various methods shows that both the SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
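    A sketch of the projection step: expand an angular distribution in spherical harmonics by direct quadrature and keep the low-order coefficients. Truncating at l = 2 gives 1 + 3 + 5 = 9 coefficients, echoing the nine-coefficient set mentioned above; the grid sizes and test distribution are assumptions.

```python
import numpy as np
from scipy.special import sph_harm

n_th, n_ph = 64, 32
theta = np.linspace(0.0, 2*np.pi, n_th, endpoint=False)   # azimuth
phi = (np.arange(n_ph) + 0.5) * np.pi / n_ph              # colatitude
TH, PH = np.meshgrid(theta, phi)

# Toy angular distribution: a drifting beam gives a dipole-like anisotropy.
f = np.exp(2.0 * np.sin(PH) * np.cos(TH))

dOmega = np.sin(PH) * (2*np.pi/n_th) * (np.pi/n_ph)       # quadrature weights
coeffs = {(l, m): np.sum(f * np.conj(sph_harm(m, l, TH, PH)) * dOmega)
          for l in range(3) for m in range(-l, l + 1)}    # nine coefficients
for (l, m), c in coeffs.items():
    print(f"l={l} m={m:+d}  c={c:.3f}")
```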

  49. Modulation transfer function measurement technique for small-pixel detectors

    NASA Technical Reports Server (NTRS)

    Marchywka, Mike; Socker, Dennis G.

    1992-01-01

    A modulation transfer function (MTF) measurement technique suitable for large-format, small-pixel detector characterization has been investigated. A volume interference grating is used as a test image instead of the bar or sine wave target images normally used. This technique permits a high-contrast, large-area, sinusoidal intensity distribution to illuminate the device being tested, avoiding the need to deconvolve raw data with imaging system characteristics. A high-confidence MTF result at spatial frequencies near 200 cycles/mm is obtained. We present results at several visible light wavelengths with a 6.8-micron-pixel CCD. Pixel response functions are derived from the MTF results.

  50. Techniques for the Cellular and Subcellular Localization of Endocannabinoid Receptors and Enzymes in the Mammalian Brain.

    PubMed

    Cristino, Luigia; Imperatore, Roberta; Di Marzo, Vincenzo

    2017-01-01

    This chapter attempts to piece together knowledge about new advanced microscopy techniques to study the neuroanatomical distribution of endocannabinoid receptors and enzymes at the level of cellular and subcellular structures and organelles in the brain. Techniques ranging from light to electron microscopy up to the new advanced LBM, PALM, and STORM super-resolution microscopy will be discussed in the context of their contribution to define the spatial distribution and organization of receptors and enzymes of the endocannabinoid system (ECS), and to better understand ECS brain functions. © 2017 Elsevier Inc. All rights reserved.

  11. Mesoscale mapping of available solar energy at the earth's surface by use of satellites

    NASA Technical Reports Server (NTRS)

    Hiser, H. W.; Senn, H. V.

    1980-01-01

    A method is presented for the use of cloud images in the visible spectrum from the SMS/GOES geostationary satellites to determine the hourly distribution of sunshine on the mesoscale. Cloud coverage and density as a function of time of day and season are evaluated through the use of digital data processing techniques. Seasonal geographic distributions of cloud cover/sunshine are converted to joules of solar radiation received at the earth's surface through relationships developed from long-term measurements of these two parameters at six widely distributed stations. The technique can be used to generate maps showing the mesoscale geographic distribution of the total solar radiation received at the earth's surface.

  12. The correlated k-distribution technique as applied to the AVHRR channels

    NASA Technical Reports Server (NTRS)

    Kratz, David P.

    1995-01-01

    Correlated k-distributions have been created to account for the molecular absorption found in the spectral ranges of the five Advanced Very High Resolution Radiometer (AVHRR) satellite channels. The production of the k-distributions was based upon an exponential-sum fitting of transmissions (ESFT) technique applied to reference line-by-line absorptance calculations. To account for the overlap of spectral features from different molecular species, the present routines make use of the multiplication property of transmissivities, which allows for considerable flexibility, especially when altering the relative mixing ratios of the various molecular species. To determine the accuracy of the correlated k-distribution technique as compared to the line-by-line procedure, atmospheric flux and heating rate calculations were run for a wide variety of atmospheric conditions. For the atmospheric conditions taken into consideration, the correlated k-distribution technique yielded results within about 0.5% both for the cases where the satellite spectral response functions were applied and where they were not. The correlated k-distribution's principal advantage is that it can be incorporated directly into multiple scattering routines that consider scattering as well as absorption by clouds and aerosol particles.
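
    The ESFT step lends itself to a short illustration. A hedged Python sketch (synthetic transmission data and a fixed grid of absorption coefficients; not the production fitting code): the weights in T(u) ≈ Σ_i w_i exp(-k_i u) are obtained by non-negative least squares.

        import numpy as np
        from scipy.optimize import nnls

        # Band-averaged transmission versus absorber amount u; this curve
        # is a synthetic stand-in for reference line-by-line results.
        u = np.logspace(-3, 2, 60)
        T_ref = 0.6 * np.exp(-0.05 * u) + 0.4 * np.exp(-5.0 * u)

        # Fix the k_i on a log grid and solve for nonnegative weights;
        # the surviving (k_i, w_i) pairs then act as a k-distribution.
        k_grid = np.logspace(-4, 3, 30)
        A = np.exp(-np.outer(u, k_grid))
        w, resid = nnls(A, T_ref)

        print("terms kept:", int((w > 1e-10).sum()), "residual:", resid)
        print("sum of weights (should be near 1):", w.sum())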

  13. Determination of a Limited Scope Network's Lightning Detection Efficiency

    NASA Technical Reports Server (NTRS)

    Rompala, John T.; Blakeslee, R.

    2008-01-01

    This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground-based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD, together with information regarding site signal detection thresholds, the type of solution algorithm used, and range attenuation, to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps; this application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
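
    A hedged Python sketch of the forward model (the site layout, lognormal PCD, 1/r attenuation law, thresholds, and the three-site solution requirement are all invented stand-ins): flashes are sampled from the PCD and counted as detected when enough sites receive a signal above threshold.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical network geometry (km) and per-site trigger threshold.
        sites = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0], [200.0, 200.0]])
        threshold, n_needed = 1.5, 3

        def detection_efficiency(x, y, n_flash=2000):
            """Fraction of simulated flashes at (x, y) yielding a solution."""
            peak = rng.lognormal(2.3, 0.6, n_flash)         # stand-in PCD
            r = np.hypot(sites[:, 0] - x, sites[:, 1] - y)  # ranges, km
            signal = peak[:, None] / np.maximum(r, 1.0)     # assumed 1/r decay
            return np.mean((signal > threshold).sum(axis=1) >= n_needed)

        # Coarse efficiency map; contour it with any plotting tool.
        xs = ys = np.linspace(-100.0, 300.0, 9)
        eff = [[detection_efficiency(x, y) for x in xs] for y in ys]
        print(np.round(np.array(eff), 2))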

  14. Feasibility of automated dropsize distributions from holographic data using digital image processing techniques. [particle diameter measurement technique

    NASA Technical Reports Server (NTRS)

    Feinstein, S. P.; Girard, M. A.

    1979-01-01

    An automated technique for measuring particle diameters and their spatial coordinates from holographic reconstructions is being developed. Preliminary tests on actual cold-flow holograms of impinging jets indicate that a suitable discriminant algorithm consists of a Fourier-Gaussian noise filter and a contour thresholding technique. This process identifies circular as well as noncircular objects. The desired objects (in this case, circular or possibly ellipsoidal) are then selected automatically from the above set and stored with their parametric representations. From these data, dropsize distributions as a function of spatial coordinates can be generated and combustion effects due to hardware and/or physical variables studied.
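
    A rough Python sketch of such a discriminant chain on a synthetic reconstruction plane (a plain Gaussian low-pass filter stands in for the Fourier-Gaussian filter, and the threshold is arbitrary):

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(1)

        # Synthetic "reconstruction plane": bright circular drops on noise.
        img = rng.normal(0.0, 0.2, (256, 256))
        yy, xx = np.mgrid[0:256, 0:256]
        for cx, cy, rad in [(60, 70, 8), (150, 40, 5), (190, 180, 12)]:
            img[(xx - cx) ** 2 + (yy - cy) ** 2 < rad ** 2] += 1.0

        # Noise filter, then contour thresholding and object labeling.
        smooth = ndimage.gaussian_filter(img, sigma=2.0)
        mask = smooth > 0.5
        labels, n_obj = ndimage.label(mask)

        # Equivalent diameters from pixel areas; a circularity check could
        # be added here to reject the noncircular objects mentioned above.
        areas = ndimage.sum(mask, labels, range(1, n_obj + 1))
        diam = 2.0 * np.sqrt(np.asarray(areas) / np.pi)
        print("objects:", n_obj, "diameters (px):", np.round(diam, 1))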

  15. Nuclear risk analysis of the Ulysses mission

    NASA Astrophysics Data System (ADS)

    Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.

    1991-01-01

    The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The analysis described herein takes as its starting point source-term probability distributions supplied by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor is determined, and the functional relationship among all the factors is established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDFs) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
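
    The factor-combination step reduces to elementwise products of Monte Carlo samples; a minimal Python sketch (all distributions and the scenario probability below are invented placeholders, not Ulysses values):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000

        # Hypothetical factor distributions for one accident scenario.
        source_term = rng.lognormal(0.0, 1.0, n)     # released activity
        dispersion = rng.uniform(0.1, 1.0, n)        # atmospheric factor
        dose_factor = rng.lognormal(-2.0, 0.5, n)    # exposure-to-effect

        consequences = source_term * dispersion * dose_factor
        p_scenario = 1e-4                            # scenario probability

        # CCDF: probability that consequences equal or exceed level c.
        for c in [0.01, 0.1, 1.0, 10.0]:
            ccdf = p_scenario * np.mean(consequences >= c)
            print(f"P(X >= {c:6.2f}) = {ccdf:.3e}")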

  16. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF channel undergoes Nakagami-m fading, and the exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF), and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on Meijer's G-function. The evaluation and simulation of the system performance are provided, and the aperture averaging effect is discussed as well.

  17. Controlling Release Kinetics of PLG Microspheres Using a Manufacturing Technique

    NASA Astrophysics Data System (ADS)

    Berchane, Nader

    2005-11-01

    Controlled drug delivery offers numerous advantages compared with conventional free dosage forms, in particular improved efficacy and patient compliance. Emulsification is a widely used technique to entrap drugs in biodegradable microspheres for controlled drug delivery. The size of the formed microspheres has a significant influence on drug release kinetics. Despite the advantages of controlled drug delivery, previous attempts to achieve predetermined release rates have seen limited success. This study develops a tool to tailor desired release kinetics by combining microsphere batches of specified mean diameter and size distribution. A fluid mechanics based correlation that predicts the average size of Poly(Lactide-co-Glycolide) [PLG] microspheres from the manufacturing technique is constructed and validated by comparison with experimental results. The microspheres produced are accurately represented by the Rosin-Rammler mathematical distribution function. A mathematical model is formulated that incorporates the microsphere distribution function to predict the release kinetics from mono-dispersed and poly-dispersed populations. Through this mathematical model, different release kinetics can be achieved by combining different sized populations in different ratios. The resulting design tool should prove useful for the pharmaceutical industry to achieve designer release kinetics.
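
    A small Python sketch of the batch-blending idea (the single-size release kernel below is an invented first-order stand-in for the diffusion-based model in the thesis): two Rosin-Rammler batches are mixed in a fixed mass ratio and their size-integrated release profiles are summed.

        import numpy as np

        def rosin_rammler_cdf(d, d0, n):
            """Rosin-Rammler CDF of diameters: F(d) = 1 - exp(-(d/d0)^n)."""
            return 1.0 - np.exp(-((d / d0) ** n))

        def release_single(t, d, k=0.5):
            """Hypothetical cumulative release for one size; faster for small d."""
            return 1.0 - np.exp(-k * t / d)

        d = np.linspace(1.0, 120.0, 400)                 # diameters (micrometers)
        dd = d[1] - d[0]
        batches = [(30.0, 3.0, 0.4), (80.0, 5.0, 0.6)]   # (d0, n, mass fraction)

        for t in [1.0, 5.0, 10.0, 20.0, 30.0]:           # time (days)
            total = 0.0
            for d0, n, frac in batches:
                pdf = np.gradient(rosin_rammler_cdf(d, d0, n), d)
                total += frac * np.sum(release_single(t, d) * pdf) * dd
            print(f"t = {t:5.1f} d, fraction released = {total:.3f}")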

  18. Modelling and validation of particle size distributions of supported nanoparticles using the pair distribution function technique

    DOE PAGES

    Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; ...

    2017-04-13

    The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. As a result, this work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.

  19. Generalized Green's function molecular dynamics for canonical ensemble simulations

    NASA Astrophysics Data System (ADS)

    Coluci, V. R.; Dantas, S. O.; Tewary, V. K.

    2018-05-01

    The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.

  1. Nanoscale electrical property studies of individual GeSi quantum rings by conductive scanning probe microscopy.

    PubMed

    Lv, Yi; Cui, Jian; Jiang, Zuimin M; Yang, Xinju

    2012-11-29

    The nanoscale electrical properties of individual self-assembled GeSi quantum rings (QRs) were studied by scanning probe microscopy-based techniques. The surface potential distributions of individual GeSi QRs were obtained by scanning Kelvin microscopy (SKM). Ring-shaped work function distributions are observed, showing that the QRs' rim has a larger work function than the QRs' central hole. By combining the SKM results with those obtained by conductive atomic force microscopy and scanning capacitance microscopy, the correlations between the surface potential, conductance, and carrier density distributions are revealed, and a possible interpretation of the QRs' conductance distributions is suggested.

  2. Transient difference solutions of the inhomogeneous wave equation - Simulation of the Green's function

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1983-01-01

    A time-dependent finite difference formulation of the inhomogeneous wave equation is derived for plane wave propagation with harmonic noise sources. The difference equation and boundary conditions are developed along with the techniques to simulate the Dirac delta function associated with a concentrated noise source. Example calculations are presented for the Green's function and distributed noise sources. For the example considered, the desired Fourier-transformed acoustic pressures are determined from the transient pressures by use of a ramping function and an integration technique, both of which eliminate the nonharmonic pressure associated with the initial transient.

  3. Pion distribution amplitude from Euclidean correlation functions

    NASA Astrophysics Data System (ADS)

    Bali, Gunnar S.; Braun, Vladimir M.; Gläßle, Benjamin; Göckeler, Meinulf; Gruber, Michael; Hutzler, Fabian; Korcyl, Piotr; Lang, Bernhard; Schäfer, Andreas; Wein, Philipp; Zhang, Jian-Hui

    2018-03-01

    Following the proposal in (Braun and Müller, Eur Phys J C55:349, 2008), we study the feasibility of calculating the pion distribution amplitude (DA) from suitably chosen Euclidean correlation functions at large momentum. In our lattice study we employ the novel momentum smearing technique (Bali et al., Phys Rev D93:094515, 2016; Bali et al., Phys Lett B774:91, 2017). This approach is complementary to calculations of the lowest moments of the DA using the Wilson operator product expansion and avoids mixing with lower-dimensional local operators on the lattice. The theoretical status of this method is similar to that of quasi-distributions (Ji, Phys Rev Lett 110:262002, 2013), which have recently been used in (Zhang et al., Phys Rev D95:094514, 2017) to estimate the twist-two pion DA. The similarities and differences between these two techniques are highlighted.

  4. Experimental verification of the shape of the excitation depth distribution function for AES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tougaard, S.; Jablonski, A.; Institute of Physical Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, 01-224 Warsaw

    2011-09-15

    In the common formalism of AES, it is assumed that the in-depth distribution of ionizations is uniform. There are experimental indications that this assumption may not be true for certain primary electron energies and solids. The term "excitation depth distribution function" (EXDDF) has been introduced to describe the distribution of ionizations at the energies used in AES. This function is conceptually equivalent to the Phi-rho-z function of electron probe microanalysis (EPMA). There are, however, experimental difficulties in determining this function, in particular for energies below ~10 keV. In the present paper, we investigate the possibility of determining the shape of the EXDDF from the background of inelastically scattered electrons on the low-energy side of the Auger electron features in the electron energy spectra. The experimentally determined EXDDFs are compared with EXDDFs determined from Monte Carlo simulations of electron trajectories in solids. It is found that this technique is useful for the experimental determination of the EXDDF.

  5. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good-quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of the grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
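
    As a toy illustration of this kind of control (not the GRID3D-v2 implementation), a one-dimensional tanh stretching function that clusters grid points near both ends of the interval:

        import numpy as np

        def tanh_stretch(n, beta=2.0):
            """Map n uniform points on [0, 1] so spacing shrinks near the
            ends; beta > 0 controls the clustering strength."""
            s = np.linspace(0.0, 1.0, n)
            return 0.5 * (1.0 + np.tanh(beta * (2.0 * s - 1.0)) / np.tanh(beta))

        print(np.round(tanh_stretch(11), 3))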

  6. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
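
    A compact Python sketch of the approach (synthetic failure data; scipy's Powell method stands in for the Powell routine cited above): the Kolmogorov-Smirnov discrepancy between the EDF and a three-parameter Weibull CDF is minimized directly.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        # Synthetic failures: Weibull with shape 2, scale 10, location 5.
        data = np.sort(5.0 + 10.0 * rng.weibull(2.0, 200))

        def weibull_cdf(x, beta, eta, gamma):
            z = np.clip((x - gamma) / eta, 0.0, None)
            return 1.0 - np.exp(-z ** beta)

        def ks_statistic(params):
            beta, eta, gamma = params
            if beta <= 0 or eta <= 0 or gamma >= data[0]:
                return 1.0                      # invalid parameters
            n, i = len(data), np.arange(1, len(data) + 1)
            F = weibull_cdf(data, beta, eta, gamma)
            # Max discrepancy between the EDF steps and the candidate CDF.
            return max(np.max(i / n - F), np.max(F - (i - 1) / n))

        res = minimize(ks_statistic, x0=[1.0, np.std(data), 0.5 * data[0]],
                       method="Powell")
        print("estimated (shape, scale, location):", np.round(res.x, 3))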

  7. A spherical harmonic approach for the determination of HCP texture from ultrasound: A solution to the inverse problem

    NASA Astrophysics Data System (ADS)

    Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.

    2015-10-01

    A new spherical convolution approach has been presented which couples HCP single crystal wave speed (the kernel function) with polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions have been expressed as spherical harmonic expansions thus enabling application of the de-convolution technique to enable any one of the three to be determined from knowledge of the other two. Hence, the forward problem of determination of polycrystal wave speed from knowledge of single crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It has also been explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.

  8. Analytic derivation of the next-to-leading order proton structure function F_2^p(x, Q^2) based on the Laplace transformation

    NASA Astrophysics Data System (ADS)

    Khanpour, Hamzeh; Mirjalili, Abolfazl; Tehrani, S. Atashbar

    2017-03-01

    An analytical solution based on the Laplace transformation technique for the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations is presented at next-to-leading order accuracy in perturbative QCD. This technique is also applied to extract the analytical solution for the proton structure function, F_2^p(x, Q^2), in Laplace s-space. We present the results for the separate parton distributions of all parton species, including valence quark densities, the antiquark and strange sea parton distribution functions (PDFs), and the gluon distribution. We successfully compare the obtained parton distribution functions and the proton structure function with the results from the GJR08 [Gluck, Jimenez-Delgado, and Reya, Eur. Phys. J. C 53, 355 (2008), 10.1140/epjc/s10052-007-0462-9] and KKT12 [Khanpour, Khorramian, and Tehrani, J. Phys. G 40, 045002 (2013), 10.1088/0954-3899/40/4/045002] parametrization models as well as the x-space results using the QCDnum code. Our calculations show very good agreement with the available theoretical models as well as the deep inelastic scattering (DIS) experimental data throughout the small and large values of x. The use of our analytical solution to extract the parton densities and the proton structure function is discussed in detail to justify the analysis method, considering the accuracy and speed of the calculations. Overall, the accuracy we obtain from the analytical solution using the inverse Laplace transform technique is found to be better than 1 part in 10^4 to 10^5. We also present a detailed QCD analysis of nonsinglet structure functions using all available DIS data to perform global QCD fits. In this regard we employ the Jacobi polynomial approach to convert the results from Laplace s-space to Bjorken x-space. The extracted valence quark densities are also presented and compared to the JR14, MMHT14, NNPDF, and CJ15 PDF sets. We evaluate the numerical effects of target mass corrections (TMCs) and higher twist (HT) terms on various structure functions, and compare fits to data with and without these corrections.

  9. Distribution coefficients of rare earth ions in cubic zirconium dioxide

    NASA Astrophysics Data System (ADS)

    Romer, H.; Luther, K.-D.; Assmus, W.

    1994-08-01

    Cubic zirconium dioxide crystals are grown with the skull melting technique. The effective distribution coefficients for Nd^3+, Sm^3+, and Er^3+ as dopants are determined experimentally as a function of the crystal growth velocity. With the Burton-Prim-Slichter theory, the equilibrium distribution coefficients can be calculated. The distribution coefficients of all other trivalent rare earth ions can be estimated by applying the correlation to the ionic radii.
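
    The Burton-Prim-Slichter relation itself is compact enough to sketch directly; a Python illustration (parameter values invented, not the measured dopant data):

        import numpy as np

        def k_eff_bps(k0, v, delta, D):
            """Effective distribution coefficient from Burton-Prim-Slichter:
            k0 equilibrium coefficient, v growth velocity (m/s), delta
            boundary-layer thickness (m), D melt diffusivity (m^2/s)."""
            return k0 / (k0 + (1.0 - k0) * np.exp(-v * delta / D))

        for v in (1e-7, 1e-6, 1e-5):
            print(f"v = {v:.0e} m/s -> k_eff = {k_eff_bps(0.3, v, 1e-4, 5e-9):.3f}")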

  10. Pitfalls in Persuasion: How Do Users Experience Persuasive Techniques in a Web Service?

    NASA Astrophysics Data System (ADS)

    Segerståhl, Katarina; Kotro, Tanja; Väänänen-Vainio-Mattila, Kaisa

    Persuasive technologies are designed by utilizing a variety of interactive techniques that are believed to promote target behaviors. This paper describes a field study in which the aim was to discover possible pitfalls of persuasion, i.e., situations in which persuasive techniques do not function as expected. The study investigated persuasive functionality of a web service targeting weight loss. A qualitative online questionnaire was distributed through the web service and a total of 291 responses were extracted for interpretative analysis. The Persuasive Systems Design model (PSD) was used for supporting systematic analysis of persuasive functionality. Pitfalls were identified through situations that evoked negative user experiences. The primary pitfalls discovered were associated with manual logging of eating and exercise behaviors, appropriateness of suggestions and source credibility issues related to social facilitation. These pitfalls, when recognized, can be addressed in design by applying functional and facilitative persuasive techniques in meaningful combinations.

  11. Prediction of sound transmission loss through multilayered panels by using Gaussian distribution of directional incident energy

    PubMed

    Kang; Ih; Kim; Kim

    2000-03-01

    In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approaches often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall in a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. Simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. The comparison between measurement and prediction shows good agreement, which validates the proposed Gaussian function approach.

  12. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular for system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on PM techniques by introducing a set of guidelines by which to evaluate system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
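
    A minimal sketch of the kind of conjugate Bayesian update involved (gamma prior on a Poisson failure rate; the numbers are invented, and DATMAN itself offers several prior and posterior families):

        from scipy import stats

        # Prior belief: roughly 2 failures per 1000 h of operation.
        alpha0, beta0 = 2.0, 1000.0
        failures, hours = 3, 5000.0          # newly acquired plant data

        # Gamma-Poisson conjugacy: posterior is Gamma(alpha0 + k, beta0 + T).
        posterior = stats.gamma(a=alpha0 + failures, scale=1.0 / (beta0 + hours))

        print(f"posterior mean rate : {posterior.mean():.2e} per hour")
        print(f"90% credible interval: {posterior.ppf(0.05):.2e}"
              f" to {posterior.ppf(0.95):.2e} per hour")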

  13. Using hazard functions to assess changes in processing capacity in an attentional cuing paradigm.

    PubMed

    Wenger, Michael J; Gibson, Bradley S

    2004-08-01

    Processing capacity--defined as the relative ability to perform mental work in a unit of time--is a critical construct in cognitive psychology and is central to theories of visual attention. The unambiguous use of the construct, experimentally and theoretically, has been hindered by both conceptual confusions and the use of measures that are at best only coarsely mapped to the construct. However, more than 25 years ago, J. T. Townsend and F. G. Ashby (1978) suggested that the hazard function on the response time (RT) distribution offered a number of conceptual advantages as a measure of capacity. The present study suggests that a set of statistical techniques, well-known outside the cognitive and perceptual literatures, offers the ability to perform hypothesis tests on RT-distribution hazard functions. These techniques are introduced, and their use is illustrated in application to data from the contingent attentional capture paradigm.
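
    A short Python sketch of one simple RT hazard estimator (synthetic gamma-distributed response times; a KDE density over an empirical survivor function, which is only one of several possible choices, not the specific procedure of the paper):

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(4)
        rt_a = rng.gamma(4.0, 0.08, 400)     # e.g., cued condition (s)
        rt_b = rng.gamma(4.0, 0.10, 400)     # e.g., uncued condition (s)

        def hazard(sample, t):
            """h(t) = f(t) / (1 - F(t)) estimated from one RT sample."""
            f = gaussian_kde(sample)(t)
            survivor = 1.0 - np.searchsorted(np.sort(sample), t) / len(sample)
            return f / np.clip(survivor, 1e-3, None)   # guard the right tail

        t = np.linspace(0.1, 0.8, 8)
        print("h_a(t):", np.round(hazard(rt_a, t), 2))
        print("h_b(t):", np.round(hazard(rt_b, t), 2))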

  14. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    NASA Astrophysics Data System (ADS)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with the practically important example of a bunch formed in a single bunch compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  15. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves, in conjunction with a criterion value range of interest, as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model, on the other hand, is an analytical parametric model for the multivariate joint probability. It comprises the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals, or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations, as well as a product selection example determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
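
    The POS computation itself reduces to a joint probability over the criterion space. A minimal Monte Carlo sketch (two invented criteria with an assumed correlated normal joint distribution standing in for the Joint Probability Model):

        import numpy as np

        rng = np.random.default_rng(5)

        # Joint samples of two criteria, e.g. range (nmi) and cost ($M).
        mean = np.array([5500.0, 120.0])
        cov = np.array([[250.0 ** 2, -0.6 * 250.0 * 8.0],
                        [-0.6 * 250.0 * 8.0, 8.0 ** 2]])
        samples = rng.multivariate_normal(mean, cov, size=200_000)

        # POS: joint probability that every criterion lies in its target range.
        lo = np.array([5400.0, 0.0])          # range >= 5400 nmi
        hi = np.array([np.inf, 125.0])        # cost <= 125 $M
        pos = np.mean(np.all((samples >= lo) & (samples <= hi), axis=1))
        print(f"POS = {pos:.3f}")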

  16. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
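
    Method 1, the classical Monte Carlo p-value, is simple to sketch in Python (the toy location statistic below is a stand-in for the density-based empirical likelihood-ratio statistic):

        import numpy as np

        rng = np.random.default_rng(6)

        def sim_stat(g, n=50):
            """Test statistic |mean|/std for one sample drawn under the null."""
            y = g.normal(0.0, 1.0, n)
            return abs(y.mean()) / y.std()

        x = rng.normal(0.3, 1.0, 50)          # observed (mildly shifted) data
        t_obs = abs(x.mean()) / x.std()

        # Monte Carlo p-value: rank of t_obs among null-simulated statistics.
        n_mc = 9999
        t_null = np.array([sim_stat(rng) for _ in range(n_mc)])
        p = (1 + np.sum(t_null >= t_obs)) / (n_mc + 1)
        print(f"Monte Carlo p-value ~ {p:.4f}")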

  17. The Application of Hardware in the Loop Testing for Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Thomas, George L.; Culley, Dennis E.; Brand, Alex

    2016-01-01

    The essence of a distributed control system is the modular partitioning of control function across a hardware implementation. This type of control architecture requires embedding electronics in a multitude of control element nodes for the execution of those functions, and their integration as a unified system. As the field of distributed aeropropulsion control moves toward reality, questions about building and validating these systems remain. This paper focuses on the development of hardware-in-the-loop (HIL) test techniques for distributed aero engine control, and the application of HIL testing as it pertains to potential advanced engine control applications that may now be possible due to the intelligent capability embedded in the nodes.

  18. Transmission of ~10 keV electron beams through thin ceramic foils: Measurements and Monte Carlo simulations of electron energy distribution functions

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Heindl, T.; Skrobol, C.; Wieser, J.; Krücken, R.; Ulrich, A.

    2008-07-01

    Electron beams with particle energy of ~10 keV were sent through 300 nm thick ceramic (Si3N4 + SiO2) foils and the resulting electron energy distribution functions were recorded using a retarding grid technique. The results are compared with Monte Carlo simulations performed with two publicly available packages, Geant4 and Casino v2.42. It is demonstrated that Geant4, unlike Casino, provides electron energy distribution functions very similar to the experimental distributions. Both simulation packages provide a quite precise average energy of transmitted electrons: we demonstrate that the maximum uncertainty of the calculated values of the average energy is 6% for Geant4 and 8% for Casino, taking into account all systematic uncertainties and the discrepancies in the experimental and simulated data.
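
    The retarding-grid analysis reduces to differentiating the collected current with respect to the retarding voltage; a Python sketch on a synthetic I-V curve (the tanh cutoff shape and beam parameters are invented):

        import numpy as np

        V = np.linspace(0.0, 12000.0, 241)        # retarding voltage (V)
        E0, dE = 8000.0, 900.0                    # assumed transmitted beam (eV)
        I = 0.5 * (1.0 - np.tanh((V - E0) / dE))  # smooth cutoff curve

        # Energy distribution ~ -dI/dV, normalized to unit area.
        f_E = -np.gradient(I, V)
        f_E /= np.sum(f_E) * (V[1] - V[0])

        E_mean = np.sum(V * f_E) * (V[1] - V[0])
        print(f"average transmitted energy ~ {E_mean:.0f} eV")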

  19. Shining light on neurons--elucidation of neuronal functions by photostimulation.

    PubMed

    Eder, Matthias; Zieglgänsberger, Walter; Dodt, Hans-Ulrich

    2004-01-01

    Many neuronal functions can be elucidated by techniques that allow for precise stimulation of defined regions of a neuron and its afferents. Photolytic release of neurotransmitters from 'caged' derivatives in the vicinity of visualized neurons in living brain slices meets this requirement. This technique allows the study of the subcellular distribution and properties of functional native neurotransmitter receptors. These are prerequisites for a detailed analysis of the expression and spatial specificity of synaptic plasticity. Photostimulation can further be used to rapidly map the synaptic connectivity between nearby and, more importantly, distant cells in a neuronal network. Here we give a personal review of some of the technical aspects of photostimulation and of recent findings which illustrate the advantages of this technique.

  1. Identifying and Characterizing Kinetic Instabilities using Solar Wind Observations of Non-Maxwellian Plasmas

    NASA Astrophysics Data System (ADS)

    Klein, K. G.

    2016-12-01

    Weakly collisional plasmas, of the type typically observed in the solar wind, are commonly in a state other than local thermodynamic equilibrium. This deviation from a Maxwellian velocity distribution can be characterized by pressure anisotropies, disjoint beams streaming at differing speeds, leptokurtic distributions at large energies, and other non-thermal features. As these features may be artifacts of dynamic processes, including the acceleration and expansion of the solar wind, and as the free energy contained in these features can drive kinetic micro-instabilities, accurate measurement and modeling of these features is essential for characterizing the solar wind. After a review of these features, a technique is presented for the efficient calculation of kinetic instabilities associated with a general, non-Maxwellian plasma. As a proof of principle, this technique is applied to bi-Maxwellian systems for which kinetic instability thresholds are known, focusing on parameter scans including beams and drifting heavy minor ions. The application of this technique to fits of velocity distribution functions from current, forthcoming, and proposed missions including WIND, DSCOVR, Solar Probe Plus, and THOR, as well as to the underlying measured distribution functions, is discussed. Particular attention is paid to the effects of instrument pointing and integration time, as well as to potential deviations between instabilities associated with the Maxwellian fits and those associated with the observed, potentially non-Maxwellian, velocity distribution. Such application may further illuminate the role instabilities play in the evolution of the solar wind.

  2. Poly (lactic-co-glycolic acid) particles prepared by microfluidics and conventional methods. Modulated particle size and rheology.

    PubMed

    Perez, Aurora; Hernández, Rebeca; Velasco, Diego; Voicu, Dan; Mijangos, Carmen

    2015-03-01

    Microfluidic techniques are expected to provide a narrower particle size distribution than conventional methods for the preparation of poly (lactic-co-glycolic acid) (PLGA) microparticles. It is further hypothesized that the particle size distribution of PLGA microparticles influences the settling behavior and rheological properties of their aqueous dispersions. For the preparation of PLGA particles, two different methods were employed: microfluidics (MF) and conventional oil-in-water emulsification. The particle size and particle size distribution of PLGA particles prepared by microfluidics were studied as a function of the flow rate of the organic phase, while particles prepared by conventional methods were studied as a function of stirring rate. In order to study the stability and structural organization of the colloidal dispersions, settling experiments and oscillatory rheological measurements were carried out on aqueous dispersions of PLGA particles with different particle size distributions. The microfluidic technique allowed control of the size and size distribution of the droplets formed in the process of emulsification. This resulted in a narrower particle size distribution for samples prepared by MF with respect to samples prepared by conventional methods. Polydisperse samples showed a larger tendency to aggregate, thus confirming the advantages of microfluidics over conventional methods, especially if biomedical applications are envisaged. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Distribution functions of air-scattered gamma rays above isotropic plane sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael, J A; Lamonds, H A

    1967-06-01

    Using the moments method of Spencer and Fano and a reconstruction technique suggested by Berger, the authors have calculated energy and angular distribution functions for air-scattered gamma rays emitted from infinite-plane isotropic monoenergetic sources as functions of source energy, radiation incidence angle at the detector, and detector altitude. Incremental and total buildup factors have been calculated for both number and exposure. The results are presented in tabular form for a detector located at altitudes of 3, 50, 100, 200, 300, 400, 500, and 1000 feet above source planes of 15 discrete energies spanning the range of 0.1 to 3.0 MeV. Calculational techniques, including results of sensitivity studies, are discussed and plots of typical results are presented.

  4. Product of Ginibre matrices: Fuss-Catalan and Raney distributions

    NASA Astrophysics Data System (ADS)

    Penson, Karol A.; Życzkowski, Karol

    2011-06-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions P_s(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions P_s(x) in terms of a combination of s hypergeometric functions of the type _sF_{s-1}. The explicit formula derived here is exact for an arbitrary positive integer s, and for s = 1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G-function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.
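
    The moment identity is easy to probe numerically. A Python sketch (finite matrix size, so agreement with the asymptotic Fuss-Catalan moments is only approximate):

        import numpy as np
        from math import comb

        rng = np.random.default_rng(7)

        def fuss_catalan(n, s):
            """Fuss-Catalan number binom((s+1)n, n) / (sn + 1)."""
            return comb((s + 1) * n, n) / (s * n + 1)

        # Product of s normalized square Ginibre matrices.
        N, s = 400, 2
        G = np.eye(N)
        for _ in range(s):
            G = G @ (rng.normal(size=(N, N)) / np.sqrt(N))

        x = np.linalg.svd(G, compute_uv=False) ** 2   # squared singular values
        for n in range(1, 5):
            print(f"moment {n}: empirical {np.mean(x ** n):8.3f}"
                  f"  Fuss-Catalan {fuss_catalan(n, s):8.3f}")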

  5. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Although the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
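
    A scalar (non-functional) ZIP likelihood already shows the mixture structure; a minimal Python sketch with invented parameters, fitted by maximum likelihood:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        rng = np.random.default_rng(8)
        # Structural zeros with probability 0.35, else Poisson(2.0).
        y = np.where(rng.random(500) < 0.35, 0, rng.poisson(2.0, 500))

        def neg_loglik(params):
            pi = 1.0 / (1.0 + np.exp(-params[0]))    # zero-inflation weight
            lam = np.exp(params[1])                  # Poisson mean
            ll0 = np.log(pi + (1.0 - pi) * np.exp(-lam))           # P(Y = 0)
            llp = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
            return -np.sum(np.where(y == 0, ll0, llp))

        res = minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")
        pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
        print(f"pi ~ {pi_hat:.3f}, lambda ~ {np.exp(res.x[1]):.3f}")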

  6. Effect of dust size distribution on ion-acoustic solitons in dusty plasmas with different dust grains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Dong-Ning; Yang, Yang; Yan, Qiang

    Theoretical studies are carried out for ion-acoustic solitons in a multicomponent nonuniform plasma, taking the dust size distribution into account. The Korteweg-de Vries equation for ion-acoustic solitons is derived using the reductive perturbation technique. Two special dust size distributions are considered, and the dependence of the width and amplitude of the solitons on the dust size parameters is shown. It is found that the properties of a solitary wave depend on the shape of the size distribution function of the dust grains.
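
    For orientation, the single-soliton solution of a generic KdV equation u_t + A u u_x + B u_xxx = 0 can be evaluated directly; the coefficients below are placeholders for the distribution-dependent coefficients derived in the paper:

        import numpy as np

        A, B, v = 1.0, 0.5, 0.2                  # placeholder coefficients
        u_m = 3.0 * v / A                        # soliton amplitude
        W = np.sqrt(4.0 * B / v)                 # soliton width

        x = np.linspace(-40.0, 40.0, 9)
        u = u_m / np.cosh((x - v * 1.0) / W) ** 2   # profile at t = 1
        print("amplitude:", u_m, "width:", round(W, 3))
        print(np.round(u, 4))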

  7. Studies of transverse momentum dependent parton distributions and Bessel weighting

    DOE PAGES

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; ...

    2015-03-01

    In this paper we present a new technique for the analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four-momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by energy and momentum conservation at the given energy/Q^2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  8. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution, or χ^2-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo, and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  9. Diagnosis of Misalignment in Overhung Rotor using the K-S Statistic and A2 Test

    NASA Astrophysics Data System (ADS)

    Garikapati, Diwakar; Pacharu, RaviKumar; Munukurthi, Rama Satya Satyanarayana

    2018-02-01

    Vibration measurement at the bearings of rotating machinery has become a useful technique for diagnosing incipient fault conditions. In particular, vibration measurement can be used to detect unbalance in the rotor, bearing failure, gear problems, or misalignment between a motor shaft and a coupled shaft. This is a particular problem encountered in turbines, ID fans, and FD fans used for power generation. For successful fault diagnosis, it is important to adopt motor current signature analysis (MCSA) techniques capable of identifying the faults. It is also useful to develop techniques for inferring information such as the severity of the fault. It is proposed that modeling the cumulative distribution function of motor current signals with respect to appropriate theoretical distributions, and quantifying the goodness of fit with the Kolmogorov-Smirnov (K-S) statistic and the A2 test, offers a suitable signal feature for diagnosis. This paper demonstrates the successful comparison of the K-S feature and the A2 test for discriminating the misalignment fault from normal function.

  10. Spatially distributed modal signals of free shallow membrane shell structronic system

    NASA Astrophysics Data System (ADS)

    Yue, H. H.; Deng, Z. Q.; Tzou, H. S.

    2008-11-01

    Based on smart material and structronics technology, distributed sensing and control of shell structures have developed rapidly over the last 20 years. This emerging technology has been utilized in aerospace, telecommunication, micro-electromechanical systems, and other engineering applications. However, the distributed monitoring technique, and the resulting globally distributed sensing signals of shallow paraboloidal membrane shells, are not clearly understood. In this paper, the modeling of a free flexible paraboloidal shell with spatially distributed sensors, the characteristics of the micro-sensing signals, and the placement of distributed piezoelectric sensor patches are investigated based on a new set of assumed mode shape functions. Parametric analysis indicates that the signal generation depends on the modal membrane strains in the meridional and circumferential directions, of which the latter is more significant than the former, since all bending strains vanish in membrane shells. This study provides a modeling and analysis technique for distributed sensors laminated on lightweight paraboloidal flexible structures and identifies critical components and regions that generate significant signals.

  11. A novel method for the investigation of liquid/liquid distribution coefficients and interface permeabilities applied to the water-octanol-drug system.

    PubMed

    Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette

    2011-09-01

    In this work a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities, based on nuclear magnetic resonance (NMR), is described. This method is a novel implementation of localized NMR spectroscopy and enables the simultaneous analysis of the drug content in the octanol and in the water phase without separation. For validation of the method, the distribution coefficients at pH 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work. For all substances, the distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also present new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.

  14. Probability distribution of the entanglement across a cut at an infinite-randomness fixed point

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Majumdar, Satya N.; Huse, David A.

    2017-03-01

    We calculate the probability distribution of the entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point, using Fisher's strong-randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^(-ψ(k)), where k ≡ S/ln(L/L0), the large deviation function ψ(k) is found explicitly, and L0 is a nonuniversal microscopic length. We discuss the implications of such a distribution for numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as with the actual entanglement entropy distribution of the random transverse-field Ising model, which we calculate for large L via a mapping to Majorana fermions.

  15. BATSE analysis techniques for probing the GRB spatial and luminosity distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon; Meegan, Charles A.

    1992-01-01

    The Burst And Transient Source Experiment (BATSE) has measured homogeneity and isotropy parameters from an increasingly large sample of observed gamma-ray bursts (GRBs), while also maintaining a summary of the way in which the sky has been sampled. Measurement of both is necessary for any statistical study of the BATSE data, as they take into account the most serious observational selection effects known in the study of GRBs: beam-smearing and inhomogeneous, anisotropic sky sampling. Knowledge of these effects is important to the analysis of GRB angular and intensity distributions. In addition to determining whether the bursts are local, it is hoped that analysis of such distributions will allow boundaries to be placed on the true GRB spatial distribution and luminosity function. The technique for studying GRB spatial and luminosity distributions is direct: results of BATSE analyses are compared to Monte Carlo models parameterized by a variety of spatial and luminosity characteristics.

  16. Generalized quantum Fokker-Planck, diffusion, and Smoluchowski equations with true probability distribution functions.

    PubMed

    Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar

    2002-05-01

    Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).

  17. Probe measurements of the electron velocity distribution function in beams: Low-voltage beam discharge in helium

    NASA Astrophysics Data System (ADS)

    Sukhomlinov, V.; Mustafaev, A.; Timofeev, N.

    2018-04-01

    Previously developed methods based on the single-sided probe technique are altered and applied to measure the anisotropic angular spread and narrow energy distribution functions of charged particle (electron and ion) beams. The conventional method is not suitable for some configurations, such as low-voltage beam discharges, electron beams accelerated in near-wall and near-electrode layers, and vacuum electron beam sources. To determine the range of applicability of the proposed method, simple algebraic relationships between the charged particle energies and their angular distribution are obtained. The method is verified for the case of the collisionless mode of a low-voltage He beam discharge, where the traditional method for finding the electron distribution function with the help of a Legendre polynomial expansion is not applicable. This leads to the development of a physical model of the formation of the electron distribution function in a collisionless low-voltage He beam discharge. The results of a numerical calculation based on Monte Carlo simulations are in good agreement with the experimental data obtained using the new method.

  18. Novel approach for tomographic reconstruction of gas concentration distributions in air: Use of smooth basis functions and simulated annealing

    NASA Astrophysics Data System (ADS)

    Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.

    Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
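
    A minimal sketch of the SBFM idea, assuming a hypothetical square chamber with a fan-beam ray layout: a single bivariate Gaussian basis function is recovered from synthetic ray-integral data, with scipy's dual_annealing standing in for the paper's simulated annealing routine; the geometry and plume parameters are invented.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def gaussian2d(x, y, p):
        """Single bivariate Gaussian plume; p = (amp, x0, y0, sx, sy)."""
        amp, x0, y0, sx, sy = p
        return amp * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

    def ray_integral(p, start, end, n=200):
        """Path-integrated concentration along one optical ray."""
        ts = np.linspace(0.0, 1.0, n)
        xs = start[0] + ts * (end[0] - start[0])
        ys = start[1] + ts * (end[1] - start[1])
        length = np.hypot(end[0] - start[0], end[1] - start[1])
        return gaussian2d(xs, ys, p).mean() * length

    # Fan-beam geometry: source at the origin, reflectors along two walls.
    rays = [((0, 0), (10, y)) for y in range(0, 11, 2)] + \
           [((0, 0), (x, 10)) for x in range(0, 11, 2)]

    p_true = (1.0, 6.0, 4.0, 1.5, 2.0)
    data = np.array([ray_integral(p_true, a, b) for a, b in rays])

    def misfit(p):
        model = np.array([ray_integral(p, a, b) for a, b in rays])
        return np.sum((model - data) ** 2)

    bounds = [(0.1, 5), (0, 10), (0, 10), (0.5, 5), (0.5, 5)]
    result = dual_annealing(misfit, bounds, seed=2, maxiter=200)
    print("recovered parameters:", np.round(result.x, 2))
    ```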

  19. Weighted Parzen Windows for Pattern Classification

    DTIC Science & Technology

    1994-05-01

    Nearest-Neighbor Rule. The k-Nearest-Neighbor (kNN) technique is nonparametric, assuming nothing about the distribution of the data. Stated succinctly... probabilities P(w_j | x) from samples." Raudys and Jain [20:255] advance this interpretation by pointing out that the kNN technique can be viewed as a "Parzen window classifier with a hyper-rectangular window function." As with the Parzen-window technique, the kNN classifier is more accurate as the

  20. Integral-moment analysis of the BATSE gamma-ray burst intensity distribution

    NASA Technical Reports Server (NTRS)

    Horack, John M.; Emslie, A. Gordon

    1994-01-01

    We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as the associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V_max. If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
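
    A toy version of the moment computation on a surrogate flux sample, with bootstrap resampling standing in for the paper's analytic uncertainty propagation; the power-law-like sample is invented for the example.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Surrogate burst peak fluxes from a hypothetical power-law-like model.
    flux = rng.pareto(a=1.5, size=260) + 1.0
    logf = np.log10(flux)

    mean = logf.mean()
    var = logf.var(ddof=1)
    skew = stats.skew(logf, bias=False)

    # Bootstrap the uncertainties, since the moment analysis reports
    # the moments together with their associated errors.
    boot = rng.choice(logf, size=(2000, logf.size), replace=True)
    print(f"mean     = {mean:.3f} +/- {boot.mean(axis=1).std():.3f}")
    print(f"variance = {var:.3f} +/- {boot.var(axis=1, ddof=1).std():.3f}")
    print(f"skewness = {skew:.3f} +/- {stats.skew(boot, axis=1, bias=False).std():.3f}")
    ```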

  1. Rainbow Fourier Transform

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail D.; Cairns, Brian; Mishchenko, Michael I.

    2012-01-01

    We present a novel technique for remote sensing of cloud droplet size distributions. Polarized reflectances in the scattering angle range between 135° and 165° exhibit a sharply defined rainbow structure, the shape of which is determined mostly by the single-scattering properties of cloud particles and, therefore, can be modeled using Mie theory. Fitting the observed rainbow with such a model (computed for a parameterized family of particle size distributions) has been used for cloud droplet size retrievals. We discovered that the relationship between the rainbow structures and the corresponding particle size distributions is deeper than had been commonly understood. In fact, the Mie theory-derived polarized reflectance as a function of reduced scattering angle (in the rainbow angular range) and the (monodisperse) particle radius appears to be a proxy to a kernel of an integral transform (similar to the sine Fourier transform on the positive semi-axis). This approach, called the rainbow Fourier transform (RFT), allows us to accurately retrieve the shape of the droplet size distribution by applying the corresponding inverse transform to the observed polarized rainbow. Because the basis functions of the proxy-transform are not exactly orthogonal in the finite angular range, this procedure needs to be complemented by a simple regression technique, which removes the retrieval artifacts. This non-parametric approach does not require any a priori knowledge of the droplet size distribution's functional shape and is computationally fast (no look-up tables, no fitting; the computations are the same as for the forward modeling).

  2. Solutions to an advanced functional partial differential equation of the pantograph type

    PubMed Central

    Zaidi, Ali A.; Van Brunt, B.; Wake, G. C.

    2015-01-01

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained. PMID:26345391

  3. Solutions to an advanced functional partial differential equation of the pantograph type.

    PubMed

    Zaidi, Ali A; Van Brunt, B; Wake, G C

    2015-07-08

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained.

  4. Continuous distributions of specific ventilation recovered from inert gas washout

    NASA Technical Reports Server (NTRS)

    Lewis, S. M.; Evans, J. W.; Jalowayski, A. A.

    1978-01-01

    A new technique is described for recovering continuous distributions of ventilation as a function of tidal ventilation/volume ratio from the nitrogen washout. The analysis yields a continuous distribution of ventilation as a function of tidal ventilation/volume ratio represented as fractional ventilations of 50 compartments plus dead space. The procedure was verified by recovering known distributions from data to which noise had been added. Using an apparatus to control the subject's tidal volume and FRC, mixed expired N2 data gave the following results: (a) the distributions of young, normal subjects were narrow and unimodal; (b) those of subjects over age 40 were broader with more poorly ventilated units; (c) patients with pulmonary disease of all descriptions showed enlarged dead space; (d) patients with cystic fibrosis showed multimodal distributions with the bulk of the ventilation going to overventilated units; and (e) patients with obstructive lung disease fell into several classes, three of which are illustrated.
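
    The recovery step can be sketched as a non-negative least-squares inversion over 50 compartments, under the simplifying assumption that each compartment's N2 concentration decays geometrically with breath number; the ratio grid, noise level and true distribution below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # 50 compartments spanning tidal ventilation/volume ratios r on a log grid.
    r = np.logspace(-2, 1, 50)
    breaths = np.arange(1, 41)

    # Simplified washout kernel: compartment N2 decays as (1/(1+r))**n with
    # breath n; the mixed expired signal is the weighted sum over compartments.
    A = (1.0 / (1.0 + r[None, :])) ** breaths[:, None]

    f_true = np.exp(-0.5 * ((np.log(r) - np.log(0.3)) / 0.4) ** 2)
    f_true /= f_true.sum()
    signal = A @ f_true + np.random.default_rng(4).normal(0, 1e-3, breaths.size)

    f_rec, residual = nnls(A, signal)   # non-negative compartment ventilations
    print("recovered mode near r =", r[np.argmax(f_rec)])
    ```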

  5. Exploration and Modulation of Brain Network Interactions with Noninvasive Brain Stimulation in Combination with Neuroimaging

    PubMed Central

    Shafi, Mouhsin M.; Westover, M. Brandon; Fox, Michael D.; Pascual-Leone, Alvaro

    2012-01-01

    Much recent work in systems neuroscience has focused on how dynamic interactions between different cortical regions underlie complex brain functions such as motor coordination, language, and emotional regulation. Various studies using neuroimaging and neurophysiologic techniques have suggested that in many neuropsychiatric disorders, these dynamic brain networks are dysregulated. Here we review the utility of combined noninvasive brain stimulation and neuroimaging approaches towards greater understanding of dynamic brain networks in health and disease. Brain stimulation techniques, such as transcranial magnetic stimulation and transcranial direct current stimulation, use electromagnetic principles to noninvasively alter brain activity, and induce focal but also network effects beyond the stimulation site. When combined with brain imaging techniques such as functional MRI, PET and EEG, these brain stimulation techniques enable a causal assessment of the interaction between different network components, and their respective functional roles. The same techniques can also be applied to explore hypotheses regarding the changes in functional connectivity that occur during task performance and in various disease states such as stroke, depression and schizophrenia. Finally, in diseases characterized by pathologic alterations in either the excitability within a single region or in the activity of distributed networks, such techniques provide a potential mechanism to alter cortical network function and architectures in a beneficial manner. PMID:22429242

  6. Dynamic Bidirectional Reflectance Distribution Functions: Measurement and Representation

    DTIC Science & Technology

    2008-02-01

    ...be included in the harmonic fits. Other sets of orthogonal functions, such as Zernike polynomials, have also been used to characterize BRDF and could... "reflectance spectra of 3D objects," Proc. SPIE 4663, 370-378 (2001). [13] J. R. Shell II, C. Salvagio, and J. R. Schott, "A novel BRDF measurement technique

  7. Combining density functional theory (DFT) and pair distribution function (PDF) analysis to solve the structure of metastable materials: the case of metakaolin.

    PubMed

    White, Claire E; Provis, John L; Proffen, Thomas; Riley, Daniel P; van Deventer, Jannie S J

    2010-04-07

    Understanding the atomic structure of complex metastable (including glassy) materials is of great importance in research and industry; however, such materials resist solution by most standard techniques. Here, a novel technique combining thermodynamics and local structure is presented to solve the structure of the metastable aluminosilicate material metakaolin (calcined kaolinite) without the use of chemical constraints. The structure is elucidated by iterating between least-squares real-space refinement using neutron pair distribution function data, and geometry optimisation using density functional modelling. The resulting structural representation is both energetically feasible and in excellent agreement with experimental data. This accurate structural representation of metakaolin provides new insight into the local environment of the aluminium atoms, with evidence of the existence of tri-coordinated aluminium. With the availability of this detailed, chemically feasible atomic description, obtained without the need to artificially impose constraints during the refinement process, there exists the opportunity to tailor chemical and mechanical processes involving metakaolin and other complex metastable materials at the atomic level to obtain optimal performance at the macro-scale.

  8. Methods and apparatus for analysis of chromatographic migration patterns

    DOEpatents

    Stockham, Thomas G.; Ives, Jeffrey T.

    1993-01-01

    A method and apparatus are presented for sharpening signal peaks in a signal representing the distribution of biological or chemical components of a mixture separated by a chromatographic technique such as, but not limited to, electrophoresis. A key step in the method is the use of a blind deconvolution technique, presently embodied as homomorphic filtering, to reduce the contribution of a blurring function to the signal encoding the peaks of the distribution. The invention further includes steps and apparatus directed to the determination of a nucleotide sequence from a set of four such signals representing DNA sequence data derived by electrophoretic means.
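
    A rough sketch of the homomorphic-filtering step on a synthetic electropherogram: the blur contributes a slowly varying component to the log-magnitude spectrum, which is estimated with a wide smoother and removed. This is a simplified stand-in for the patented blind deconvolution, with every signal parameter invented.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(5)

    # Synthetic electropherogram: sharp peaks convolved with an unknown blur.
    x = np.zeros(1024)
    x[[200, 260, 500, 700, 710]] = [1.0, 0.7, 1.2, 0.9, 0.8]
    blur = np.exp(-0.5 * (np.arange(-40, 41) / 8.0) ** 2)
    y = np.convolve(x, blur / blur.sum(), mode="same") \
        + 1e-4 * rng.normal(size=1024)

    # Homomorphic step: in log-magnitude frequency space the blur appears as
    # a smooth additive component; estimate it broadly and subtract it.
    Y = np.fft.rfft(y)
    logmag = np.log(np.abs(Y) + 1e-12)
    blur_logmag = gaussian_filter1d(logmag, sigma=20)      # slow component
    sharp = np.exp(logmag - blur_logmag + blur_logmag[0])  # keep overall scale
    y_sharp = np.fft.irfft(sharp * np.exp(1j * np.angle(Y)), n=y.size)
    print("peak amplitude before:", round(float(y.max()), 3),
          " after:", round(float(y_sharp.max()), 3))
    ```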

  9. Enhanced interfaces for web-based enterprise-wide image distribution.

    PubMed

    Jost, R Gilbert; Blaine, G James; Fritz, Kevin; Blume, Hartwig; Sadhra, Sarbjit

    2002-01-01

    Modern Web browsers support image distribution with two shortcomings: (1) image grayscale presentation at client workstations is often sub-optimal and generally inconsistent with the presentation state on diagnostic workstations and (2) an Electronic Patient Record (EPR) application usually cannot directly access images with an integrated viewer. We have modified our EPR and our Web-based image-distribution system to allow access to images from within the EPR. In addition, at the client workstation, a grayscale transformation is performed that consists of two components: a client-display-specific component based on the characteristic display function of the class of display system, and a modality-specific transformation that is downloaded with every image. The described techniques have been implemented in our institution and currently support enterprise-wide clinical image distribution. The effectiveness of the techniques is reviewed.

  10. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
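
    A toy version of the design loop: a smoothing matrix stands in for the finite-element furnace model, and scipy's conjugate gradient and quasi-Newton (BFGS) optimizers minimize the mismatch to a prescribed crystal temperature profile using an analytic sensitivity; all numbers are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy 1-D furnace model: the crystal temperature is a smoothed (blurred)
    # version of the furnace wall temperature; design the wall profile so the
    # crystal matches a prescribed linear gradient.
    n = 40
    z = np.linspace(0.0, 1.0, n)
    K = np.exp(-0.5 * ((z[:, None] - z[None, :]) / 0.08) ** 2)
    K /= K.sum(axis=1, keepdims=True)      # linear heat-transfer operator
    target = 900.0 + 150.0 * z             # desired crystal profile (K)

    def cost(wall):
        resid = K @ wall - target
        return 0.5 * np.dot(resid, resid)

    def grad(wall):                        # analytic sensitivity: K^T residual
        return K.T @ (K @ wall - target)

    x0 = np.full(n, 950.0)
    for method in ("CG", "BFGS"):
        res = minimize(cost, x0, jac=grad, method=method)
        print(f"{method}: cost {res.fun:.3e} in {res.nit} iterations")
    ```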

  11. The Use of Efficient Broadcast Protocols in Asynchronous Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Schmuck, Frank Bernhard

    1988-01-01

    Reliable broadcast protocols are important tools in distributed and fault-tolerant programming. They are useful for sharing information and for maintaining replicated data in a distributed system. However, a wide range of such protocols has been proposed. These protocols differ in their fault-tolerance and delivery-ordering characteristics. There is a tradeoff between the cost of a broadcast protocol and how much ordering it provides; it is therefore desirable to employ protocols that support only a low degree of ordering whenever possible. This dissertation presents techniques for deciding how strong an ordering a protocol must provide to solve a given application problem. It is shown that there are two distinct classes of application problems: problems that can be solved with efficient, asynchronous protocols, and problems that require global ordering. The concept of a linearization function, which maps partially ordered sets of events to totally ordered histories, is introduced. It is shown how to construct an asynchronous implementation that solves a given problem when a linearization function for it can be found. It is proved that, in general, the question of whether a problem has an asynchronous solution is undecidable; hence there exists no general algorithm that would automatically construct a suitable linearization function for a given problem. Therefore, an important subclass of problems that have certain commutativity properties is considered. Techniques for constructing asynchronous implementations for this class are presented. These techniques are useful for constructing efficient asynchronous implementations for a broad range of practical problems.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandes, P. A.; Lynch, K. A.

    Here, we define the observational parameter regime necessary for observing low-altitude ionospheric origins of high-latitude ion upflow/outflow. We present measurement challenges and identify a new analysis technique which mitigates these impediments. To probe the initiation of auroral ion upflow, it is necessary to examine the thermal ion population at 200-350 km, where typical thermal energies are tenths of eV. Interpretation of the thermal ion distribution function measurement requires removal of payload sheath and ram effects. We use a 3-D Maxwellian model to quantify how observed ionospheric parameters such as density, temperature, and flows affect in situ measurements of the thermal ion distribution function. We define the viable acceptance window of a typical top-hat electrostatic analyzer in this regime and show that the instrument's energy resolution prohibits it from directly observing the shape of the particle spectra. To extract detailed information about the measured particle population, we define two intermediate parameters from the measured distribution function, then use a Maxwellian model to replicate possible measured parameters for comparison to the data. Liouville's theorem and the thin-sheath approximation allow us to couple the measured and modeled intermediate parameters such that measurements inside the sheath provide information about plasma outside the sheath. We apply this technique to sounding rocket data to show that careful windowing of the data and Maxwellian models allows for extraction of the best choice of geophysical parameters. More widespread use of this analysis technique will help our community expand its observational database of the seed regions of ionospheric outflows.

  13. Time-resolved two-window measurement of Wigner functions for coherent backscatter from a turbid medium

    NASA Astrophysics Data System (ADS)

    Reil, Frank; Thomas, John E.

    2002-05-01

    For the first time we are able to observe the time-resolved Wigner function of enhanced backscatter from a random medium using a novel two-window technique. This technique enables us to directly verify the phase-conjugating properties of random media. An incident divergent beam displays a convergent enhanced backscatter cone. We measure the joint position and momentum (x, p) distributions of the light field as a function of propagation time in the medium. The two-window technique allows us to independently control the resolutions for position and momentum, thereby surpassing the uncertainty limit associated with Fourier transform pairs. By using a low-coherence light source in a heterodyne detection scheme, we observe enhanced backscattering resolved by path length in the random medium, providing information about the evolution of optical coherence as a function of penetration depth in the random medium.

  14. Modelling the bidirectional reflectance distribution function (BRDF) of seawater polluted by an oil film.

    PubMed

    Otremba, Zbigniew; Piskozub, Jacek

    2004-04-19

    The bidirectional reflectance distribution function (BRDF) of both clean seawater and seawater polluted with an oil film was determined using the Monte Carlo radiative transfer technique, in which the spectrum of the complex refractive index of Romashkino crude oil and the optical properties of case II water at chosen wavelengths were considered. The BRDF values were recorded for 1836 solid-angle sectors throughout the upper hemisphere. The visibility of areas polluted with oil, observed from various directions and at various wavelengths, is discussed.

  15. Inflation in random Gaussian landscapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu

    2017-05-01

    We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.
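
    One ingredient can be sketched directly: generate a one-dimensional random Gaussian landscape with a Gaussian correlation function by spectral filtering of white noise, then scan it for slow-roll regions; the correlation length, field range and slow-roll threshold below are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # One realization of a 1-D random Gaussian landscape with a Gaussian
    # correlation function, built by filtering white noise in Fourier space.
    n, L, corr_len = 4096, 100.0, 2.0
    k = np.fft.rfftfreq(n, d=L / n) * 2 * np.pi
    spectrum = np.exp(-0.25 * (k * corr_len) ** 2)   # sqrt of power spectrum
    noise = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    V = np.fft.irfft(spectrum * noise, n=n)
    V = 1.0 + 0.1 * V / V.std()                      # positive, small bumps

    phi = np.linspace(0, L, n)                       # field in reduced units
    dV = np.gradient(V, phi)
    epsilon = 0.5 * (dV / V) ** 2                    # slow-roll parameter
    print("fraction of field range with epsilon < 0.01:",
          np.mean(epsilon < 0.01).round(3))
    ```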

  16. Intrapixel measurement techniques on large focal plane arrays for astronomical applications: a comparative study

    NASA Astrophysics Data System (ADS)

    Ketchazo, C.; Viale, T.; Boulade, O.; de la Barrière, F.; Dubreuil, D.; Mugnier, L.; Moreau, V.; Guérineau, N.; Mulet, P.; Druart, G.; Delisle, C.

    2017-09-01

    The intrapixel response is the signal detected by a single pixel illuminated by a Dirac distribution as a function of the position of this Dirac inside this pixel. It is also known as the pixel response function (PRF). This function measures the sensitivity variation at the subpixel scale and gives a spatial map of the sensitivity across a pixel.

  17. Spatial Signal Characteristics of Shallow Paraboloidal Shell Structronic Systems

    NASA Astrophysics Data System (ADS)

    Yue, H. H.; Deng, Z. Q.; Tzou, H. S.

    Based on smart-material and structronics technology, distributed sensing and control of shell structures have developed rapidly over the last twenty years. This emerging technology has been utilized in aerospace, telecommunications, micro-electromechanical systems and other engineering applications. However, the distributed monitoring technique and the resulting global spatially distributed sensing signals of thin flexible membrane shells are not clearly understood. In this paper, the modeling of a free thin paraboloidal shell with spatially distributed sensors, the micro-sensing signal characteristics, and the locations of distributed piezoelectric sensor patches are investigated based on a new set of assumed mode shape functions. Parametric analysis indicates that signal generation depends on the modal membrane strains in the meridional and circumferential directions, of which the latter is more significant than the former when all bending strains vanish in membrane shells. This study provides a modeling and analysis technique for distributed sensors laminated on lightweight paraboloidal flexible structures and identifies the critical components and regions that generate significant signals.

  18. NanoDesign: Concepts and Software for a Nanotechnology Based on Functionalized Fullerenes

    NASA Technical Reports Server (NTRS)

    Globus, Al; Jaffe, Richard; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Eric Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. While attractive, diamondoid nanotechnology is not physically accessible with straightforward extensions of current laboratory techniques. We propose a nanotechnology based on functionalized fullerenes and investigate carbon-nanotube-based gears with teeth added via a benzyne reaction known to occur with C60. The gears are single-walled carbon nanotubes with appended coenzyme groups for teeth. Fullerenes are in widespread laboratory use and can be functionalized in many ways. Companion papers computationally demonstrate the properties of these gears (they appear to work) and the accessibility of the benzyne/nanotube reaction. This paper describes the molecular design techniques and rationale as well as the software that implements these design techniques. The software is a set of persistent C++ objects controlled by TCL command scripts. The C++/Tcl interface is automatically generated by a software system called tcl_c++, developed by the author and described here. The objects keep track of different portions of the molecular machinery so that different simulation techniques and boundary conditions can be applied as appropriate. This capability has been required to demonstrate (computationally) the gears' feasibility. A new distributed software architecture featuring a WWW universal client, CORBA distributed objects, and agent software is under consideration. The software architecture is intended to eventually enable a widely dispersed group to develop complex simulated molecular machines.

  19. Coherent optical determination of the leaf angle distribution of corn

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Pihlman, M.

    1981-01-01

    A coherent optical technique for the diffraction analysis of an image is presented. Developments in radar remote sensing show a need to understand plant geometry and its relationship to plant moisture, soil moisture, and the radar backscattering coefficient. A corn plant changes its leaf angle distribution, as a function of time, from a uniform distribution to one that is strongly vertical. It is shown that plant and soil moisture may have an effect on plant geometry.

  20. Advanced three-dimensional electron microscopy techniques in the quest for better structural and functional materials

    PubMed Central

    Schryvers, D; Cao, S; Tirry, W; Idrissi, H; Van Aert, S

    2013-01-01

    After a short review of electron tomography techniques for materials science, this overview will cover some recent results on different shape memory and nanostructured metallic systems obtained by various three-dimensional (3D) electron imaging techniques. In binary Ni–Ti, the 3D morphology and distribution of Ni4Ti3 precipitates are investigated by using FIB/SEM slice-and-view yielding 3D data stacks. Different quantification techniques will be presented including the principal ellipsoid for a given precipitate, shape classification following a Zingg scheme, particle distribution function, distance transform and water penetration. The latter is a novel approach to quantifying the expected matrix transformation in between the precipitates. The different samples investigated include a single crystal annealed with and without compression yielding layered and autocatalytic precipitation, respectively, and a polycrystal revealing different densities and sizes of the precipitates resulting in a multistage transformation process. Electron tomography was used to understand the interaction between focused ion beam-induced Frank loops and long dislocation structures in nanobeams of Al exhibiting special mechanical behaviour measured by on-chip deposition. Atomic resolution electron tomography is demonstrated on Ag nanoparticles in an Al matrix. PMID:27877554

  1. Predictive modelling of grain-size distributions from marine electromagnetic profiling data using end-member analysis and a radial basis function network

    NASA Astrophysics Data System (ADS)

    Baasch, B.; Müller, H.; von Dobeneck, T.

    2018-07-01

    In this work, we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The workflow presented here combines unsupervised and supervised machine-learning techniques. Non-negative matrix factorization is used to determine grain-size end-members from sediment surface samples. Four end-members were found that represent the variety of sediments in the study area well. A radial basis function network modified for the prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back-transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation, revealing high prediction accuracy with coefficients of determination (R2) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine-learning techniques is a powerful tool to estimate the grain-size distribution of marine sediments.
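
    A compact sketch of the two-stage workflow on synthetic data, with sklearn's NMF for the end-member step and scipy's RBFInterpolator standing in for the paper's radial basis function network; the minimum-spatial-variation constraint and compositional closure are omitted for brevity, and all data are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(7)

    # Surrogate data: 300 samples x 64 grain-size bins, plus two
    # electromagnetic properties per sample (all values hypothetical).
    n_samples, n_bins = 300, 64
    true_abund = rng.dirichlet(np.ones(4), size=n_samples)
    bins = np.linspace(0, 1, n_bins)
    members = np.stack([np.exp(-0.5 * ((bins - c) / 0.08) ** 2)
                        for c in (0.15, 0.4, 0.65, 0.9)])
    gsd = true_abund @ members
    em_props = true_abund @ rng.normal(size=(4, 2)) \
        + 0.01 * rng.normal(size=(n_samples, 2))

    # Step 1: unsupervised end-member extraction from the grain-size data.
    nmf = NMF(n_components=4, init="nndsvd", max_iter=1000)
    abund = nmf.fit_transform(gsd)
    end_members = nmf.components_

    # Step 2: supervised mapping from EM properties to abundances.
    model = RBFInterpolator(em_props[:250], abund[:250], smoothing=1e-3)
    pred = np.clip(model(em_props[250:]), 0, None)
    predicted_gsd = pred @ end_members
    print("held-out reconstruction RMS:",
          np.sqrt(np.mean((predicted_gsd - gsd[250:]) ** 2)).round(4))
    ```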

  2. Predictive modelling of grain size distributions from marine electromagnetic profiling data using end-member analysis and a radial basis function network

    NASA Astrophysics Data System (ADS)

    Baasch, B.; Müller, H.; von Dobeneck, T.

    2018-04-01

    In this work we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The workflow presented here combines unsupervised and supervised machine learning techniques. Nonnegative matrix factorisation is used to determine grain-size end-members from sediment surface samples. Four end-members were found that represent the variety of sediments in the study area well. A radial-basis function network modified for the prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back-transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation, revealing high prediction accuracy with coefficients of determination (R2) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine learning techniques is a powerful tool to estimate the grain-size distribution of marine sediments.

  3. A cross-correlation-based estimate of the galaxy luminosity function

    NASA Astrophysics Data System (ADS)

    van Daalen, Marcel P.; White, Martin

    2018-06-01

    We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.

  4. Quantifiable Assessment of SWNT Dispersion in Polymer Composites

    NASA Technical Reports Server (NTRS)

    Park, Cheol; Kim, Jae-Woo; Wise, Kristopher E.; Working, Dennis; Siochi, Mia; Harrison, Joycelyn; Gibbons, Luke; Siochi, Emilie J.; Lillehei, Peter T.; Cantrell, Sean

    2007-01-01

    NASA LaRC has established a new protocol for visualizing nanomaterials in structural polymer matrix resins. Using this new technique and reconstructing the 3D distribution of the nanomaterials allows us to compare this distribution against a theoretically perfect distribution. Additional tertiary structural information can now be obtained and quantified with electron tomography studies. These tools will be necessary to establish the structure-function relationships between the nanoscale and the bulk, and will also help define the critical length scales needed for functional properties. Field-ready tool development and calibration can begin by using these same samples and comparing the response against gold standards of good and bad dispersion.

  5. Principal Components Analysis on the spectral Bidirectional Reflectance Distribution Function of ceramic colour standards.

    PubMed

    Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A

    2011-09-26

    The Bidirectional Reflectance Distribution Function (BRDF) is essential to characterize an object's reflectance properties. This function depends both on the various illumination-observation geometries as well as on the wavelength. As a result, the comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariable analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each one of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America
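
    A minimal sketch of the PCA step on a surrogate spectral BRDF matrix built from two assumed reflection processes (a coloured body component and a neutral specular one); with real measurements the rows would be the illumination-observation geometries, as in the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(8)

    # Surrogate spectral BRDF: 200 geometries x 31 wavelengths, mixed from
    # two hypothetical reflection processes plus measurement noise.
    wl = np.linspace(400, 700, 31)
    body = np.exp(-0.5 * ((wl - 580) / 60.0) ** 2)   # coloured body component
    specular = np.ones_like(wl)                      # neutral gloss component
    g = rng.random((200, 2))                         # geometry-dependent weights
    brdf = g @ np.vstack([body, specular]) + 0.01 * rng.normal(size=(200, 31))

    pca = PCA(n_components=5)
    scores = pca.fit_transform(brdf)
    print("explained variance ratios:",
          np.round(pca.explained_variance_ratio_, 3))
    # Two dominant components are expected, one per reflection process.
    ```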

  6. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are: the fuzzy transformation via a ranking function, and the stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  7. Measurements of neutral and ion velocity distribution functions in a Hall thruster

    NASA Astrophysics Data System (ADS)

    Svarnas, Panagiotis; Romadanov, Ivan; Diallo, Ahmed; Raitses, Yevgeny

    2015-11-01

    The Hall thruster is a plasma device for space propulsion. It utilizes a cross-field discharge to generate a partially ionized, weakly collisional plasma with magnetized electrons and non-magnetized ions. The ions are accelerated by the electric field to produce thrust. There is a relatively large number of studies devoted to the characterization of the accelerated ions, including measurements of the ion velocity distribution function using the laser-induced fluorescence diagnostic. The interaction of these accelerated ions with neutral atoms in the thruster and the thruster plume is a subject of ongoing studies, which requires combined monitoring of ion and neutral velocity distributions. Herein, the laser-induced fluorescence technique has been employed to study the velocity distribution functions of neutrals and singly charged ions in a 200 W cylindrical Hall thruster operating with xenon propellant. An optical system installed in the vacuum chamber enables spatially resolved axial velocity measurements. The fluorescence signals are well separated from the plasma background emission by modulating the laser beam and using lock-in detectors. Measured velocity distribution functions of neutral atoms and ions at different operating parameters of the thruster are reported and analyzed. This work was supported by DOE contract DE-AC02-09CH11466.

  8. Infra-red technique for cerebral blood flow: comparison with 133Xe clearance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colacino, J.M.; Grubb, B.; Joebsis, F.F.

    A rapid infra-red optical technique has been developed for the measurement of cerebral blood flow. The method measures optical density changes across the intact skull during the passage of a bolus of the dye Cardio-Green (CG). The clearance curves obtained for CG boluses are very short (less than 30 sec) in comparison with those obtained with tracers such as 133Xe (10-30 min) that distribute into cerebral tissue. The volume of distribution of CG is totally intravascular, and the dye is relatively slowly cleared from the body. The important advantages of this spectrophotometric technique are its speed, versatility, and the avoidance of radioactive materials. The differential spectrophotometer used in this study, with trivial modifications, has been used to monitor changes in brain blood volume, oxygen saturation of hemoglobin, and cortical mitochondrial respiratory function, which illustrates the versatility of the technique for neurological assessments.

  9. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.

  10. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
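
    The recipe dependence is easy to reproduce with generalized least squares, taking the classic PPP inputs (measurements 1.5 and 1.0 with 10% statistical and 20% fully correlated normalization errors) as assumed numbers: building the common term from the measured values returns roughly 0.88, while building it from the fitted value itself lands near 1.15 ± 0.25, in the spirit of, though not identical to, the paper's Monte Carlo result of 1.1.

    ```python
    import numpy as np

    m = np.array([1.5, 1.0])       # the classic PPP pair of measurements
    stat = 0.10 * m                # 10% independent statistical errors
    norm = 0.20                    # 20% fully correlated normalization error

    def gls(m, cov):
        """Generalized least-squares average of measurements of one quantity."""
        w = np.linalg.solve(cov, np.ones_like(m))
        return w @ m / w.sum(), 1.0 / np.sqrt(w.sum())

    # (a) common error proportional to the *measured* values:
    cov_a = np.diag(stat**2) + norm**2 * np.outer(m, m)
    print("measured-value recipe:", np.round(gls(m, cov_a), 3))   # ~0.88

    # (b) common error built from the fitted value itself, iterated
    # until self-consistent (multiplicative-style recipe):
    mu = m.mean()
    for _ in range(50):
        cov_b = np.diag(stat**2) + norm**2 * mu**2 * np.ones((2, 2))
        mu, err = gls(m, cov_b)
    print("fitted-value recipe  :", np.round((mu, err), 3))       # ~1.15 +/- 0.25
    ```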

  11. Continuous Wave Ring-Down Spectroscopy Diagnostic for Measuring Argon Ion and Neutral Velocity Distribution Functions in a Helicon Plasma

    NASA Astrophysics Data System (ADS)

    McCarren, Dustin; Vandervort, Robert; Soderholm, Mark; Carr, Jerry, Jr.; Galante, Matthew; Magee, Richard; Scime, Earl

    2013-10-01

    Cavity ring-down spectroscopy (CRDS) is a proven, ultra-sensitive, cavity-enhanced absorption spectroscopy technique. When combined with a continuous-wave (CW) diode laser that has a sufficiently narrow line width, the Doppler-broadened absorption line, i.e., the velocity distribution function, can be measured. Measurements of ion velocity distribution functions (IVDFs) can be made using established techniques such as laser-induced fluorescence (LIF). However, LIF suffers from the requirement that the initial state of the LIF sequence have a substantial density. This usually limits LIF to ions and atoms with large metastable-state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower-density populations of ion and atom states. In this work we present ongoing measurements with the CW-CRDS diagnostic and discuss the technical challenges of using CW-CRDS to make measurements in a helicon plasma.

  12. Electron Energy Distribution function in a weakly magnetized expanding helicon plasma discharge

    NASA Astrophysics Data System (ADS)

    Sirse, Nishant; Harvey, Cleo; Gaman, Cezar; Ellingboe, Bert

    2016-09-01

    Helicon wave heating is well known to produce high-density plasma sources for applications in plasma thrusters, plasma processing and more. Our previous study (B. Ellingboe et al., APS Gaseous Electronics Conference 2015, abstract #KW2.005) reported the observation of a helicon wave in a weakly magnetized inductively coupled plasma source excited by an m = 0 antenna at 13.56 MHz. In this paper, we investigate the electron energy distribution function (EEDF) in the same setup using an RF-compensated Langmuir probe. The AC signal superimposition technique (second-harmonic technique) is used to determine the EEDF. The EEDF is measured for 5-100 mTorr gas pressure, 100 W - 1.5 kW RF power, and at different locations in the source chamber, at the boundary, and in the diffusion chamber. This paper discusses the change in the shape of the EEDF across the various heating-mode transitions.

  13. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    NASA Astrophysics Data System (ADS)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
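
    A self-contained toy of the Hybrid (Hamiltonian) Monte Carlo ingredients on a two-parameter Gaussian chi-squared surface standing in for a global PDF fit; the leapfrog integrator and Metropolis accept/reject step are generic HMC, and the step size, trajectory length and target are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Toy "posterior": a correlated Gaussian chi^2 surface in two parameters.
    prec = np.array([[2.0, 1.2], [1.2, 1.0]])   # inverse covariance (assumed)

    def neg_log_post(theta):
        return 0.5 * theta @ prec @ theta

    def grad_neg_log_post(theta):
        return prec @ theta

    def hmc_step(theta, eps=0.15, n_leap=20):
        p = rng.normal(size=theta.size)                  # momentum refresh
        H0 = neg_log_post(theta) + 0.5 * p @ p
        th = theta.copy()
        p = p - 0.5 * eps * grad_neg_log_post(th)        # initial half kick
        for _ in range(n_leap):                          # leapfrog trajectory
            th = th + eps * p
            p = p - eps * grad_neg_log_post(th)
        p = p + 0.5 * eps * grad_neg_log_post(th)        # undo extra half kick
        H1 = neg_log_post(th) + 0.5 * p @ p
        return th if np.log(rng.random()) < H0 - H1 else theta

    theta, samples = np.zeros(2), []
    for _ in range(5000):
        theta = hmc_step(theta)
        samples.append(theta)
    samples = np.array(samples[1000:])                   # discard burn-in
    print("sample covariance:\n", np.round(np.cov(samples.T), 3))
    print("target covariance:\n", np.round(np.linalg.inv(prec), 3))
    ```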

  14. Development of a multispectral autoradiography using a coded aperture

    NASA Astrophysics Data System (ADS)

    Noto, Daisuke; Takeda, Tohoru; Wu, Jin; Lwin, Thet T.; Yu, Quanwen; Zeniya, Tsutomu; Yuasa, Tetsuya; Hiranaka, Yukio; Itai, Yuji; Akatsuka, Takao

    2000-11-01

    Autoradiography is a useful imaging technique for understanding biological functions using tracers that include radioisotopes (RIs). However, it is not easy to describe the distributions of different kinds of tracers simultaneously with conventional autoradiography using X-ray film or an imaging plate. Each tracer describes a corresponding biological function. Therefore, if we can simultaneously estimate the distributions of different kinds of tracer materials, multispectral autoradiography would be a powerful tool for better understanding the physiological mechanisms of organs. We are therefore developing a system using a solid-state detector (SSD) with high energy resolution. Here, we introduce an imaging technique with a coded aperture to obtain spatial and spectral information more efficiently. In this paper, the imaging principle is described, and its validity and fundamental properties are discussed through both simulations and phantom experiments with RIs such as 201Tl, 99mTc, 67Ga, and 123I.

  15. Evaluation of substitution monopole models for tire noise sound synthesis

    NASA Astrophysics Data System (ADS)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.

  16. Estimating crustal thickness and Vp/Vs ratio with joint constraints of receiver function and gravity data

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai

    2018-05-01

    The technique of teleseismic receiver-function H-κ stacking is popular for estimating crustal thickness and the Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in the receiver function are difficult to identify. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio under the joint constraints of receiver-function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to the relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies using a common likelihood-estimation algorithm to obtain the crustal thickness and Vp/Vs ratio, and then use them to constrain the receiver-function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters; the results demonstrate that the new technique can reduce the ambiguity and enhance the accuracy of the estimation. A real-data test at two stations on the NE margin of the Tibetan Plateau illustrates that the improved technique provides reliable estimates of crustal thickness and Vp/Vs ratio.
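
    The underlying H-κ grid search (without the gravity constraint) can be sketched as below, using the standard moveout formulas for the Ps, PpPs and PpSs+PsPs phases; the synthetic receiver function, phase weights and search grids are hypothetical.

    ```python
    import numpy as np

    def hk_stack(rf, t, p, vp=6.3, weights=(0.6, 0.3, 0.1),
                 H_grid=np.linspace(20, 60, 161),
                 k_grid=np.linspace(1.6, 1.9, 61)):
        """Grid search over crustal thickness H (km) and kappa = Vp/Vs.
        rf: receiver-function amplitudes, t: time (s), p: ray parameter (s/km).
        Stacks amplitudes at the predicted Ps, PpPs and PpSs+PsPs times."""
        s = np.zeros((H_grid.size, k_grid.size))
        qp = np.sqrt(1.0 / vp**2 - p**2)
        for i, H in enumerate(H_grid):
            for j, k in enumerate(k_grid):
                qs = np.sqrt((k / vp) ** 2 - p**2)
                times = [H * (qs - qp), H * (qs + qp), 2.0 * H * qs]
                a = np.interp(times, t, rf)
                s[i, j] = weights[0] * a[0] + weights[1] * a[1] - weights[2] * a[2]
        i, j = np.unravel_index(np.argmax(s), s.shape)
        return H_grid[i], k_grid[j], s

    # Synthetic receiver function with phases for H = 40 km, kappa = 1.75.
    t, p, H0, k0, vp = np.linspace(0, 30, 1500), 0.06, 40.0, 1.75, 6.3
    qp = np.sqrt(1 / vp**2 - p**2)
    qs = np.sqrt((k0 / vp) ** 2 - p**2)
    rf = np.zeros_like(t)
    for tt, amp in ((H0 * (qs - qp), 0.5), (H0 * (qs + qp), 0.2),
                    (2 * H0 * qs, -0.15)):
        rf += amp * np.exp(-0.5 * ((t - tt) / 0.3) ** 2)

    H_est, k_est, _ = hk_stack(rf, t, p)
    print(f"estimated H = {H_est:.1f} km, Vp/Vs = {k_est:.3f}")
    ```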

  17. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is a key component of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated efficiently from these labels. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and the hair color likelihood. This energy function is then optimized with the graph cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system, and experiments show that with the proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182

  18. Estimating neighborhood variability with a binary comparison matrix.

    USGS Publications Warehouse

    Murphy, D.L.

    1985-01-01

    A technique which utilizes a binary comparison matrix has been developed to implement a neighborhood function for a raster-format data base. The technique assigns an index value to the center pixel of 3- by 3-pixel neighborhoods. The binary comparison matrix provides additional information not found in two other neighborhood variability statistics: the function is sensitive both to the number of classes within the neighborhood and to the frequency of pixel occurrence in each of the classes. Application of the function to a spatial data base from the Kenai National Wildlife Refuge, Alaska, demonstrates 1) the numerical distribution of the index values, and 2) the spatial patterns exhibited by the numerical values.
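
    The abstract does not spell out the index formula, so the following is a minimal sketch of one natural reading, not the published definition: for every 3- by 3-pixel window, the nine class labels are compared pairwise (the binary comparison matrix), and the center pixel receives the fraction of unequal pairs, a quantity sensitive both to the number of classes present and to their frequencies. The data below are invented for illustration.

      import numpy as np

      def neighborhood_variability(classes):
          """Variability index for each interior pixel of a class raster:
          fraction of unequal label pairs in the 3x3 binary comparison matrix."""
          rows, cols = classes.shape
          out = np.zeros((rows, cols))
          iu = np.triu_indices(9, 1)               # the 36 distinct pixel pairs
          for r in range(1, rows - 1):
              for c in range(1, cols - 1):
                  win = classes[r-1:r+2, c-1:c+2].ravel()
                  neq = win[:, None] != win[None, :]   # binary comparison matrix
                  out[r, c] = neq[iu].mean()
          return out

      raster = np.random.default_rng(1).integers(0, 4, size=(8, 8))
      print(np.round(neighborhood_variability(raster), 2))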

  19. Methods and apparatus for analysis of chromatographic migration patterns

    DOEpatents

    Stockham, T.G.; Ives, J.T.

    1993-12-28

    A method and apparatus are presented for sharpening signal peaks in a signal representing the distribution of biological or chemical components of a mixture separated by a chromatographic technique such as, but not limited to, electrophoresis. A key step in the method is the use of a blind deconvolution technique, presently embodied as homomorphic filtering, to reduce the contribution of a blurring function to the signal encoding the peaks of the distribution. The invention further includes steps and apparatus directed to determination of a nucleotide sequence from a set of four such signals representing DNA sequence data derived by electrophoretic means. 16 figures.

  20. Spectral analysis of pair-correlation bandwidth: application to cell biology images.

    PubMed

    Binder, Benjamin J; Simpson, Matthew J

    2015-02-01

    Images from cell biology experiments often indicate the presence of cell clustering, which can provide insight into the mechanisms driving the collective cell behaviour. Pair-correlation functions provide quantitative information about the presence, or absence, of clustering in a spatial distribution of cells. This is because the pair-correlation function describes the ratio of the abundance of pairs of cells, separated by a particular distance, relative to a randomly distributed reference population. Pair-correlation functions are often presented as a kernel density estimate where the frequency of pairs of objects is grouped using a particular bandwidth (or bin width), Δ>0. The choice of bandwidth has a dramatic impact: choosing Δ too large produces a pair-correlation function that contains insufficient information, whereas choosing Δ too small produces a pair-correlation signal dominated by fluctuations. Presently, there is little guidance available regarding how to make an objective choice of Δ. We present a new technique to choose Δ by analysing the power spectrum of the discrete Fourier transform of the pair-correlation function. Using synthetic simulation data, we confirm that our approach allows us to objectively choose Δ such that the appropriately binned pair-correlation function captures known features in uniform and clustered synthetic images. We also apply our technique to images from two different cell biology assays. The first assay corresponds to an approximately uniform distribution of cells, while the second assay involves a time series of images of a cell population which forms aggregates over time. The appropriately binned pair-correlation function allows us to make quantitative inferences about the average aggregate size, as well as quantifying how the average aggregate size changes with time.
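
    A minimal sketch of the objects involved, under assumed 2-D geometry and illustrative names (the authors' actual selection criterion for Δ is in the paper): bin the pairwise distances at bandwidth Δ, normalize against a Poisson reference so a random pattern gives values near 1, and examine the power spectrum of the fluctuating part of the signal.

      import numpy as np

      def pair_correlation(points, delta, r_max, area):
          """Pair-correlation function of 2-D points binned at bandwidth delta,
          normalized so a Poisson (random) pattern gives values near 1."""
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          d = d[np.triu_indices(len(points), 1)]   # distinct pair distances
          edges = np.arange(0.0, r_max + delta, delta)
          counts, _ = np.histogram(d, bins=edges)
          shell = np.pi * (edges[1:]**2 - edges[:-1]**2)
          expected = 0.5 * len(points)**2 / area * shell   # Poisson reference
          return edges[:-1] + 0.5 * delta, counts / expected

      def power_spectrum(g):
          """Power spectrum of the fluctuating part of the pair correlation."""
          return np.abs(np.fft.rfft(g - g.mean()))**2

      pts = np.random.default_rng(9).uniform(0.0, 10.0, size=(200, 2))
      r, g = pair_correlation(pts, delta=0.25, r_max=5.0, area=100.0)
      print(np.round(power_spectrum(g)[:5], 2))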

  1. Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2017-03-01

    Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence in hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed from a bivariate joint distribution between the observations and the simulations in the historical period; the transfer function is then used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e. the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the ensemble generated by the copula-based technique is compared to the observations and raw forecasts for three sub-basins located in the Western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, COP-EPP demonstrates considerable improvement in both deterministic and probabilistic verification, in particular in characterizing extreme events in wet seasons.

  2. Electrostatic analyzer measurements of ionospheric thermal ion populations

    DOE PAGES

    Fernandes, P. A.; Lynch, K. A.

    2016-07-09

    Here, we define the observational parameter regime necessary for observing low-altitude ionospheric origins of high-latitude ion upflow/outflow. We present measurement challenges and identify a new analysis technique which mitigates these impediments. To probe the initiation of auroral ion upflow, it is necessary to examine the thermal ion population at 200-350 km, where typical thermal energies are tenths of eV. Interpretation of the thermal ion distribution function measurement requires removal of payload sheath and ram effects. We use a 3-D Maxwellian model to quantify how observed ionospheric parameters such as density, temperature, and flows affect in situ measurements of the thermal ion distribution function. We define the viable acceptance window of a typical top-hat electrostatic analyzer in this regime and show that the instrument's energy resolution prohibits it from directly observing the shape of the particle spectra. To extract detailed information about measured particle population, we define two intermediate parameters from the measured distribution function, then use a Maxwellian model to replicate possible measured parameters for comparison to the data. Liouville's theorem and the thin-sheath approximation allow us to couple the measured and modeled intermediate parameters such that measurements inside the sheath provide information about plasma outside the sheath. We apply this technique to sounding rocket data to show that careful windowing of the data and Maxwellian models allows for extraction of the best choice of geophysical parameters. More widespread use of this analysis technique will help our community expand its observational database of the seed regions of ionospheric outflows.

  3. Improving Project Management with Simulation and Completion Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cates, Grant R.

    2004-01-01

    Despite the critical importance of project completion timeliness, management practices in place today remain inadequate for addressing the persistent problem of project completion tardiness. A major culprit in late projects is uncertainty, which most, if not all, projects are inherently subject to. This uncertainty resides in the estimates for activity durations, the occurrence of unplanned and unforeseen events, and the availability of critical resources. In response to this problem, this research developed a comprehensive simulation based methodology for conducting quantitative project completion time risk analysis. It is called the Project Assessment by Simulation Technique (PAST). This new tool enables project stakeholders to visualize uncertainty or risk, i.e. the likelihood of their project completing late and the magnitude of the lateness, by providing them with a completion time distribution function of their projects. Discrete event simulation is used within PAST to determine the completion distribution function for the project of interest. The simulation is populated with both deterministic and stochastic elements. The deterministic inputs include planned project activities, precedence requirements, and resource requirements. The stochastic inputs include activity duration growth distributions, probabilities for events that can impact the project, and other dynamic constraints that may be placed upon project activities and milestones. These stochastic inputs are based upon past data from similar projects. The time for an entity to complete the simulation network, subject to both the deterministic and stochastic factors, represents the time to complete the project. Repeating the simulation hundreds or thousands of times allows one to create the project completion distribution function. The Project Assessment by Simulation Technique was demonstrated to be effective for the on-going NASA project to assemble the International Space Station. Approximately $500 million per month is being spent on this project, which is scheduled to complete by 2010. NASA project stakeholders participated in determining and managing completion distribution functions produced from PAST. The first result was that project stakeholders improved project completion risk awareness. Secondly, using PAST, mitigation options were analyzed to improve project completion performance and reduce total project cost.
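
    A minimal Monte Carlo sketch of the core idea behind completion distribution functions: sample stochastic activity durations on a precedence network, take a forward pass to get one completion time, and repeat to build the distribution. The activity network and triangular duration estimates below are hypothetical stand-ins for PAST's richer discrete event inputs.

      import numpy as np

      # Activities with precedence constraints and triangular duration estimates
      # (min, mode, max) in days; dict insertion order is topological here.
      activities = {
          "design":    ((10, 14, 25), []),
          "fabricate": ((20, 30, 55), ["design"]),
          "software":  ((15, 20, 40), ["design"]),
          "integrate": (( 5,  8, 20), ["fabricate", "software"]),
          "test":      (( 7, 10, 30), ["integrate"]),
      }

      def sample_completion(rng):
          finish = {}
          for name, (tri, preds) in activities.items():
              start = max((finish[p] for p in preds), default=0.0)
              finish[name] = start + rng.triangular(*tri)
          return max(finish.values())

      rng = np.random.default_rng(42)
      runs = np.array([sample_completion(rng) for _ in range(10_000)])
      for q in (0.5, 0.8, 0.95):                   # completion distribution function
          print(f"{q:.0%} chance of completion by day {np.quantile(runs, q):.0f}")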

  4. Conductance fluctuation of edge-disordered graphene nanoribbons: Crossover from diffusive transport to Anderson localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takashima, Kengo; Yamamoto, Takahiro, E-mail: takahiro@rs.tus.ac.jp; Department of Liberal Arts

    Conductance fluctuation of edge-disordered graphene nanoribbons (ED-GNRs) is examined using the non-equilibrium Green's function technique combined with the extended Hückel approximation. The mean free path λ and the localization length ξ of the ED-GNRs are determined to classify the quantum transport regimes. In the diffusive regime where the length L{sub c} of the ED-GNRs is much longer than λ and much shorter than ξ, the conductance histogram is given by a Gaussian distribution function with universal conductance fluctuation. In the localization regime where L{sub c}≫ξ, the histogram is no longer the universal Gaussian distribution but a lognormal distribution that characterizesmore » Anderson localization.« less

  5. Isothermal enthalpy relaxation of glassy 1,2,6-hexanetriol

    NASA Astrophysics Data System (ADS)

    Fransson, Å.; Bäckström, G.

    The isothermal enthalpy relaxation of glassy 1,2,6-hexanetriol has been measured at six temperatures. The relaxation time and the distribution parameters extracted from fits of the Williams-Watts relaxation function are compared with parameters obtained by other techniques and on other substances. A detailed comparison of the Williams-Watts and the Davidson-Cole relaxation functions is presented.
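
    The Williams-Watts relaxation function is the stretched exponential φ(t) = exp[-(t/τ)^β], whose exponent β encodes the width of the underlying distribution of relaxation times. A minimal fitting sketch on synthetic data (all values illustrative, not the paper's measurements):

      import numpy as np
      from scipy.optimize import curve_fit

      def williams_watts(t, tau, beta):
          """Stretched-exponential (Williams-Watts) relaxation function."""
          return np.exp(-(t / tau) ** beta)

      # Synthetic relaxation data with a little measurement noise.
      t = np.logspace(-1, 3, 60)
      data = (williams_watts(t, tau=50.0, beta=0.6)
              + 0.01 * np.random.default_rng(3).normal(size=t.size))

      (tau, beta), _ = curve_fit(williams_watts, t, data, p0=(10.0, 0.5),
                                 bounds=([1e-3, 0.05], [1e4, 1.0]))
      print(f"tau = {tau:.1f}, beta = {beta:.2f}")  # distribution narrows as beta -> 1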

  6. Quantum field theory in the presence of a medium: Green's function expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kheirandish, Fardin; Salimi, Shahriar

    2011-12-15

    Starting from a Lagrangian and using functional-integration techniques, series expansions of Green's function of a real scalar field and electromagnetic field, in the presence of a medium, are obtained. The parameter of expansion in these series is the susceptibility function of the medium. Relativistic and nonrelativistic Langevin-type equations are derived. Series expansions for Lifshitz energy in finite temperature and for an arbitrary matter distribution are derived. Covariant formulations for both scalar and electromagnetic fields are introduced. Two illustrative examples are given.

  7. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Hogg, David W.; Roweis, Sam T.

    2011-06-01

    We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
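
    A 1-D sketch following the structure of the extreme-deconvolution EM updates of Bovy, Hogg & Roweis, simplified to scalar data (variable names and test values are illustrative): each datum x_i carries its own noise variance S_i, the E-step responsibilities use the convolved variances V_k + S_i, and the M-step averages the per-datum posterior means and variances of the underlying values.

      import numpy as np

      def xd_em_1d(x, S, K=2, n_iter=200, seed=0):
          """1-D extreme-deconvolution EM: fit a K-Gaussian underlying mixture
          to data x whose points carry individual noise variances S."""
          rng = np.random.default_rng(seed)
          a = np.full(K, 1.0 / K)                  # mixture amplitudes
          m = rng.choice(x, K, replace=False)      # component means
          V = np.full(K, x.var())                  # underlying variances
          for _ in range(n_iter):
              T = V[None, :] + S[:, None]          # convolved variances (N, K)
              logq = np.log(a) - 0.5 * (np.log(T) + (x[:, None] - m)**2 / T)
              q = np.exp(logq - logq.max(axis=1, keepdims=True))
              q /= q.sum(axis=1, keepdims=True)    # E-step responsibilities
              b = m + V * (x[:, None] - m) / T     # posterior means of true values
              B = V - V**2 / T                     # posterior variances
              Nk = q.sum(axis=0)
              a = Nk / len(x)
              m = (q * b).sum(axis=0) / Nk
              V = (q * (B + (b - m)**2)).sum(axis=0) / Nk
          return a, m, V

      rng = np.random.default_rng(4)
      true_v = np.where(rng.random(2000) < 0.5,
                        rng.normal(-2.0, 0.5, 2000), rng.normal(2.0, 0.5, 2000))
      S = rng.uniform(0.5, 2.0, 2000)              # known per-point noise variances
      x = true_v + rng.normal(0.0, np.sqrt(S))
      print(xd_em_1d(x, S))                        # means near -2, 2; variances near 0.25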

  8. Dynamic Task Assignment of Autonomous Distributed AGV in an Intelligent FMS Environment

    NASA Astrophysics Data System (ADS)

    Fauadi, Muhammad Hafidz Fazli Bin Md; Lin, Hao Wen; Murata, Tomohiro

    The need to implement distributed systems is growing significantly, as they have proven effective in keeping organizations flexible in a highly demanding market. Nevertheless, large technical gaps still need to be addressed before significant achievements can be made. We propose a distributed architecture to control Automated Guided Vehicle (AGV) operation based on a multi-agent architecture. System architectures and agents' functions have been designed to support distributed control of AGVs. Furthermore, an enhanced agent communication protocol has been configured to accommodate the dynamic attributes of the AGV task assignment procedure. Results show that the technique successfully provides a better solution.

  9. Unveiling saturation effects from nuclear structure function measurements at the EIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquet, Cyrille; Moldes, Manoel R.; Zurita, Pia

    Here, we analyze the possibility of extracting a clear signal of non-linear parton saturation effects from future measurements of nuclear structure functions at the Electron–Ion Collider (EIC), in the small-x region. Our approach consists in generating pseudodata for electron-gold collisions, using the running-coupling Balitsky–Kovchegov evolution equation, and in assessing the compatibility of these saturated pseudodata with existing sets of nuclear parton distribution functions (nPDFs), extrapolated if necessary. The level of disagreement between the two is quantified by applying a Bayesian reweighting technique. This allows one to infer the parton distributions needed in order to describe the pseudodata, which we find quite different from the actual distributions, especially for sea quarks and gluons. This tension suggests that, should saturation effects impact the future nuclear structure function data as predicted, a successful refitting of the nPDFs may not be achievable, which would unambiguously signal the presence of non-linear effects.

  10. Unveiling saturation effects from nuclear structure function measurements at the EIC

    DOE PAGES

    Marquet, Cyrille; Moldes, Manoel R.; Zurita, Pia

    2017-07-21

    Here, we analyze the possibility of extracting a clear signal of non-linear parton saturation effects from future measurements of nuclear structure functions at the Electron–Ion Collider (EIC), in the small-x region. Our approach consists in generating pseudodata for electron-gold collisions, using the running-coupling Balitsky–Kovchegov evolution equation, and in assessing the compatibility of these saturated pseudodata with existing sets of nuclear parton distribution functions (nPDFs), extrapolated if necessary. The level of disagreement between the two is quantified by applying a Bayesian reweighting technique. This allows one to infer the parton distributions needed in order to describe the pseudodata, which we find quite different from the actual distributions, especially for sea quarks and gluons. This tension suggests that, should saturation effects impact the future nuclear structure function data as predicted, a successful refitting of the nPDFs may not be achievable, which would unambiguously signal the presence of non-linear effects.

  11. M13 Bacteriophage-Based Self-Assembly Structures and Their Functional Capabilities.

    PubMed

    Moon, Jong-Sik; Kim, Won-Geun; Kim, Chuntae; Park, Geun-Tae; Heo, Jeong; Yoo, So Y; Oh, Jin-Woo

    2015-06-01

    Controlling the assembly of basic structural building blocks in a systematic and orderly fashion is an emerging issue in various areas of science and engineering such as physics, chemistry, material science, biological engineering, and electrical engineering. The self-assembly technique, among many other kinds of ordering techniques, has several unique advantages and the M13 bacteriophage can be utilized as part of this technique. The M13 bacteriophage (Phage) can easily be modified genetically and chemically to demonstrate specific functions. This allows for its use as a template to determine the homogeneous distribution and percolated network structures of inorganic nanostructures under ambient conditions. Inexpensive and environmentally friendly synthesis can be achieved by using the M13 bacteriophage as a novel functional building block. Here, we discuss recent advances in the application of M13 bacteriophage self-assembly structures and the future of this technology.

  12. M13 Bacteriophage-Based Self-Assembly Structures and Their Functional Capabilities

    PubMed Central

    Moon, Jong-Sik; Kim, Won-Geun; Kim, Chuntae; Park, Geun-Tae; Heo, Jeong; Yoo, So Y; Oh, Jin-Woo

    2015-01-01

    Controlling the assembly of basic structural building blocks in a systematic and orderly fashion is an emerging issue in various areas of science and engineering such as physics, chemistry, material science, biological engineering, and electrical engineering. The self-assembly technique, among many other kinds of ordering techniques, has several unique advantages and the M13 bacteriophage can be utilized as part of this technique. The M13 bacteriophage (Phage) can easily be modified genetically and chemically to demonstrate specific functions. This allows for its use as a template to determine the homogeneous distribution and percolated network structures of inorganic nanostructures under ambient conditions. Inexpensive and environmentally friendly synthesis can be achieved by using the M13 bacteriophage as a novel functional building block. Here, we discuss recent advances in the application of M13 bacteriophage self-assembly structures and the future of this technology. PMID:26146494

  13. Discrete geometric analysis of message passing algorithm on graphs

    NASA Astrophysics Data System (ADS)

    Watanabe, Yusuke

    2010-04-01

    We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required; however, exact computation is generally intractable because of its cost. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. has no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and can exhibit oscillatory and non-convergent behavior. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for the analysis of the LBP algorithm, connecting properties of the graph with properties of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems, including the (non)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate theoretical properties of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
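
    For concreteness, a minimal sum-product LBP sketch on a pairwise binary model, run on a 3-cycle (the smallest loopy case); the fixed points of exactly these message updates are what the zeta-function formula characterizes. The factors, damping option, and graph are invented for illustration.

      import numpy as np

      def loopy_bp(psi_node, psi_edge, edges, n_iter=100, damping=0.0):
          """Sum-product loopy belief propagation on a pairwise model.

          psi_node : (n, k) non-negative node factors
          psi_edge : dict {(i, j): (k, k) array} with pot[x_i, x_j], i < j
          edges    : list of undirected edges (i, j) with i < j
          """
          n, k = psi_node.shape
          msgs = {(i, j): np.ones(k) / k
                  for a, b in edges for i, j in ((a, b), (b, a))}
          for _ in range(n_iter):
              new = {}
              for (i, j), old in msgs.items():
                  h = psi_node[i].copy()        # incoming product, excluding j
                  for (s, t) in msgs:
                      if t == i and s != j:
                          h = h * msgs[(s, t)]
                  pot = psi_edge[(i, j)] if (i, j) in psi_edge else psi_edge[(j, i)].T
                  m = h @ pot                   # marginalize over x_i
                  m = m / m.sum()
                  new[(i, j)] = damping * old + (1.0 - damping) * m
              msgs = new
          beliefs = psi_node.copy()             # approximate marginals
          for (s, t), m in msgs.items():
              beliefs[t] = beliefs[t] * m
          return beliefs / beliefs.sum(axis=1, keepdims=True)

      # Smallest loopy example: a 3-cycle with one biased node.
      node = np.array([[2.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
      edge = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in [(0, 1), (1, 2), (0, 2)]}
      print(loopy_bp(node, edge, [(0, 1), (1, 2), (0, 2)]))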

  14. Orbit Tomography: A Method for Determining the Population of Individual Fast-ion Orbits from Experimental Measurements

    NASA Astrophysics Data System (ADS)

    Stagner, L.; Heidbrink, W. W.

    2017-10-01

    Due to the complicated nature of the fast-ion distribution function, diagnostic velocity-space weight functions are used to analyze experimental data. In a technique known as Velocity-space Tomography (VST), velocity-space weight functions are combined with experimental measurements to create a system of linear equations that can be solved. However, VST (which by definition ignores spatial dependencies) is restricted, both by the accuracy of its forward model and also by the availability of spatially overlapping diagnostics. In this work we extend velocity-space weight functions to a full 6D generalized coordinate system and then show how to reduce them to a 3D orbit-space without loss of generality using an action-angle formulation. Furthermore, we show how diagnostic orbit-weight functions can be used to infer the full fast-ion distribution function, i.e. Orbit Tomography. Examples of orbit weight functions for different diagnostics and reconstructions of fast-ion distributions are shown for DIII-D experiments. This work was supported by the U.S. Department of Energy under DE-AC02-09CH11466 and DE-FC02-04ER54698.

  15. Temperature field determination in slabs, circular plates and spheres with saw tooth heat generating sources

    NASA Astrophysics Data System (ADS)

    Diestra Cruz, Heberth Alexander

    The Green's function integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw-tooth heat-generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw-tooth heat generation source is modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows one to obtain the temperature distribution in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere the temperature distribution is three-dimensional, while for the semi-infinite solid, infinite quadrant, and circular plate it is two-dimensional. The method used in this work is superior to other methods in that it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than fully numerical methods.

  16. Measuring phonon mean free path distributions by probing quasiballistic phonon transport in grating nanostructures

    DOE PAGES

    Zeng, Lingping; Collins, Kimberlee C.; Hu, Yongjie; ...

    2015-11-27

    Heat conduction in semiconductors and dielectrics depends upon their phonon mean free paths, which describe the average travelling distance between two consecutive phonon scattering events. Nondiffusive phonon transport is being exploited to extract phonon mean free path distributions. Here, we describe an implementation of a nanoscale thermal conductivity spectroscopy technique that allows for the study of mean free path distributions in optically absorbing materials with relatively simple fabrication and a straightforward analysis scheme. We pattern 1D metallic gratings of various line widths but fixed gap size on sample surfaces. The metal lines serve as both heaters and thermometers in time-domain thermoreflectance measurements and simultaneously act as wire-grid polarizers that protect the underlying substrate from direct optical excitation and heating. We demonstrate the viability of this technique by studying length-dependent thermal conductivities of silicon at various temperatures. The thermal conductivities measured with different metal line widths are analyzed using suppression functions calculated from the Boltzmann transport equation to extract the phonon mean free path distributions, with no calibration required. This table-top ultrafast thermal transport spectroscopy technique enables the study of mean free path spectra in a wide range of technologically important materials.

  17. Algebraic grid generation with corner singularities

    NASA Technical Reports Server (NTRS)

    Vinokur, M.; Lombard, C. K.

    1983-01-01

    A simple noniterative algebraic procedure is presented for generating smooth computational meshes on a quadrilateral topology. Coordinate distribution and normal derivative are provided on all boundaries, one of which may include a slope discontinuity. The boundary conditions are sufficient to guarantee continuity of global meshes formed of joined patches generated by the procedure. The method extends to 3-D. The procedure involves a synthesis of prior techniques (stretching functions, cubic blending functions, and transfinite interpolation), to which is added the functional form of the corner solution. The procedure introduces the concept of generalized blending, which is implemented as an automatic scaling of the boundary derivatives for effective interpolation. Some implications of the treatment at boundaries for techniques solving elliptic PDE's are discussed in an Appendix.

  18. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock price in finance.

  19. Estimates of the information content and dimensionality of natural scenes from proximity distributions

    NASA Astrophysics Data System (ADS)

    Chandler, Damon M.; Field, David J.

    2007-04-01

    Natural scenes, like almost all natural data sets, show considerable redundancy. Although many forms of redundancy have been investigated (e.g., pixel distributions, power spectra, contour relationships, etc.), estimates of the true entropy of natural scenes have been largely considered intractable. We describe a technique for estimating the entropy and relative dimensionality of image patches based on a function we call the proximity distribution (a nearest-neighbor technique). The advantage of this function over simple statistics such as the power spectrum is that the proximity distribution is dependent on all forms of redundancy. We demonstrate that this function can be used to estimate the entropy (redundancy) of 3×3 patches of known entropy as well as 8×8 patches of Gaussian white noise, natural scenes, and noise with the same power spectrum as natural scenes. The techniques are based on assumptions regarding the intrinsic dimensionality of the data, and although the estimates depend on an extrapolation model for images larger than 3×3, we argue that this approach provides the best current estimates of the entropy and compressibility of natural-scene patches and that it provides insights into the efficiency of any coding strategy that aims to reduce redundancy. We show that the sample of 8×8 patches of natural scenes used in this study has less than half the entropy of 8×8 white noise and less than 60% of the entropy of noise with the same power spectrum. In addition, given a finite number of samples (<2^20) drawn randomly from the space of 8×8 patches, the subspace of 8×8 natural-scene patches shows a dimensionality that depends on the sampling density and that for low densities is significantly lower dimensional than the space of 8×8 patches of white noise and noise with the same power spectrum.

  20. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, S; Wang, Y; Lue, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount using the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique that accurately estimates the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively, and the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SFs. This method can avoid the bias caused by insufficient tail information and therefore improves the accuracy of scatter estimation.

  1. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To model thermal loading of a half-space surface more realistically, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are implemented numerically to obtain the solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the temperature, displacement, and stress distributions.
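
    A sketch of the Fourier-expansion inversion step, using a Durbin / Honig-Hirdes-type formula, f(t) ≈ (e^{at}/T) [F(a)/2 + Σ_k Re(F(a + ikπ/T) e^{ikπt/T})]; the shift a, window T, and term count N below are illustrative choices rather than the paper's settings, and the test transform is a textbook pair for checking.

      import numpy as np

      def invert_laplace(F, t, a_T=8.0, N=5000):
          """Fourier-series (Durbin / Honig-Hirdes type) Laplace inversion.

          F : callable returning the transform at a complex point s
          t : array of times; the expansion window is 2*T with T = 2*max(t)
          """
          t = np.asarray(t, dtype=float)
          T = 2.0 * t.max()
          a = a_T / (2.0 * T)                      # damping shift parameter
          k = np.arange(1, N + 1)
          Fk = np.array([F(a + 1j * kk * np.pi / T) for kk in k])
          phase = np.outer(t, k) * np.pi / T
          series = (0.5 * np.real(F(a))
                    + (np.real(Fk) * np.cos(phase)
                       - np.imag(Fk) * np.sin(phase)).sum(axis=1))
          return np.exp(a * t) / T * series

      # Check against a known pair: L{sin t} = 1/(s^2 + 1).
      t = np.linspace(0.1, 5.0, 50)
      f = invert_laplace(lambda s: 1.0 / (s**2 + 1.0), t)
      print("max error:", np.abs(f - np.sin(t)).max())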

  2. Software For Integer Programming

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1992-01-01

    The Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes an objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. It enables rapid solution of problems up to 10 variables in size. Integer programming is required for accuracy in modeling systems containing a small number of components, and applies to the distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.

  3. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDF of the compound states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
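
    The higher moments in question are cheap to compute; a minimal sketch on a synthetic two-state record whose compound states are normally distributed, matching the model described in the abstract (all amplitudes, widths, and probabilities are made-up values):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      # Two-state channel record: each state's current is normally distributed.
      is_open = rng.random(20_000) < 0.3            # open with probability 0.3
      current = np.where(is_open, rng.normal(5.0, 0.8, 20_000),
                                  rng.normal(0.0, 0.3, 20_000))

      print("variance :", current.var())
      print("skewness :", stats.skew(current))      # 3rd standardized moment
      print("kurtosis :", stats.kurtosis(current))  # excess; 0 for a Gaussian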

  4. Distributed Adaptive Fuzzy Control for Nonlinear Multiagent Systems Via Sliding Mode Observers.

    PubMed

    Shen, Qikun; Shi, Peng; Shi, Yan

    2016-12-01

    In this paper, the problem of distributed adaptive fuzzy control is investigated for high-order uncertain nonlinear multiagent systems on a directed graph with a fixed topology. It is assumed that only the outputs of each follower and its neighbors are available in the design of its distributed controllers. Equivalent output injection sliding mode observers are proposed for each follower to estimate the states of itself and its neighbors, and an observer-based distributed adaptive controller is designed for each follower to guarantee that it asymptotically synchronizes to a leader with tracking errors being semi-globally uniformly ultimately bounded, in which fuzzy logic systems are utilized to approximate unknown functions. Based on algebraic graph theory and the Lyapunov function approach, using the Filippov framework, the closed-loop system stability analysis is conducted. Finally, numerical simulations are provided to illustrate the effectiveness and potential of the developed design techniques.

  5. Qualitative fusion technique based on information poor system and its application to factor analysis for vibration of rolling bearings

    NASA Astrophysics Data System (ADS)

    Xia, Xintao; Wang, Zhongyu

    2008-10-01

    For methods of system stability analysis based on statistics, it is difficult to resolve the problems of unknown probability distributions and small samples. A novel method is therefore proposed in this paper to resolve these problems. The method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the upper bound of the system using fuzzy-set theory. The empirical distribution function is then investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear and periodic systematic errors, and some mixed systems. The proposed method of stability analysis is thereby validated.

  6. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
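
    A simulation sketch of the same functionals for Brownian motion with dry friction, dv = -μ sign(v) dt + √(2D) dW: the occupation time and displacement whose distributions the paper obtains analytically via the backward Fokker-Planck technique. Parameters, step sizes, and path counts are illustrative.

      import numpy as np

      def dry_friction_paths(mu=1.0, D=1.0, T=10.0, dt=1e-2, n_paths=5000, seed=5):
          """Euler-Maruyama paths of dv = -mu*sign(v) dt + sqrt(2D) dW."""
          rng = np.random.default_rng(seed)
          v = np.zeros(n_paths)
          occupation = np.zeros(n_paths)       # time spent with v > 0
          displacement = np.zeros(n_paths)     # integral of v dt
          for _ in range(int(T / dt)):
              occupation += (v > 0) * dt
              displacement += v * dt
              v += -mu * np.sign(v) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_paths)
          return occupation, displacement

      occ, disp = dry_friction_paths()
      print("mean occupation fraction:", (occ / 10.0).mean())   # ~1/2 by symmetry
      print("displacement spread:", disp.std())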

  7. Positron Emission Tomography: Human Brain Function and Biochemistry.

    ERIC Educational Resources Information Center

    Phelps, Michael E.; Mazziotta, John C.

    1985-01-01

    Describes the method, present status, and application of positron emission tomography (PET), an analytical imaging technique for "in vivo" measurements of the anatomical distribution and rates of specific biochemical reactions. Measurements and images of dynamic biochemistry link the basic and clinical neurosciences with clinical findings…

  8. Combined PDF and Rietveld studies of ADORable zeolites and the disordered intermediate IPC-1P.

    PubMed

    Morris, Samuel A; Wheatley, Paul S; Položij, Miroslav; Nachtigall, Petr; Eliášová, Pavla; Čejka, Jiří; Lucas, Tim C; Hriljac, Joseph A; Pinar, Ana B; Morris, Russell E

    2016-09-28

    The disordered intermediate of the ADORable zeolite UTL has been structurally confirmed using the pair distribution function (PDF) technique. The intermediate, IPC-1P, is a disordered layered compound formed by the hydrolysis of UTL in 0.1 M hydrochloric acid solution. Its structure is unsolvable by traditional X-ray diffraction techniques. The PDF technique was first benchmarked against high-quality synchrotron Rietveld refinements of IPC-2 (OKO) and IPC-4 (PCR), two end products of IPC-1P condensation that share very similar structural features. An IPC-1P starting model derived from density functional theory was used for the PDF refinement, which yielded a final fit of Rw = 18% and a geometrically reasonable structure. This confirms that the layers stay intact throughout the ADOR process and shows that PDF is a viable technique for layered zeolite structure determination.

  9. A kernel function method for computing steady and oscillatory supersonic aerodynamics with interference.

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1973-01-01

    The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions is developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2-power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.

  10. Apodization of two-dimensional pupils with aberrations

    NASA Astrophysics Data System (ADS)

    Reddy, Andra Naresh Kumar; Hashemi, Mahdieh; Khonina, Svetlana Nikolaevna

    2018-06-01

    A technique is proposed to enhance the resolution of the point spread function (PSF) of an optical system under defocusing and spherical aberrations. The approach is based on amplitude and phase masking in a ring aperture to modify the light intensity distribution in the Gaussian focal plane (YD = 0) and in the defocused planes (YD = π and YD = 2π). The width of the annulus modifies the distribution of light intensity in the side lobes of the resultant PSF. In the presence of an asymmetry in the phase of the annulus, the Hanning amplitude apodizer [cos(πβρ)] employed in the pupil function can modify the spatial distribution of light in the maximally defocused plane (YD = 2π), resulting in a PSF with improved resolution.

  11. Examination of the low-energy enhancement of the γ -ray strength function of Fe 56

    DOE PAGES

    Jones, M. D.; Macchiavelli, A. O.; Wiedeking, M.; ...

    2018-02-22

    A model-independent technique was used to determine the γ-ray strength function (γSF) of 56Fe down to γ-ray energies less than 1 MeV for the first time with GRETINA using the (p,p') reaction at 16 MeV. No difference was observed in the energy dependence of the γSF built on 2+ and 4+ final states, supporting the Brink hypothesis. In addition, angular distribution and polarization measurements were performed. The angular distributions are consistent with dipole radiation. The polarization results show a small bias towards magnetic character in the region of the enhancement.

  12. Examination of the low-energy enhancement of the γ -ray strength function of 56Fe

    NASA Astrophysics Data System (ADS)

    Jones, M. D.; Macchiavelli, A. O.; Wiedeking, M.; Bernstein, L. A.; Crawford, H. L.; Campbell, C. M.; Clark, R. M.; Cromaz, M.; Fallon, P.; Lee, I. Y.; Salathe, M.; Wiens, A.; Ayangeakaa, A. D.; Bleuel, D. L.; Bottoni, S.; Carpenter, M. P.; Davids, H. M.; Elson, J.; Görgen, A.; Guttormsen, M.; Janssens, R. V. F.; Kinnison, J. E.; Kirsch, L.; Larsen, A. C.; Lauritsen, T.; Reviol, W.; Sarantites, D. G.; Siem, S.; Voinov, A. V.; Zhu, S.

    2018-02-01

    A model-independent technique was used to determine the γ -ray strength function (γ SF ) of 56Fe down to γ -ray energies less than 1 MeV for the first time with GRETINA using the (p ,p') reaction at 16 MeV. No difference was observed in the energy dependence of the γ SF built on 2+ and 4+ final states, supporting the Brink hypothesis. In addition, angular distribution and polarization measurements were performed. The angular distributions are consistent with dipole radiation. The polarization results show a small bias towards magnetic character in the region of the enhancement.

  13. Examination of the low-energy enhancement of the γ -ray strength function of Fe 56

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, M. D.; Macchiavelli, A. O.; Wiedeking, M.

    A model-independent technique was used to determine the γ-ray strength function (γSF) of 56Fe down to γ-ray energies less than 1 MeV for the first time with GRETINA using the (p,p') reaction at 16 MeV. No difference was observed in the energy dependence of the γSF built on 2+ and 4+ final states, supporting the Brink hypothesis. In addition, angular distribution and polarization measurements were performed. The angular distributions are consistent with dipole radiation. The polarization results show a small bias towards magnetic character in the region of the enhancement.

  14. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  15. Petri net controllers for distributed robotic systems

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, George N.

    1992-01-01

    Petri nets are a well-established modelling technique for analyzing parallel systems. When coupled with an event-driven operating system, Petri nets can provide an effective means for integrating and controlling the functions of distributed robotic applications. Recent work has shown that Petri net graphs can also serve as remarkably intuitive operator interfaces. In this paper, the advantages of using Petri nets as high-level controllers to coordinate robotic functions are outlined, the considerations for designing Petri net controllers are discussed, and simple Petri net structures for implementing an interface for operator supervision are presented. A detailed example illustrating these concepts for a sensor-based assembly application is presented.

  16. The transmission of low frequency medical data using delta modulation techniques.

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Dawson, C. T.

    1972-01-01

    The transmission of low-frequency medical data using delta modulation techniques is described. The delta modulators are used to distribute the low-frequency data into the passband of the telephone lines. Both adaptive and linear delta modulators are considered. Optimum bit rates to minimize distortion and intersymbol interference are discussed. Vibrocardiographic waves are analyzed as a function of bit rate and delta modulator configuration to determine their reproducibility for medical evaluation.
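
    A minimal linear delta modulation sketch: a 1-bit encoder whose staircase approximation chases the input, and a decoder that integrates the bit stream. Adaptive variants would scale the step when successive bits repeat. The sample rate, step size, and test signal below are illustrative, not the paper's parameters.

      import numpy as np

      def delta_modulate(x, step):
          """Linear delta modulation: emit +/-1 as a staircase chases the input."""
          bits = np.empty(len(x), dtype=int)
          approx = 0.0
          for i, sample in enumerate(x):
              bits[i] = 1 if sample >= approx else -1
              approx += step * bits[i]
          return bits

      def delta_demodulate(bits, step):
          return step * np.cumsum(bits)            # the decoder is an integrator

      fs, f = 8000, 30                # avoid slope overload: step*fs > 2*pi*f*A
      t = np.arange(0.0, 0.1, 1.0 / fs)
      x = np.sin(2 * np.pi * f * t)   # stand-in for a low-frequency physiological wave
      bits = delta_modulate(x, step=0.05)
      err = x - delta_demodulate(bits, step=0.05)
      print("rms error:", np.sqrt((err**2).mean()))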

  17. Feynman graphs and the large dimensional limit of multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Di Martino, Sara; Facchi, Paolo; Florio, Giuseppe

    2018-01-01

    In this paper, we extend the analysis of multipartite entanglement, based on techniques from classical statistical mechanics, to a system composed of n d-level parties (qudits). We introduce a suitable partition function at a fictitious temperature with the average local purity of the system as Hamiltonian. In particular, we analyze the high-temperature expansion of this partition function, prove the convergence of the series, and study its asymptotic behavior as d → ∞. We make use of a diagrammatic technique, classify the graphs, and study their degeneracy. We are thus able to evaluate their contributions and estimate the moments of the distribution of the local purity.

  18. A technique for plasma velocity-space cross-correlation

    NASA Astrophysics Data System (ADS)

    Mattingly, Sean; Skiff, Fred

    2018-05-01

    An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
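
    A sketch of the matrix construction and decomposition described here, with synthetic data standing in for the measured distribution-function fluctuations (channel count, mode shapes, and noise level are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(11)
      n_v, n_t = 16, 4096
      v = np.linspace(-3.0, 3.0, n_v)              # velocity channels

      # Synthetic fluctuations: two coherent modes on the distribution
      # function plus channel noise, standing in for the measured series.
      modes = np.vstack([np.exp(-v**2) * np.sin(2.0 * v),
                         np.exp(-v**2) * np.cos(v)])
      data = modes.T @ rng.normal(size=(2, n_t)) + 0.2 * rng.normal(size=(n_v, n_t))

      C = data @ data.conj().T / n_t               # Hermitian C(v, v')
      U, s, _ = np.linalg.svd(C)                   # empirical decomposition
      print("leading singular values:", np.round(s[:4], 3))
      # Columns U[:, 0] and U[:, 1] recover the coherent mode structures
      # (up to sign and mixing between modes of comparable power).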

  19. Mesoscopic Fluorescence Molecular Tomography for Evaluating Engineered Tissues.

    PubMed

    Ozturk, Mehmet S; Chen, Chao-Wei; Ji, Robin; Zhao, Lingling; Nguyen, Bao-Ngoc B; Fisher, John P; Chen, Yu; Intes, Xavier

    2016-03-01

    Optimization of regenerative medicine strategies includes the design of biomaterials, development of cell-seeding methods, and control of cell-biomaterial interactions within the engineered tissues. Among these steps, one paramount challenge is to non-destructively image the engineered tissues in their entirety to assess structure, function, and molecular expression. It is especially important to be able to enable cell phenotyping and monitor the distribution and migration of cells throughout the bulk scaffold. Advanced fluorescence microscopic techniques are commonly employed to perform such tasks; however, they are limited to superficial examination of tissue constructs. Therefore, the field of tissue engineering and regenerative medicine would greatly benefit from the development of molecular imaging techniques which are capable of non-destructive imaging of three-dimensional cellular distribution and maturation within a tissue-engineered scaffold beyond the limited depth of current microscopic techniques. In this review, we focus on an emerging depth-resolved optical mesoscopic imaging technique, termed laminar optical tomography (LOT) or mesoscopic fluorescence molecular tomography (MFMT), which enables longitudinal imaging of cellular distribution in thick tissue engineering constructs at depths of a few millimeters and with relatively high resolution. The physical principle, image formation, and instrumentation of LOT/MFMT systems are introduced. Representative applications in tissue engineering include imaging the distribution of human mesenchymal stem cells embedded in hydrogels, imaging of bio-printed tissues, and in vivo applications.

  20. A new approach to simulating collisionless dark matter fluids

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom; Kaehler, Ralf

    2013-09-01

    Recently, we have shown how current cosmological N-body codes already follow the fine grained phase-space information of the dark matter fluid. Using a tetrahedral tessellation of the three-dimensional manifold that describes perfectly cold fluids in six-dimensional phase space, the phase-space distribution function can be followed throughout the simulation. This allows one to project the distribution function into configuration space to obtain highly accurate densities, velocities and velocity dispersions. Here, we exploit this technique to show first steps on how to devise an improved particle-mesh technique. At its heart, the new method thus relies on a piecewise linear approximation of the phase-space distribution function rather than the usual particle discretization. We use pseudo-particles that approximate the masses of the tetrahedral cells up to quadrupolar order as the locations for cloud-in-cell (CIC) deposit instead of the particle locations themselves as in standard CIC deposit. We demonstrate that this modification already gives much improved stability and more accurate dynamics of the collisionless dark matter fluid at high force and low mass resolution. We demonstrate the validity and advantages of this method with various test problems as well as hot/warm dark matter simulations which have been known to exhibit artificial fragmentation. This completely unphysical behaviour is much reduced in the new approach. The current limitations of our approach are discussed in detail and future improvements are outlined.
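
    The deposit kernel itself is standard cloud-in-cell; what the new method changes is what is deposited (tetrahedron-based pseudo-particles rather than the raw simulation particles). For reference, a minimal 1-D periodic CIC sketch, with grid size and positions chosen only for illustration:

      import numpy as np

      def cic_deposit(positions, masses, n_cells, box):
          """1-D periodic cloud-in-cell deposit: each (pseudo-)particle's mass
          is shared between its two nearest cells with linear weights."""
          rho = np.zeros(n_cells)
          dx = box / n_cells
          xi = positions / dx - 0.5                # offset to cell-center grid
          left = np.floor(xi).astype(int)
          w = xi - left                            # weight of the right-hand cell
          np.add.at(rho, left % n_cells, masses * (1.0 - w))
          np.add.at(rho, (left + 1) % n_cells, masses * w)
          return rho / dx                          # convert mass to density

      print(cic_deposit(np.array([0.49, 1.51, 3.0]), np.ones(3), 8, 8.0))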

  1. Development of a calculation method for estimating specific energy distribution in complex radiation fields.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Niita, Koji

    2006-01-01

    Estimation of the specific energy distribution in a human body exposed to complex radiation fields is of great importance in the planning of long-term space missions and heavy ion cancer therapies. With the aim of developing a tool for this estimation, the specific energy distributions in liquid water around the tracks of several HZE particles with energies up to 100 GeV n^-1 were calculated by performing track structure simulation with the Monte Carlo technique. In the simulation, the targets were assumed to be spherical sites with diameters from 1 nm to 1 μm. An analytical function to reproduce the simulation results was developed in order to predict the distributions of all kinds of heavy ions over a wide energy range. The incorporation of this function into the Particle and Heavy Ion Transport code System (PHITS) enables us to calculate the specific energy distributions in complex radiation fields in a short computational time.

  2. Distributed adaptive asymptotically consensus tracking control of uncertain Euler-Lagrange systems under directed graph condition.

    PubMed

    Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin

    2017-11-01

    In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved.

  3. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
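
    The following sketch illustrates the core idea on a toy linear model: residuals are cumulated over a covariate and the observed sup-statistic is compared with realizations obtained by perturbing the residuals with standard normal multipliers. This is a simplified stand-in for the paper's zero-mean Gaussian process approximation (in particular, it ignores the correction for estimated regression coefficients); all names are ours.

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated data: the working model is linear in x, the truth is
      # quadratic, so the cumulative residual process should look atypical.
      n = 200
      x = rng.uniform(0, 2, n)
      y = 1.0 + 0.5 * x**2 + rng.normal(0, 0.3, n)

      X = np.column_stack([np.ones(n), x])          # working model: linear
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      resid = y - X @ beta

      order = np.argsort(x)                         # cumulate residuals over x
      W_obs = np.cumsum(resid[order]) / np.sqrt(n)
      sup_obs = np.max(np.abs(W_obs))

      # Null reference: perturb residuals with standard normal multipliers
      # (a simplified surrogate for the simulated Gaussian process).
      sups = []
      for _ in range(1000):
          g = rng.normal(size=n)
          sups.append(np.max(np.abs(np.cumsum((resid * g)[order]) / np.sqrt(n))))
      p_value = np.mean(np.array(sups) >= sup_obs)
      print(f"sup|W| = {sup_obs:.3f}, p = {p_value:.3f}")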

  4. Devices development and techniques research for space life sciences

    NASA Astrophysics Data System (ADS)

    Zhang, A.; Liu, B.; Zheng, C.

    The development process and current status of devices and techniques for space life science in China, and the main research results in this field achieved by the Shanghai Institute of Technical Physics (SITP), CAS, are reviewed concisely in this paper. On the basis of an analysis of the requirements of devices and techniques for supporting space life science experiments and research, a design idea is put forward: develop different intelligent modules with professional functions and standard interfaces that are easy to integrate into a system. The realization of an experiment system with intelligent distributed control based on a field bus is then discussed in three hierarchies. Typical sensing or control function cells with certain autonomous control, data management and communication abilities, called Intelligent Agents, are designed and developed. A digital hardware network system consisting of the distributed Agents as intelligent nodes is constructed with normative, open field-bus technology. Multitask, real-time control application software is developed in an embedded RTOS environment and implanted into the system hardware, so that a space life science experiment system platform characterized by multiple tasks, multiple experiment processes, professional function and instant integration can be constructed.

  5. Computation of parton distributions from the quasi-PDF approach at the physical point

    NASA Astrophysics Data System (ADS)

    Alexandrou, Constantia; Bacchio, Simone; Cichy, Krzysztof; Constantinou, Martha; Hadjiyiannakou, Kyriakos; Jansen, Karl; Koutsou, Giannis; Scapellato, Aurora; Steffens, Fernanda

    2018-03-01

    We show the first results for parton distribution functions within the proton at the physical pion mass, employing the method of quasi-distributions. In particular, we present the matrix elements for the iso-vector combination of the unpolarized, helicity and transversity quasi-distributions, obtained with Nf = 2 twisted mass clover-improved fermions and a proton boosted with momentum |p→| = 0.83 GeV. The momentum smearing technique has been applied to improve the overlap with the boosted proton state. Moreover, we present the renormalized helicity matrix elements in the RI' scheme, following the non-perturbative renormalization prescription recently developed by our group.

  6. Distribution of late gadolinium enhancement in various types of cardiomyopathies: Significance in differential diagnosis, clinical features and prognosis.

    PubMed

    Satoh, Hiroshi; Sano, Makoto; Suwa, Kenichiro; Saitoh, Takeji; Nobuhara, Mamoru; Saotome, Masao; Urushida, Tsuyoshi; Katoh, Hideki; Hayashi, Hideharu

    2014-07-26

    The recent development of cardiac magnetic resonance (CMR) techniques has allowed detailed analyses of cardiac function and tissue characterization with high spatial resolution. We review characteristic CMR features in ischemic and non-ischemic cardiomyopathies (ICM and NICM), especially in terms of the location and distribution of late gadolinium enhancement (LGE). CMR in ICM shows segmental wall motion abnormalities or wall thinning in a particular coronary arterial territory, with subendocardial or transmural LGE. LGE in NICM generally does not correspond to any particular coronary artery distribution and is located mostly in the mid-wall to subepicardial layer. The analysis of LGE distribution is valuable for differentiating between NICM with diffusely impaired systolic function (including dilated cardiomyopathy, end-stage hypertrophic cardiomyopathy (HCM), cardiac sarcoidosis, and myocarditis) and NICM with diffuse left ventricular (LV) hypertrophy (including HCM, cardiac amyloidosis and Anderson-Fabry disease). A transient low-signal-intensity LGE in regions of severe LV dysfunction is a particular feature of stress cardiomyopathy. In arrhythmogenic right ventricular cardiomyopathy/dysplasia, enhancement of the right ventricular (RV) wall with functional and morphological changes of the RV becomes apparent. Finally, analyses of LGE distribution have the potential to predict cardiac outcomes and responses to treatments.

  7. Classification of probability densities on the basis of Pearson's curves with application to coronal heating simulations

    NASA Astrophysics Data System (ADS)

    Podladchikova, O.; Lefebvre, B.; Krasnoselskikh, V.; Podladchikov, V.

    An important task for the problem of coronal heating is to produce reliable evaluations of the statistical properties of energy release and eruptive events such as micro- and nanoflares in the solar corona. Different types of distributions for the peak flux, peak count rate measurements, pixel intensities, total energy flux, emission measure increases or waiting times have appeared in the literature. This raises the question of a precise evaluation and classification of such distributions. For this purpose, we use the method proposed by K. Pearson at the beginning of the last century, based on the relationship between the first four moments of the distribution. Pearson's technique encompasses and classifies a broad range of distributions, including some of those which have appeared in the literature on coronal heating. This technique is successfully applied to simulated data from the model of Krasnoselskikh et al. (2002). It provides successful fits to the empirical distributions of the dissipated energy, and classifies them as a function of model parameters such as dissipation mechanisms and threshold.
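
    As an illustration of the moment-based classification, the sketch below computes the commonly stated Pearson criterion kappa = beta1(beta2+3)^2 / [4(4 beta2 - 3 beta1)(2 beta2 - 3 beta1 - 6)] from the first four sample moments and assigns one of the main Pearson types; the tolerance and the coarse decision boundaries are our simplifications, not the authors' implementation.

      import numpy as np

      def pearson_moments(sample):
          # First four moments -> beta1 (squared skewness) and beta2 (kurtosis)
          m = np.mean(sample)
          mu2, mu3, mu4 = (np.mean((sample - m) ** k) for k in (2, 3, 4))
          beta1 = mu3**2 / mu2**3
          beta2 = mu4 / mu2**2
          kappa = (beta1 * (beta2 + 3) ** 2
                   / (4 * (4 * beta2 - 3 * beta1) * (2 * beta2 - 3 * beta1 - 6)))
          return beta1, beta2, kappa

      def pearson_type(beta1, beta2, kappa, tol=0.05):
          # Coarse assignment to the main Pearson types only
          if abs(beta1) < tol and abs(beta2 - 3) < tol:
              return "normal"
          if kappa < 0:
              return "Type I (beta-like)"
          if kappa < 1 - tol:
              return "Type IV"
          if kappa <= 1 + tol:
              return "Type V"
          return "Type VI"

      rng = np.random.default_rng(1)
      b1, b2, k = pearson_moments(rng.lognormal(0.0, 0.5, size=50000))
      print(b1, b2, k, pearson_type(b1, b2, k))  # lognormal sample: Type VI region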

  8. The multicategory case of the sequential Bayesian pixel selection and estimation procedure

    NASA Technical Reports Server (NTRS)

    Pore, M. D.; Dennis, T. B. (Principal Investigator)

    1980-01-01

    A Bayesian technique for stratified proportion estimation and a sampling procedure based on minimizing the mean squared error of this estimator were developed and tested on LANDSAT multispectral scanner data, using the beta density function to model the prior distribution in the two-class case. An extension of this procedure to the k-class case is considered. A generalization of the beta function is shown to be a density function for the general case, which allows the procedure to be extended.
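
    In the two-class case the conjugate update is elementary; the sketch below uses a Dirichlet prior, which reduces to the beta density for k = 2 and plays the role of the generalized beta function in the k-class extension. The counts and prior parameters are hypothetical.

      import numpy as np

      def posterior_proportions(counts, prior):
          # Posterior mean of class proportions under a Dirichlet prior;
          # for k = 2 the Dirichlet reduces to the beta density.
          counts = np.asarray(counts, dtype=float)
          prior = np.asarray(prior, dtype=float)
          post = counts + prior            # Dirichlet is conjugate to counts
          return post / post.sum()

      # Two classes, Beta(2, 2) prior, 30 of 40 sampled pixels in class 1
      print(posterior_proportions([30, 10], [2, 2]))   # -> [0.727, 0.273]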

  9. Application of Model Based Parameter Estimation for Fast Frequency Response Calculations of Input Characteristics of Cavity-Backed Aperture Antennas Using Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

    Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity and cavity-backed microstrip patch antennas are presented. Good agreement is observed between MBPE and solutions computed individually at each frequency.
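
    A minimal sketch of the rational-function idea: the response is modelled as a ratio of two polynomials and the coefficients are found by linearizing y·Q(f) = P(f) into one least-squares problem. Note that this point-matched variant fits frequency samples, whereas the paper derives the coefficients from frequency derivatives of the FEM/MoM equation; all symbols here are ours.

      import numpy as np

      def fit_rational(f, y, n_num, n_den):
          # Least-squares fit of y(f) ~ P(f)/Q(f) with Q(0) = 1:
          # linearise y*Q = P, unknowns are numerator and denominator
          # coefficients stacked into one linear system.
          A = np.hstack([
              np.vander(f, n_num + 1, increasing=True),           # P columns
              -y[:, None] * np.vander(f, n_den + 1, increasing=True)[:, 1:],
          ])
          coef = np.linalg.lstsq(A, y, rcond=None)[0]
          p = coef[: n_num + 1]
          q = np.concatenate([[1.0], coef[n_num + 1:]])
          return p, q

      f = np.linspace(0.5, 3.0, 40)
      y = (1 + 0.3 * f) / (1 - 0.4 * f + 0.9 * f**2)   # synthetic response
      p, q = fit_rational(f, y, 1, 2)
      print(np.round(p, 3), np.round(q, 3))            # ~[1, 0.3], [1, -0.4, 0.9]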

  10. A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.

    PubMed

    Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W

    2005-01-01

    We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.

  11. Investigating effects of communications modulation technique on targeting performance

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Eusebio, Gerald; Huling, Edward

    2006-05-01

    One of the key challenges facing the global war on terrorism (GWOT) and urban operations is the increased need for rapid and diverse information from distributed sources. For users to get adequate information on target types and movements, they need reliable data. In order to facilitate reliable computational intelligence, we explore the communication modulation tradeoffs affecting information distribution and accumulation. In this analysis, we explore the modulation techniques of Orthogonal Frequency Division Multiplexing (OFDM), Direct Sequence Spread Spectrum (DSSS), and statistical time-division multiple access (TDMA) as a function of the bit error rate and jitter that affect targeting performance. In the analysis, we simulate a Link 16 waveform with a simple bandpass frequency-shift keying (FSK) technique at different signal-to-noise ratios. The communications transfer delay and accuracy tradeoffs are assessed for their effects on targeting performance.
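
    For reference, the theoretical bit-error-rate curves that drive this kind of tradeoff can be computed directly from the Gaussian tail function; the sketch below evaluates coherent BPSK and coherent orthogonal binary FSK over an AWGN channel (textbook formulas, not the paper's Link 16 simulation).

      import math

      def q_func(x):
          # Gaussian tail probability Q(x)
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      for ebn0_db in (0, 3, 6, 9, 12):
          ebn0 = 10.0 ** (ebn0_db / 10.0)
          ber_bpsk = q_func(math.sqrt(2.0 * ebn0))  # coherent BPSK
          ber_fsk = q_func(math.sqrt(ebn0))         # coherent orthogonal binary FSK
          print(f"{ebn0_db:2d} dB  BPSK {ber_bpsk:.2e}  FSK {ber_fsk:.2e}")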

  12. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical model was known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the type of particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible range with the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased the selectivity of the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  13. Mass Spectrometric and Synchrotron Radiation based techniques for the identification and distribution of painting materials in samples from paintings by Josep Maria Sert

    PubMed Central

    2012-01-01

    Background: Establishing the distribution of materials in paintings, and that of their degradation products, by imaging techniques is fundamental to understanding the painting technique and can improve our knowledge of the conservation status of the painting. The combined use of chromatographic-mass spectrometric techniques, such as GC/MS or Py/GC/MS, with the chemical mapping of functional groups by imaging SR FTIR in transmission mode on thin sections and SR XRD line scans is presented as a suitable approach to obtain a detailed characterisation of the materials in a paint sample while assuring their localisation in the sample build-up. This analytical approach has been used to study samples from Catalan paintings by Josep Maria Sert y Badía (20th century), a muralist of international recognition whose canvases adorned buildings worldwide. Results: The pigments used by the painter, as well as the organic materials used as binders and varnishes, could be identified by means of conventional techniques. Mapping the distribution of these materials by means of Synchrotron Radiation based techniques made it possible to establish the mixtures used by the painter depending on the purpose. Conclusions: The results show the suitability of the combined use of SR μFTIR and SR μXRD mapping and conventional techniques to unequivocally identify all the materials present in the sample and their localisation in the sample build-up. This kind of approach becomes indispensable for the challenge posed by micro-heterogeneous samples. The complementary interpretation of the data obtained with all the different techniques allowed the characterisation of both organic and inorganic materials in the samples layer by layer, as well as the identification of the painting techniques used by Sert in the works of art under study. PMID:22616949

  14. TU-CD-BRA-11: Application of Bone Suppression Technique to Inspiratory/expiratory Chest Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, R; Sanada, S; Sakuta, K

    Purpose: The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images normally obtained by the dual-energy subtraction technique. This study was performed to investigate the usefulness of the bone suppression technique in quantitative analysis of pulmonary function in inspiratory/expiratory chest radiography. Methods: Commercial bone suppression image processing software (ClearRead; Riverain Technologies) was applied to paired inspiratory/expiratory chest radiographs of 107 patients (normal, 33; abnormal, 74) to create corresponding bone suppression images. The abnormal subjects had been diagnosed with pulmonary diseases such as pneumothorax, pneumonia, emphysema, asthma, and lung cancer. After recognition of the lung area, the vectors of respiratory displacement were measured in all local lung areas using a cross-correlation technique. The measured displacement in each area was visualized as a displacement color map. The distribution pattern of respiratory displacement was assessed by comparison with the findings of lung scintigraphy. Results: Respiratory displacement of pulmonary markings (soft tissues) could be quantified separately from the rib movements on bone suppression images. The resulting displacement map showed a left-right symmetric distribution increasing from the lung apex to the bottom region of the lung in many cases. However, patients with ventilatory impairments showed a nonuniform distribution caused by decreased displacement of pulmonary markings, which was confirmed to correspond to areas with ventilatory impairments found on the lung scintigrams. Conclusion: The bone suppression technique was useful for quantitative analysis of respiratory displacement of pulmonary markings without any interruption by the rib shadows. Abnormal areas could be detected as decreased displacement of pulmonary markings. Inspiratory/expiratory chest radiography combined with the bone suppression technique has potential for predicting local lung function on the basis of dynamic analysis of pulmonary markings. This work was partially supported by the Nakatani Foundation, a Grant-in-Aid for Scientific Research (C) of the Ministry of Education, Culture, Sports, Science and Technology, Japan (grant number 24601007), the Mitsubishi Foundation, and the Mitani Foundation for Research and Development. Yasushi Kishitani is a staff member of TOYO Corporation.

  15. Surface plasmon resonance sensing: from purified biomolecules to intact cells.

    PubMed

    Su, Yu-Wen; Wang, Wei

    2018-04-12

    Surface plasmon resonance (SPR) has become a well-recognized label-free technique for measuring the binding kinetics between biomolecules since the invention of the first SPR-based immunosensor in the 1980s. The most popular and traditional format for SPR analysis is to monitor the real-time optical signals while a solution containing ligand molecules flows over a sensor substrate functionalized with purified receptor molecules. In recent years, the rapid development of several kinds of SPR imaging techniques has allowed mapping of the dynamic distribution of local mass density within single living cells with high spatial and temporal resolution and reliable sensitivity. This capability immediately enabled investigation of the interaction between important biomolecules and intact cells in a label-free, quantitative, single-cell manner, leading to an exciting new trend of cell-based SPR bioanalysis. In this Trend Article, we first describe the principle and technical features of two types of SPR imaging techniques, based on a prism and an objective, respectively. Then we survey intact-cell-based applications in both fundamental cell biology and drug discovery. We conclude the article with comments and perspectives on future developments. Graphical abstract: Recent developments in surface plasmon resonance (SPR) imaging techniques allow label-free mapping of the mass distribution within single living cells, greatly expanding biomolecular interaction studies from homogeneous substrates functionalized with purified biomolecules to heterogeneous substrates containing individual living cells.

  16. Non-invasive regime for language lateralization in right- and left-handers by means of functional MRI and dichotic listening.

    PubMed

    Hund-Georgiadis, Margret; Lex, Ulrike; Friederici, Angela D; von Cramon, D Yves

    2002-07-01

    Language lateralization was assessed by two independent functional techniques, fMRI and a dichotic listening test (DLT), in an attempt to establish a reliable and non-invasive protocol of dominance determination. This should particularly address the high intraindividual variability of language lateralization and allow decision-making in individual cases. Functional MRI of word classification tasks showed robust language lateralization in 17 right-handers and 17 left-handers in terms of activation in the inferior frontal gyrus. The DLT was introduced as a complementary tool to MR mapping for language dominance assessment, providing information on perceptual language processing located in superior temporal cortices. The overall agreement of lateralization assessment between the two techniques was 97.1%. Conflicting results were found in one subject, and diverging indices in ten further subjects. Increasing age, non-familial sinistrality, and a non-dominant writing hand were identified as the main factors explaining the observed mismatch between the two techniques. This finding stresses the concept of an intrahemispheric distribution of language function that is obviously associated with certain behavioral characteristics.

  17. Phase-based Bragg intragrating distributed strain sensor

    NASA Astrophysics Data System (ADS)

    Huang, S.; Ohn, M. M.; Measures, R. M.

    1996-03-01

    A strain-distribution sensing technique based on the measurement of the phase spectrum of the light reflected from a fiber-optic Bragg grating is described. When a grating is subject to a strain gradient, it experiences a chirp: the resonant wavelength varies along the grating, causing a wavelength-dependent penetration depth. Because the group delay of each wavelength component is related to its penetration depth, and the resonant wavelength is determined by the strain, a measured phase spectrum can indicate the local strain as a function of location within the grating. This phase-based Bragg grating sensing technique offers a powerful new means of studying strain distributions over a few millimeters or centimeters in smart structures.
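
    A sketch of this inversion chain under stated assumptions (typical silica-fibre constants, a uniform strain gradient, and the standard relations: round-trip group delay τ = −dφ/dω, penetration depth z = cτ/2n_g, strain from the local Bragg shift with photoelastic factor p_e). It synthesizes a phase spectrum for a linearly chirped grating and recovers position and local strain; all values are illustrative, not the authors' data.

      import numpy as np

      # Assumed constants: silica fibre near 1550 nm
      c, n_g, p_e, lam0 = 2.998e8, 1.46, 0.22, 1550e-9

      # Synthesise the phase spectrum for a uniform strain gradient:
      # eps(z) = a*z over a 10 mm grating (50 microstrain total).
      a = 50e-6 / 10e-3                                   # strain gradient, 1/m
      lam = np.linspace(lam0, lam0 * (1 + (1 - p_e) * 50e-6), 400)
      omega = 2.0 * np.pi * c / lam
      z_true = (lam / lam0 - 1) / ((1 - p_e) * a)         # resonance position
      tau = 2.0 * n_g * z_true / c                        # round-trip group delay
      dphi = 0.5 * (tau[1:] + tau[:-1]) * np.diff(omega)  # phi = -int(tau d_omega)
      phi = -np.concatenate([[0.0], np.cumsum(dphi)])

      # Inversion: group delay from the phase, then position and local strain
      tau_est = -np.gradient(phi, omega)
      z_est = c * tau_est / (2.0 * n_g)
      eps_est = (lam / lam0 - 1) / (1 - p_e)              # local strain at z_est
      print(np.max(np.abs(z_est - z_true)))               # small numerical error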

  18. Aerodynamic influence coefficient method using singularity splines.

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1973-01-01

    A new numerical formulation, with computed results, is presented. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of the loading-function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (hence the term 'spline'). Boundary conditions are satisfied in a least-square-error sense over the surface using a finite summing technique to approximate the integral.

  19. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piecewise-constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns.

  20. Image analysis for the automated estimation of clonal growth and its application to the growth of smooth muscle cells.

    PubMed

    Gavino, V C; Milo, G E; Cornwell, D G

    1982-03-01

    Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells. Initial studies with the inverted microscope showed that the number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469. Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
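
    Applying the calibration directly (assuming common logarithms and micrometre diameters, with hypothetical bin data), average colony size and population doublings follow in a few lines:

      import numpy as np

      def cells_per_colony(d_um):
          # Calibration from the abstract: log N = 1.98 log d - 3.469
          # (base-10 logarithms assumed; d in micrometres)
          return 10 ** (1.98 * np.log10(d_um) - 3.469)

      # Hypothetical size histogram: colony counts f per diameter bin d
      d = np.array([100.0, 200.0, 400.0, 800.0])   # bin-average diameters
      f = np.array([40, 25, 10, 3])                # colony frequencies

      N_per = cells_per_colony(d)       # average cells per colony in each bin
      N_total = np.sum(f * N_per)       # total number of cells, NT
      N_avg = N_total / np.sum(f)       # average colony size, NA
      pd_doublings = np.log2(N_avg)     # population doublings, PD = log2 NA
      print(N_avg, pd_doublings)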

  1. Extinction-sedimentation inversion technique for measuring size distribution of artificial fogs

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Vaughan, O. H.

    1978-01-01

    In measuring the size distribution of artificial fog particles, it is important that the natural state of the particles not be disturbed by the measuring device, as occurs when samples are drawn through tubes. This paper describes a method for carrying out such a measurement by allowing the fog particles to settle in quiet air inside an enclosure traversed by a parallel beam of light, which is used to measure the optical depth as a function of time. An analytic function fit to the optical-depth time-decay curve can be directly inverted to yield the size distribution. Results of one such experiment performed on artificial fogs are shown as an example. The forward-scattering corrections to the measured extinction coefficient are also discussed, with the aim of optimizing the experimental design so that the error due to forward scattering is minimized.

  2. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  3. An operator calculus for surface and volume modeling

    NASA Technical Reports Server (NTRS)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
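
    A concrete instance of Boolean addition of operators is the bilinearly blended Coons patch, S = (P1 ⊕ P2) = P1 + P2 − P1P2, where P1 and P2 each interpolate the boundary curves in one direction and the product term is the bilinear interpolant of the four corners. The sketch below (our illustration, with hypothetical boundary curves) evaluates it for a scalar surface over the unit square.

      import numpy as np

      def coons_patch(c0, c1, d0, d1, u, v):
          # Boolean sum P1 (+) P2 of the two lofting operators:
          # c0(u) = S(u,0), c1(u) = S(u,1), d0(v) = S(0,v), d1(v) = S(1,v)
          p1 = (1 - v) * c0(u) + v * c1(u)            # interpolate in v
          p2 = (1 - u) * d0(v) + u * d1(v)            # interpolate in u
          p12 = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
                 + (1 - u) * v * c1(0) + u * v * c1(1))  # bilinear corner term
          return p1 + p2 - p12

      # Example boundaries of a scalar "height" surface over the unit square
      c0 = lambda u: np.sin(np.pi * u)        # bottom edge, v = 0
      c1 = lambda u: 0.5 * np.sin(np.pi * u)  # top edge,    v = 1
      d0 = lambda v: 0.0 * v                  # left edge,   u = 0
      d1 = lambda v: 0.0 * v                  # right edge,  u = 1
      print(coons_patch(c0, c1, d0, d1, 0.5, 0.25))   # interior blend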

  4. Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, B. S.

    1972-01-01

    A Monte Carlo simulation model for radiation-induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise-linearized predictor-corrector iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation-induced plasma involved are given.
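
    The antithetic variates idea itself is easy to demonstrate outside the plasma context: pairing each uniform draw U with 1 − U induces negative correlation between paired function values and shrinks the estimator's variance. A toy sketch (ours, estimating E[e^U] = e − 1, not the thesis code):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 10_000
      u = rng.uniform(size=n)

      # Plain Monte Carlo estimate of E[exp(U)] = e - 1
      plain = np.exp(u)

      # Antithetic variates: pair each U with 1 - U; the negative correlation
      # between exp(U) and exp(1 - U) cancels much of the sampling noise.
      anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))

      print("plain     :", plain.mean(), plain.std(ddof=1) / np.sqrt(n))
      print("antithetic:", anti.mean(), anti.std(ddof=1) / np.sqrt(n))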

  5. Imaging the square of the correlated two-electron wave function of a hydrogen molecule

    DOE PAGES

    Waitz, M.; Bello, R. Y.; Metz, D.; ...

    2017-12-22

    The toolbox for imaging molecules is well-equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. Molecules are many-body systems for which the correlation between the constituents is decisive and the spatial and the momentum distribution of one electron depends on those of the other electrons and the nuclei. Such correlations have escaped direct observation by imaging techniques so far. Here, we implement an imaging scheme which visualizes correlations between electrons by coincident detection of the reaction fragments after high energy photofragmentation. With this technique, we examine the H2 two-electron wave function in which electron-electron correlation beyond the mean-field level is prominent. We visualize the dependence of the wave function on the internuclear distance. High energy photoelectrons are shown to be a powerful tool for molecular imaging. Finally, our study paves the way for future time resolved correlation imaging at FELs and laser based X-ray sources.

  6. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.

  7. Potential Role of Lung Ventilation Scintigraphy in the Assessment of COPD

    PubMed Central

    Cukic, Vesna; Begic, Amela

    2014-01-01

    Objective: To highlight the importance of lung ventilation scintigraphy (LVS) for studying the regional distribution of lung ventilation, to describe the most frequent abnormal patterns of ventilation distribution obtained by this technique in COPD, and to compare the information obtained by LVS with that obtained by traditional lung function tests. Material and methods: The research was done in 20 patients with previously diagnosed COPD who were treated in the Intensive Care Unit of the Clinic for Pulmonary Diseases and TB "Podhrastovi", Clinical Center, University of Sarajevo, in exacerbation of COPD during the first three months of 2014. Each patient underwent testing of pulmonary function by body plethysmography and ventilation/perfusion lung scintigraphy with the radiopharmaceuticals Technegas and 111 MBq of Tc-99m-MAA. We compared the results obtained by these two methods. Results: All patients with COPD had impaired lung function tests on body plethysmography, implying airflow obstruction, but LVS indicates not only airflow obstruction and reduced ventilation but also disorders in the distribution of lung ventilation. Conclusion: LVS may add further information to the functional evaluation of COPD beyond that provided by traditional lung function tests and may contribute to characterizing the different phenotypes of COPD. PMID:25132709

  8. Modeling the directional reflectance from complete homogeneous vegetation canopies with various leaf-orientation distributions

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.

    1984-01-01

    The directional-reflectance distributions of radiant flux from homogeneous vegetation canopies with greater than 90 percent ground cover are analyzed with a radiative-transfer model. The model assumes that the leaves consist of small finite planes with Lambertian properties. Four theoretical canopies with different leaf-orientation distributions were studied: erectophile, spherical, planophile, and heliotropic canopies. The directional-reflectance distributions from the model closely resemble reflectance distributions measured in the field. The physical scattering mechanisms operating in the model explain the variations observed in the reflectance distributions as a function of leaf-orientation distribution, solar zenith angle, and leaf transmittance and reflectance. The simulated reflectance distributions show unique characteristics for each canopy. The basic understanding of the physical scattering properties of the different canopy geometries gained in this study provides a basis for developing techniques to infer leaf-orientation distributions of vegetation canopies from directional remote-sensing measurements.

  9. Optimal coordinated voltage control in active distribution networks using backtracking search algorithm

    PubMed Central

    Tengku Hashim, Tengku Juhana; Mohamed, Azah

    2017-01-01

    The growing interest in distributed generation (DG) in recent years has led to a number of generators being connected to distribution systems. The integration of DGs in a distribution system has resulted in what is known as an active distribution network, due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and the percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system, and the results show that the BSA performs better than PSO by providing better fitness values and convergence rates. PMID:28991919

  10. Improved techniques for outgoing wave variational principle calculations of converged state-to-state transition probabilities for chemical reactions

    NASA Technical Reports Server (NTRS)

    Mielke, Steven L.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    Improved techniques and well-optimized basis sets are presented for application of the outgoing wave variational principle to calculate converged quantum mechanical reaction probabilities. They are illustrated with calculations for the reactions D + H2 → HD + H with total angular momentum J = 3 and F + H2 → HF + H with J = 0 and 3. The optimization involves the choice of distortion potential, the grid for calculating half-integrated Green's functions, the placement, width, and number of primitive distributed Gaussians, and the computationally most efficient partition between dynamically adapted and primitive basis functions. Benchmark calculations with 224-1064 channels are presented.

  11. Disentangling rotational velocity distribution of stars

    NASA Astrophysics Data System (ADS)

    Curé, Michel; Rial, Diego F.; Cassetti, Julia; Christen, Alejandra

    2017-11-01

    Rotational speed is an important physical parameter of stars: knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. However, rotational speed cannot be measured directly; what is observed is the projected quantity v sin(i), the product of the rotational speed and the sine of the inclination angle, so the observed distribution is a convolution of the two. The problem can be described via a Fredholm integral of the first kind. A method (Curé et al. 2014) to deconvolve this inverse problem and obtain the cumulative distribution function of stellar rotational velocities is based on the work of Chandrasekhar & Münch (1950). Another method to obtain the probability density function is the Tikhonov regularization method (Christen et al. 2016). The proposed methods can also be applied to the mass-ratio distribution of extrasolar planets and brown dwarfs in binary systems (Curé et al. 2015). For stars in a cluster, where all members are gravitationally bound, the standard assumption that rotational axes are uniformly distributed over the sphere is questionable. On the basis of the proposed techniques, a simple approach to model this anisotropy of rotational axes has been developed, with the possibility of disentangling simultaneously both the rotational speed distribution and the orientation of rotational axes.

  12. Electrical pulse generator

    DOEpatents

    Norris, Neil J.

    1979-01-01

    A technique for generating high-voltage, wide dynamic range, shaped electrical pulses in the nanosecond range. Two transmission lines are coupled together by resistive elements distributed along the length of the lines. The conductance of each coupling resistive element as a function of its position along the line is selected to produce the desired pulse shape in the output line when an easily produced pulse, such as a step function pulse, is applied to the input line.
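
    Under a deliberately idealised reading of the principle (each resistive element taps the step travelling on the input line and contributes a delayed, conductance-weighted step on the output line; this is our simplification, not the patent's circuit analysis), the shaping can be sketched as:

      import numpy as np

      # Idealised model: the output is a superposition of delayed steps,
      # each scaled by the conductance of the coupling element at that
      # position, so the slope of the output tracks the conductance profile.
      n_taps = 50
      t = np.linspace(0.0, 2.0, 400)                    # normalised time
      tap_delay = np.linspace(0.0, 1.0, n_taps)         # position -> delay
      g = np.exp(-((tap_delay - 0.5) / 0.15) ** 2)      # Gaussian profile

      v_out = np.zeros_like(t)
      for tau, gi in zip(tap_delay, g):
          v_out += gi * (t >= tau)                      # delayed, scaled steps

      dv = np.gradient(v_out, t)    # output slope mirrors the profile g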

  13. When is quasi-linear theory exact. [particle acceleration

    NASA Technical Reports Server (NTRS)

    Jones, F. C.; Birmingham, T. J.

    1975-01-01

    We use the cumulant expansion technique of Kubo (1962, 1963) to derive an integrodifferential equation for the average one-particle distribution function for particles being accelerated by electric and magnetic fluctuations of a general nature. For a very restricted class of fluctuations, the equation for this function degenerates exactly to a differential equation of Fokker-Planck type. Quasi-linear theory, including the adiabatic assumption, is an exact theory only for this limited class of fluctuations.

  14. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes; a miniature illustration follows below. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random-mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel-smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
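
    The monotonicity argument is easy to see in miniature: if one set of positive weights is shared across all non-exceedance probabilities, any weighted average of monotone quantile sequences is again monotone. The sketch below uses inverse-distance weights purely as a stand-in for the random-mixing weights of the abstract; all data are hypothetical.

      import numpy as np

      def interpolate_quantiles(target_xy, gauges_xy, gauge_quantiles, power=2.0):
          # gauge_quantiles is (n_gauges, n_probs): quantile values of the
          # (kernel-smoothed) hourly distributions at fixed probabilities.
          # One set of positive weights is shared across all probabilities,
          # which preserves monotonicity of the interpolated distribution.
          d = np.linalg.norm(gauges_xy - target_xy, axis=1)
          w = 1.0 / np.maximum(d, 1e-9) ** power
          w /= w.sum()
          return w @ gauge_quantiles

      gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
      q = np.array([[0.1, 1.0, 5.0],      # quantiles at p = 0.5, 0.9, 0.99
                    [0.2, 1.5, 7.0],
                    [0.1, 0.8, 4.0]])
      print(interpolate_quantiles(np.array([3.0, 3.0]), gauges, q))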

  15. Periodic bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

    Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method, and Young's inequality. These results are helpful for designing a globally exponentially stable, periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.

  16. Thermodynamic Identities and Symmetry Breaking in Short-Range Spin Glasses

    NASA Astrophysics Data System (ADS)

    Arguin, L.-P.; Newman, C. M.; Stein, D. L.

    2015-10-01

    We present a technique to generate relations connecting pure state weights, overlaps, and correlation functions in short-range spin glasses. These are obtained directly from the unperturbed Hamiltonian and hold for general coupling distributions. All are satisfied in phases with simple thermodynamic structure, such as the droplet-scaling and chaotic pairs pictures. If instead nontrivial mixed-state pictures hold, the relations suggest that replica symmetry is broken as described by a Derrida-Ruelle cascade, with pure state weights distributed as a Poisson-Dirichlet process.

  17. Intercommunications in Real Time, Redundant, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Zanger, H.

    1980-01-01

    An investigation into the applicability of fiber optic communication techniques to real time avionic control systems, in particular the total automatic flight control system used for the VSTOL aircraft is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PE). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.

  1. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    NASA Astrophysics Data System (ADS)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower-dimensional equations for the stability and performance analysis of this type of system than are currently available. In addition, new closed-form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.

  2. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele

    2000-01-01

    The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work both on conventional distributed executions as well as those on a computational grid.

  3. A three-dimensional muscle activity imaging technique for assessing pelvic muscle function

    NASA Astrophysics Data System (ADS)

    Zhang, Yingchun; Wang, Dan; Timm, Gerald W.

    2010-11-01

    A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.

  4. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
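
    The mechanism producing the non-Gaussian shapes can be reproduced in miniature: draw the variance of an otherwise Gaussian velocity from an inverse-gamma distribution and the marginal becomes Student-t, i.e. heavy-tailed, although every fixed-parameter sub-ensemble is Gaussian. A toy sketch (ours, not the paper's generalised Langevin dynamics):

      import numpy as np

      rng = np.random.default_rng(3)
      n, nu = 200_000, 10.0

      # Gaussian velocities whose variance is itself random: with an
      # inverse-gamma distributed variance the marginal is Student-t.
      sigma2 = 1.0 / rng.gamma(nu / 2.0, 2.0 / nu, size=n)   # inverse-gamma
      v = rng.normal(0.0, np.sqrt(sigma2))

      excess_kurtosis = np.mean(v**4) / np.mean(v**2) ** 2 - 3.0
      print(excess_kurtosis)   # close to 6/(nu - 4) = 1 for a t_10 marginal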

  5. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distributions (PSDs) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's SB function, which can overcome the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested by actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an excellent inversion technique for reliable and efficient actual measurement of PSDs.

  6. Limitations to mapping habitat-use areas in changing landscapes using the Mahalanobis distance statistic

    USGS Publications Warehouse

    Knick, Steven T.; Rotenberry, J.T.

    1998-01-01

    We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. If successful, the technique could be used to predict animal use areas, or those undergoing change, in different regions from the same selection function and variables without additional sampling. We determined the multivariate mean vector of 7 GIS variables that described habitats used by jackrabbits. We then ranked the similarity of all cells in the GIS coverage by their Mahalanobis distance to the mean habitat vector, as sketched below. The resulting map accurately depicted areas where we sighted jackrabbits on verification surveys. We then simulated an increase in shrublands (which are important habitats). Contrary to expectation, the new configurations were classified as having lower similarity relative to the original mean habitat vector. Because the selection function is based on a unimodal mean, any deviation, even if biologically positive, creates larger Mahalanobis distances and lower similarity values. We recommend the Mahalanobis distance technique for mapping animal use areas when animals are distributed optimally, the landscape is well sampled to determine the mean habitat vector, and the distributions of the habitat variables do not change.
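
    The mapping step itself is compact: estimate the mean habitat vector and covariance from use locations, compute the squared Mahalanobis distance D² of every GIS cell, and express similarity as a chi-square tail probability (a common convention; the paper's ranking may differ in detail). All data below are synthetic.

      import numpy as np
      from scipy.stats import chi2

      def mahalanobis_similarity(used, cells):
          # used: (n, p) habitat variables at animal-use sites
          # cells: (m, p) habitat variables for every GIS cell
          mu = used.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(used, rowvar=False))
          diff = cells - mu
          d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distance
          return chi2.sf(d2, df=used.shape[1])   # 1 = identical to mean habitat

      rng = np.random.default_rng(4)
      used = rng.normal([2.0, 5.0], [0.5, 1.0], size=(300, 2))
      cells = rng.uniform([0, 0], [5, 10], size=(1000, 2))
      sim = mahalanobis_similarity(used, cells)
      print(sim.min(), sim.max())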

  7. Description of the control system design for the SSF PMAD DC testbed

    NASA Technical Reports Server (NTRS)

    Baez, Anastacio N.; Kimnach, Greg L.

    1991-01-01

    The Power Management and Distribution (PMAD) DC Testbed Control System for Space Station Freedom was developed using a top down approach based on classical control system and conventional terrestrial power utilities design techniques. The design methodology includes the development of a testbed operating concept. This operating concept describes the operation of the testbed under all possible scenarios. A unique set of operating states was identified and a description of each state, along with state transitions, was generated. Each state is represented by a unique set of attributes and constraints, and its description reflects the degree of system security within which the power system is operating. Using the testbed operating states description, a functional design for the control system was developed. This functional design consists of a functional outline, a text description, and a logical flowchart for all the major control system functions. Described here are the control system design techniques, various control system functions, and the status of the design and implementation.

  8. Distribution of late gadolinium enhancement in various types of cardiomyopathies: Significance in differential diagnosis, clinical features and prognosis

    PubMed Central

    Satoh, Hiroshi; Sano, Makoto; Suwa, Kenichiro; Saitoh, Takeji; Nobuhara, Mamoru; Saotome, Masao; Urushida, Tsuyoshi; Katoh, Hideki; Hayashi, Hideharu

    2014-01-01

    The recent development of cardiac magnetic resonance (CMR) techniques has allowed detailed analyses of cardiac function and tissue characterization with high spatial resolution. We review characteristic CMR features in ischemic and non-ischemic cardiomyopathies (ICM and NICM), especially in terms of the location and distribution of late gadolinium enhancement (LGE). CMR in ICM shows segmental wall motion abnormalities or wall thinning in a particular coronary arterial territory, and subendocardial or transmural LGE. LGE in NICM generally does not correspond to any particular coronary artery distribution and is located mostly in the mid-wall to subepicardial layer. The analysis of LGE distribution is valuable for differentiating NICM with diffusely impaired systolic function, including dilated cardiomyopathy, end-stage hypertrophic cardiomyopathy (HCM), cardiac sarcoidosis, and myocarditis, from NICM with diffuse left ventricular (LV) hypertrophy, including HCM, cardiac amyloidosis and Anderson-Fabry disease. A transient low signal intensity LGE in regions of severe LV dysfunction is a particular feature of stress cardiomyopathy. In arrhythmogenic right ventricular cardiomyopathy/dysplasia, an enhancement of the right ventricular (RV) wall with functional and morphological changes of the RV becomes apparent. Finally, analyses of LGE distribution have the potential to predict cardiac outcomes and response to treatments. PMID:25068019

  9. Computing distance distributions from dipolar evolution data with overtones: RIDME spectroscopy with Gd(iii)-based spin labels.

    PubMed

    Keller, Katharina; Mertens, Valerie; Qi, Mian; Nalepa, Anna I; Godt, Adelheid; Savitsky, Anton; Jeschke, Gunnar; Yulikov, Maxim

    2017-07-21

    Extraction of distance distributions between high-spin paramagnetic centers from relaxation induced dipolar modulation enhancement (RIDME) data is affected by the presence of overtones of dipolar frequencies. As previously proposed, we account for these overtones by using a modified kernel function in Tikhonov regularization analysis. This paper analyzes the performance of such an approach on a series of model compounds with the Gd(iii)-PyMTA complex serving as a paramagnetic high-spin label. We describe the calibration of the overtone coefficients for the RIDME kernel, demonstrate the accuracy of distance distributions obtained with this approach, and show that for our series of Gd-rulers the RIDME technique provides more accurate distance distributions than Gd(iii)-Gd(iii) double electron-electron resonance (DEER). The analysis of RIDME data including harmonic overtones can be performed using the MATLAB-based program OvertoneAnalysis, which is available as open-source software from the web page of ETH Zurich. This approach opens a perspective for the routine use of the RIDME technique with high-spin labels in structural biology and structural studies of other soft matter.
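
    The essence of the overtone-aware analysis can be sketched as a Tikhonov-regularized inversion whose kernel is a weighted sum of dipolar kernels at integer multiples of the fundamental frequency. The overtone coefficients below are made-up placeholders (in the paper they are calibrated on model compounds), and the non-negativity constraint is omitted for brevity:

    ```python
    import numpy as np

    def dipolar_kernel(t, r, harmonics=(1.0,)):
        """Powder-averaged dipolar kernel; overtone coefficients weight kernels
        evaluated at n times the fundamental dipolar frequency."""
        omega = 2 * np.pi * 52.04 / r[None, :] ** 3    # MHz for r in nm, t in us
        u = np.linspace(0, 1, 201)[:, None, None]      # powder-average variable
        K = np.zeros((t.size, r.size))
        for n, c in enumerate(harmonics, start=1):
            K += c * np.cos(n * (1 - 3 * u**2) * omega * t[:, None]).mean(axis=0)
        return K / sum(harmonics)

    t = np.linspace(0, 3, 200)        # us
    r = np.linspace(1.5, 6, 120)      # nm
    K = dipolar_kernel(t, r, harmonics=(0.5, 0.3, 0.2))   # hypothetical weights

    # Synthetic data: Gaussian distance distribution centred at 3 nm.
    p_true = np.exp(-0.5 * ((r - 3.0) / 0.15) ** 2)
    data = K @ p_true + 0.01 * np.random.default_rng(3).standard_normal(t.size)

    # Tikhonov regularization with a second-derivative smoothness penalty.
    lam = 1.0
    L = np.diff(np.eye(r.size), 2, axis=0)
    A = np.vstack([K, np.sqrt(lam) * L])
    b = np.concatenate([data, np.zeros(L.shape[0])])
    p_est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated distance mode:", r[np.argmax(p_est)], "nm")
    ```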

  10. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

    Poisson disk sampling plays an important role in a variety of visual computing applications, owing to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
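
    A sequential emulation of the priority-based conflict resolution (our sketch, in the Euclidean plane rather than on a surface): candidates with random unique priorities are accepted when they out-rank every undecided conflicting neighbor, so each round's loop body is an independent task that a parallel implementation would distribute across threads.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    radius, n_cand = 0.05, 1500

    pts = rng.random((n_cand, 2))              # dart-throwing candidates
    prio = rng.permutation(n_cand)             # random, unique priorities

    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    conflict = (d2 < radius**2) & ~np.eye(n_cand, dtype=bool)

    status = np.zeros(n_cand, dtype=int)       # 0 undecided, 1 accepted, -1 rejected
    while np.any(status == 0):
        for i in np.where(status == 0)[0]:     # each iteration is independent
            neigh = np.where(conflict[i])[0]
            if np.any(status[neigh] == 1):
                status[i] = -1                 # conflicts with an accepted sample
            else:
                contenders = neigh[status[neigh] == 0]
                if contenders.size == 0 or prio[i] > prio[contenders].max():
                    status[i] = 1              # locally highest priority wins

    print("accepted Poisson disk samples:", int((status == 1).sum()))
    ```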

  11. The Bivariate Luminosity--HI Mass Distribution Function of Galaxies based on the NIBLES Survey

    NASA Astrophysics Data System (ADS)

    Butcher, Zhon; Schneider, Stephen E.; van Driel, Wim; Lehnert, Matt

    2016-01-01

    We use 21cm HI line observations for 2610 galaxies from the Nançay Interstellar Baryons Legacy Extragalactic Survey (NIBLES) to derive a bivariate luminosity--HI mass distribution function. Our HI survey was selected to randomly probe the local (900 < cz < 12,000 km/s) galaxy population in each 0.5 mag wide bin over the absolute z-band magnitude range of -13.5 < Mz < -24, without regard to morphology or color. This targeted survey allowed more on-source integration time for weak and non-detected sources, enabling us to probe lower HI mass fractions and to place lower (more stringent) upper limits on non-detections than would be possible with the larger blind HI surveys. Additionally, we obtained follow-up observations at Arecibo, a factor of four higher in sensitivity, of 90 galaxies from our non-detected and marginally detected categories to quantify the underlying HI distribution of sources not detected at Nançay. Using the optical luminosity function and our higher sensitivity follow-up observations as priors, we use a 2D stepwise maximum likelihood technique to derive the two-dimensional volume density distribution of luminosity and HI mass in each SDSS band.

  12. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.

  13. Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting

    NASA Astrophysics Data System (ADS)

    Gamberg, Leonard

    2015-04-01

    We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.

  14. Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting

    NASA Astrophysics Data System (ADS)

    Gamberg, Leonard

    2015-10-01

    We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.

  15. Settling Efficiency of Urban Particulate Matter Transported by Stormwater Runoff.

    PubMed

    Carbone, Marco; Penna, Nadia; Piro, Patrizia

    2015-09-01

    The main purpose of control measures in urban areas is to retain particulate matter washed out by stormwater over impermeable surfaces. In stormwater control measures, particulate matter removal typically occurs via sedimentation. Settling column tests were performed to examine the settling efficiency of such units using monodisperse and heterodisperse particulate matter (for which the particle size distributions were measured and modelled by the cumulative gamma distribution). To investigate the dependence of settling efficiency on the particulate matter characteristics, a variant of evolutionary polynomial regression (EPR) based on the multi-objective EPR technique (EPR-MOGA) and implemented as a Microsoft Excel function, EPR MOGA XL, was used as a data-mining strategy. The results from this study have shown that settling efficiency is a function of the initial total suspended solids (TSS) concentration and of the median diameter (d50 index) obtained from the particle size distributions (PSDs) of the samples.

  16. On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions

    NASA Astrophysics Data System (ADS)

    Becherer, Nico; Jödicke, Hanna; Schlosser, Gregor; Hesser, Jürgen; Zeilfelder, Frank; Männer, Reinhard

    2006-02-01

    Blur and noise originating from the physical imaging processes degrade microscope data. Accurate deblurring techniques require, however, an accurate estimation of the underlying point-spread function (PSF). A good representation of PSFs can be achieved by Zernike polynomials, since they offer a compact representation where low-order coefficients represent typical aberrations of optical wavefronts while noise is represented in higher order coefficients. A quantitative description of the noise distribution (Gaussian) over the Zernike moments of various orders is given, which forms the basis for the new soft clipping approach for denoising of PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. According to our experimental reconstructions, using the new improved PSF the correlation between reconstructed and original volume is raised by 15% in average cases and up to 85% in the case of thin fibre structures, compared to reconstructions where a non-improved PSF was used. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results using a new volume rendering method that is almost artifact-free. The new approach is based on a Shear-Warp technique, wavelet data encoding techniques, and a recent approach to approximate the gray value distribution by a super spline model.

  17. From a meso- to micro-scale connectome: array tomography and mGRASP

    PubMed Central

    Rah, Jong-Cheol; Feng, Linqing; Druckmann, Shaul; Lee, Hojin; Kim, Jinhyun

    2015-01-01

    Mapping mammalian synaptic connectivity has long been an important goal of neuroscience because knowing how neurons and brain areas are connected underpins an understanding of brain function. Meeting this goal requires advanced techniques with single synapse resolution and large-scale capacity, especially at multiple scales tethering the meso- and micro-scale connectome. Among several advanced LM-based connectome technologies, Array Tomography (AT) and mammalian GFP-Reconstitution Across Synaptic Partners (mGRASP) can provide relatively high-throughput mapping of synaptic connectivity at multiple scales. AT- and mGRASP-assisted circuit mapping (ATing and mGRASPing), combined with techniques such as retrograde virus tracing, brain clearing, and activity indicators, will help unlock the secrets of complex neural circuits. Here, we discuss these useful new tools to enable mapping of brain circuits at multiple scales, some functional implications of spatial synaptic distribution, and future challenges and directions of these endeavors. PMID:26089781

  18. Application of fluorescence resonance energy transfer techniques to the study of lectin-binding site distribution on Paramecium primaurelia (Protista, Ciliophora) cell surface.

    PubMed

    Locatelli, D; Delmonte Corrado, M U; Politi, H; Bottiroli, G

    1998-01-01

    Fluorescence resonance energy transfer (FRET) is a photophysical phenomenon occurring between the molecules of two fluorochromes with suitable spectral characteristics (a donor-acceptor dye pair), consisting of an excitation energy migration through a non-radiative process. Since the efficiency of the process is strictly dependent on the distance and reciprocal orientation of the donor and acceptor molecules, FRET-based techniques can be successfully applied to the study of biomolecules and of cell component organisation and distribution. These techniques were employed to study the reciprocal distribution on the Paramecium primaurelia surface membrane of N-acetylneuraminic acid (NeuAc) and N-acetylglucosamine (GlcNAc) glycosidic residues, which were found to be involved in mating cell pairing. NeuAc and GlcNAc were detected by their specific binding lectins, Limulus polyphemus agglutinin (LPA) and wheat germ agglutinin (WGA), respectively. Microspectrofluorometric analysis led to the choice of fluorescein isothiocyanate and Texas red, conjugated with LPA and WGA respectively, as a suitable donor-acceptor couple efficiently activating FRET processes. Studies performed both in solution and in cells allowed us to define the experimental conditions favourable for FRET analysis. The comparative study, carried out on both the conjugating and non-conjugating regions of the surface membrane, indicates that the FRET distribution appears quite homogeneous in mating-competent mating type (mt) I cells, whereas in mating-competent mt II cells the FRET distribution seems to be preferentially localised on the conjugating region functionally involved in mating cell pairing. This difference in the distribution of lectin-binding sites is suggested to be related to mating-competence acquisition.

  19. Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses

    PubMed Central

    Nickerson, Lisa D.; Smith, Stephen M.; Öngür, Döst; Beckmann, Christian F.

    2017-01-01

    Independent Component Analysis (ICA) is one of the most popular techniques for the analysis of resting state FMRI data because it has several advantageous properties when compared with other techniques. Most notably, in contrast to a conventional seed-based correlation analysis, it is model-free and multivariate, thus switching the focus from evaluating the functional connectivity of single brain regions identified a priori to evaluating brain connectivity in terms of all brain resting state networks (RSNs) that simultaneously engage in oscillatory activity. Furthermore, typical seed-based analysis characterizes RSNs in terms of spatially distributed patterns of correlation (typically by means of simple Pearson's coefficients) and thereby confounds together amplitude information of oscillatory activity and noise. ICA and other regression techniques, on the other hand, retain magnitude information and therefore can be sensitive to both changes in the spatially distributed nature of correlations (differences in the spatial pattern or “shape”) as well as the amplitude of the network activity. Furthermore, motion can mimic amplitude effects so it is crucial to use a technique that retains such information to ensure that connectivity differences are accurately localized. In this work, we investigate the dual regression approach that is frequently applied with group ICA to assess group differences in resting state functional connectivity of brain networks. We show how ignoring amplitude effects and how excessive motion corrupts connectivity maps and results in spurious connectivity differences. We also show how to implement the dual regression to retain amplitude information and how to use dual regression outputs to identify potential motion effects. Two key findings are that using a technique that retains magnitude information, e.g., dual regression, and using strict motion criteria are crucial for controlling both network amplitude and motion-related amplitude effects, respectively, in resting state connectivity analyses. We illustrate these concepts using realistic simulated resting state FMRI data and in vivo data acquired in healthy subjects and patients with bipolar disorder and schizophrenia. PMID:28348512
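
    The two regression stages are compact enough to sketch with plain linear algebra. The arrays below are random stand-ins for the group ICA maps and a subject's preprocessed 4D data; the commented-out normalization line marks exactly where amplitude information is either retained or discarded:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    T, V, C = 200, 5000, 10          # timepoints, voxels, group ICA components

    group_maps = rng.standard_normal((V, C))    # stand-in group ICA spatial maps
    subject_data = rng.standard_normal((T, V))  # stand-in subject data (time x voxels)

    # Stage 1: regress the group spatial maps against each timepoint,
    # yielding subject-specific timecourses (T x C).
    ts = subject_data @ np.linalg.pinv(group_maps).T

    # To retain amplitude information, do NOT variance-normalize the stage-1
    # timecourses; normalizing isolates "shape"-only differences.
    # ts /= ts.std(axis=0)

    # Stage 2: regress the timecourses against each voxel's timeseries,
    # yielding subject-specific spatial maps (C x V).
    subject_maps = np.linalg.pinv(ts) @ subject_data
    print("subject-specific maps:", subject_maps.shape)
    ```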

  20. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
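
    The classic construction behind such generators is two independent standard normals (from Box-Muller) combined linearly to impose the correlation; a sketch consistent with, though not necessarily identical to, the reported routine:

    ```python
    import math, random

    def bivariate_normal_pair(mu1, mu2, s1, s2, rho, rng=random):
        """One (x, y) pair from a bivariate normal with means mu1, mu2, standard
        deviations s1, s2 and correlation coefficient rho."""
        u1, u2 = 1.0 - rng.random(), rng.random()      # u1 in (0, 1]
        r = math.sqrt(-2.0 * math.log(u1))             # Box-Muller radius
        z1, z2 = r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)
        x = mu1 + s1 * z1
        y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho**2) * z2)
        return x, y

    pairs = [bivariate_normal_pair(0.0, 1.0, 1.0, 2.0, 0.7) for _ in range(100000)]
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in ys) / n)
    print("sample correlation:", round(cov / (sx * sy), 3))   # close to 0.7
    ```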

  1. High level continuity for coordinate generation with precise controls

    NASA Technical Reports Server (NTRS)

    Eiseman, P. R.

    1982-01-01

    Coordinate generation techniques with precise local controls have been derived and analyzed for continuity requirements up to both the first and second derivatives, and have been projected to higher level continuity requirements from the established pattern. The desired local control precision was obtained when a family of coordinate surfaces could be uniformly distributed without a consequent creation of flat spots on the coordinate curves transverse to the family. Relative to the uniform distribution, the family could be redistributed from an a priori distribution function or from a solution adaptive approach, both without distortion from the underlying transformation which may be independently chosen to fit a nontrivial geometry and topology.

  2. Three Dimensional Imaging of the Nucleon

    NASA Astrophysics Data System (ADS)

    More, Jai; Mukherjee, Asmita; Nair, Sreeraj

    2018-05-01

    We study the Wigner distributions of quarks and gluons in a light-front dressed quark model using the overlap of light-front wave functions (LFWFs). We take the target to be a dressed quark, a composite spin-1/2 state of a quark dressed with a gluon. This state allows us to calculate the quark and gluon Wigner distributions analytically in terms of LFWFs using Hamiltonian perturbation theory. We analyze the Wigner distributions of the quark and gluon numerically and report their nature in contour plots. We use an improved numerical technique to remove the cutoff dependence of the Fourier-transform integral over Δ⊥.

  3. Vibrational energy distribution in aniline scattered from surfaces covered with organized organic monolayers

    NASA Astrophysics Data System (ADS)

    Paz, Y.; Naaman, R.

    1990-08-01

    Energy distribution in aniline molecules scattered from organized organic monolayers was investigated using a resonance-enhanced two-photon ionization technique. Two types of monolayers were used, one exposing a floppy unsubstituted aliphatic chain (OTS, n-octadecyltrichlorosilane), and the second having a perfluorinated tail (PFDA, perfluorodecanoic acid). The dependence of the internal and translational energy of the scattered aniline is monitored as a function of collision energy and surface properties. The data reveal an unusually high propensity for excitation of the NH2 inversion mode in aniline. Vibrationally excited molecules are scattered with a narrower time-of-flight (TOF) distribution than those in the ground vibrational state.

  4. Category representations in the brain are both discretely localized and widely distributed.

    PubMed

    Shehzad, Zarrar; McCarthy, Gregory

    2018-06-01

    Whether category information is discretely localized or represented widely in the brain remains a contentious issue. Initial functional MRI studies supported the localizationist perspective that category information is represented in discrete brain regions. More recent fMRI studies using machine learning pattern classification techniques provide evidence for widespread distributed representations. However, these latter studies have not typically accounted for shared information. Here, we find strong support for distributed representations when brain regions are considered separately. However, localized representations are revealed by using analytical methods that separate unique from shared information among brain regions. The distributed nature of shared information and the localized nature of unique information suggest that brain connectivity may encourage spreading of information but category-specific computations are carried out in distinct domain-specific regions. NEW & NOTEWORTHY Whether visual category information is localized in unique domain-specific brain regions or distributed in many domain-general brain regions is hotly contested. We resolve this debate by using multivariate analyses to parse functional MRI signals from different brain regions into unique and shared variance. Our findings support elements of both models and show information is initially localized and then shared among other regions leading to distributed representations being observed.

  5. Polarization reconstruction algorithm for a Compton polarimeter

    NASA Astrophysics Data System (ADS)

    Vockert, M.; Weber, G.; Spillmann, U.; Krings, T.; Stöhlker, Th

    2018-05-01

    We present the technique of Compton polarimetry using X-ray detectors based on double-sided segmented semiconductor crystals that were developed within the SPARC collaboration. In addition, we discuss the polarization reconstruction algorithm with particular emphasis on systematic deviations between the observed detector response and our model function for the Compton scattering distribution inside the detector.

  6. Size-exclusion chromatography (HPLC-SEC) technique optimization by simplex method to estimate molecular weight distribution of agave fructans.

    PubMed

    Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María

    2017-12-15

    Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional food, so practical analysis tools to characterize them are needed. In view of the importance of molecular weight for the functional properties of agave fructans, this study aims to optimize a method to determine their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7°C using tri-distilled water without salt, adjusted to pH 5.4, and a flow rate of 0.36 mL/min. The exclusion range covers degrees of polymerization from 1 to 49 (180-7966 Da). The proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. Industrial applications of this technique include quality control, the study of fractionation processes, and determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.
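
    A hedged sketch of simplex optimization over the three operating conditions. The quadratic response surface below is a made-up stand-in for the chromatographic objective, which in reality is measured on the instrument; SciPy's Nelder-Mead serves as the simplex method:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def separation_quality(cond):
        """Hypothetical response surface with its optimum placed at the
        reported conditions (61.7 C, pH 5.4, 0.36 mL/min)."""
        T, pH, flow = cond
        return ((T - 61.7) / 30) ** 2 + ((pH - 5.4) / 2) ** 2 + ((flow - 0.36) / 0.3) ** 2

    res = minimize(separation_quality, x0=[40.0, 7.0, 0.8], method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-6})
    print("optimized (T [C], pH, flow [mL/min]):", np.round(res.x, 2))
    ```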

  7. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, using an interpolation approach, plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimation of the RND using a fourth order polynomial is more appropriate than a smoothing spline, in that the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
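
    The fourth-order polynomial variant can be sketched end to end: fit the polynomial to the implied-volatility smile, price calls on a dense strike grid, and take the discounted second strike derivative (the Breeden-Litzenberger relation) to obtain the RND. The quotes below are hypothetical, not DJIA data:

    ```python
    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

    S0, T, r = 100.0, 1.0 / 12.0, 0.01     # spot, one-month maturity, rate

    # Hypothetical implied-volatility smile (assumption).
    K_obs = np.array([85, 90, 95, 100, 105, 110, 115.0])
    iv_obs = np.array([0.32, 0.27, 0.23, 0.21, 0.22, 0.25, 0.30])

    # Volatility function technique: fourth-order polynomial in strike.
    coef = np.polyfit(K_obs, iv_obs, deg=4)
    K = np.linspace(86, 114, 400)
    C = bs_call(S0, K, T, r, np.polyval(coef, K))

    # Breeden-Litzenberger: RND = exp(rT) * d^2C/dK^2.
    dK = K[1] - K[0]
    rnd = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)
    first_moment = (K * rnd).sum() / rnd.sum()
    print("RND first moment (market forecast at maturity):", round(first_moment, 2))
    ```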

  8. Elastin distribution in the normal uterus, uterine leiomyomas, adenomyosis and adenomyomas: a comparison.

    PubMed

    Zheng, Wei-Qiang; Ma, Rong; Zheng, Jian-Ming; Gong, Zhi-Jing

    2006-04-01

    To describe the histologic distribution of elastin in the nonpregnant human uterus, uterine leiomyomas, adenomyosis and adenomyomas. Uteri were obtained from women undergoing hysterectomy for benign conditions, including 26 cases of uterine leiomyomas, 24 cases of adenomyosis, 18 adenomyomas and 6 cases of autopsy specimens. Specific histochemical staining techniques were employed in order to demonstrate the distribution of elastin. The distribution of elastin components in the uterus was markedly uneven and showed a decreasing gradient from outer to inner myometrium. No elastin was present within leiomyomas, adenomyomas or adenomyosis. The distribution of elastin may help explain the normal function of the myometrium in labor. It implies that the uneven distribution of elastin components and absence of elastin within leiomyomas, adenomyomas and adenomyosis could be of some clinical significance. The altered elastin distribution in disease states may help explain such symptoms as dysmenorrhea in uterine endometriosis.

  9. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  10. The Unquiet State of Violent Relaxation

    NASA Astrophysics Data System (ADS)

    Henriksen, Richard

    2005-08-01

    In 1967 Lynden-Bell presented a statistical mechanical theory for the relaxation of collisionless systems. Since then this theory has been studied numerically and theoretically by many authors. Nakamura in 2000 gave an alternate theory that differed from that of Lynden-Bell by predicting a Gaussian equilibrium distribution function rather than Fermi-Dirac. More recently Henriksen in 2004 has used a coarse-graining technique on cosmological infall systems that also predicts a Gaussian equilibrium distribution function. These relaxed states are thought to occur from the centre of the system outwards. Simulations of cosmological cold dark-matter halos however persist in finding central density cusps (the NFW profile), which are inconsistent with the predicted distribution functions and perhaps with the observations of some galaxies. Some numerical studies (e.g. Merrall & Henriksen 2003) that attempt to measure the distribution function of dark matter do find Gaussian functions, provided that the initial asymmetry is not too great. Moreover recent work at Queen's reported here by MacMillan suggests that it is the growth of asymmetry during the infall that produces the cusped behaviour. So put briefly, the essential physics of dark-matter relaxation remains "obscure" as does the validity of the theoretical predictions. "Violent virialization" occurs rapidly, well before subscale relaxation, but the scale at which the relaxation stops (and why) remains unclear. I will present some results that argue for wave-particle relaxation (Landau damping, as frequently suggested by Kandrup) and in addition I will suggest that the evolution of isolated systems is very different from that of systems constantly disturbed by infall. Isolated systems may become trapped in an unrelaxed state by the development or existence of multipolar internal structure. Nevertheless a suitable coarse graining of the system may restore the predicted distribution functions.

  11. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and metastability level, is solved analytically by means of the saddle-point technique for the Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.

  12. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.

    PubMed

    Yokoyama, Jun'ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case.

  13. Condition assessment of nonlinear processes

    DOEpatents

    Hively, Lee M.; Gailey, Paul C.; Protopopescu, Vladimir A.

    2002-01-01

    There is presented a reliable technique for measuring condition change in nonlinear data such as brain waves. The nonlinear data is filtered and discretized into windowed data sets. The system dynamics within each data set is represented by a sequence of connected phase-space points, and for each data set a distribution function is derived. New metrics are introduced that evaluate the distance between distribution functions. The metrics are properly renormalized to provide robust and sensitive relative measures of condition change. As an example, these measures can be used on EEG data, to provide timely discrimination between normal, preseizure, seizure, and post-seizure states in epileptic patients. Apparatus utilizing hardware or software to perform the method and provide an indicative output is also disclosed.
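
    The pipeline in the claim can be sketched as time-delay embedding followed by a discretized distribution function per window and a renormalizable distance between windows; the embedding parameters and the L1 metric below are our assumptions:

    ```python
    import numpy as np

    def phase_space_dist(window, dim=3, lag=5, n_bins=8):
        """Discretized phase-space distribution function of one data window:
        time-delay embedding, then a joint histogram normalized to a PMF."""
        n = len(window) - (dim - 1) * lag
        pts = np.stack([window[i * lag:i * lag + n] for i in range(dim)], axis=1)
        edges = [np.linspace(window.min(), window.max(), n_bins + 1)] * dim
        hist, _ = np.histogramdd(pts, bins=edges)
        return hist.ravel() / hist.sum()

    rng = np.random.default_rng(6)
    t = np.arange(4096)
    baseline = np.sin(0.07 * t) + 0.3 * rng.standard_normal(t.size)
    changed = np.sin(0.07 * t) ** 3 + 0.3 * rng.standard_normal(t.size)  # altered dynamics

    p, q = phase_space_dist(baseline), phase_space_dist(changed)
    # L1 distance between the two distribution functions; in practice this is
    # renormalized against baseline variability to yield a relative measure.
    print("condition-change measure:", round(float(np.abs(p - q).sum()), 3))
    ```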

  14. ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction

    PubMed Central

    Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo

    2015-01-01

    By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302

  15. Net Shaped Component Fabrication of Refractory Metal Alloys using Vacuum Plasma Spraying

    NASA Technical Reports Server (NTRS)

    Sen, S.; ODell, S.; Gorti, S.; Litchford, R.

    2006-01-01

    The vacuum plasma spraying (VPS) technique was employed to produce dense and net shaped components of a new tungsten-rhenium (W-Re) refractory metal alloy. The fine grain size obtained using this technique enhanced the mechanical properties of the alloy at elevated temperatures. The alloy development also included incorporation of thermodynamically stable dispersion phases to pin down grain boundaries at elevated temperatures and thereby circumventing the inherent problem of recrystallization of refractory alloys at elevated temperatures. Requirements for such alloys as related to high temperature space propulsion components will be discussed. Grain size distribution as a function of cooling rate and dispersion phase loading will be presented. Mechanical testing and grain growth results as a function of temperature will also be discussed.

  16. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
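
    A minimal sketch of the key trick, under our own assumptions: Euler-discretized geometric Brownian motion as the model, coupled fine/coarse paths per level, and a sigmoid as the smoothing function replacing the discontinuous indicator (the paper's calibration strategy is not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def smooth_indicator(y, delta=0.1):
        """Sigmoid surrogate for 1{y >= 0}; smoothing accelerates the decay of
        the level variances (delta is a tunable assumption)."""
        return 1.0 / (1.0 + np.exp(-y / delta))

    def coupled_level(level, n, x, T=1.0, mu=0.05, sig=0.2, X0=1.0):
        """One MLMC level: fine and coarse Euler paths share Brownian increments."""
        nf = 2**level
        dt = T / nf
        dW = np.sqrt(dt) * rng.standard_normal((n, nf))
        Xf = np.full(n, X0)
        for k in range(nf):
            Xf += mu * Xf * dt + sig * Xf * dW[:, k]
        Pf = smooth_indicator(x - Xf)
        if level == 0:
            return Pf.mean()
        Xc = np.full(n, X0)
        for k in range(nf // 2):               # one coarse step sums two fine dW
            Xc += mu * Xc * 2 * dt + sig * Xc * (dW[:, 2 * k] + dW[:, 2 * k + 1])
        return (Pf - smooth_indicator(x - Xc)).mean()

    x = 1.0   # point at which the CDF of X_T is estimated
    levels, samples = range(6), [40000, 20000, 10000, 5000, 2500, 1250]
    cdf = sum(coupled_level(l, n, x) for l, n in zip(levels, samples))
    print(f"MLMC estimate of P(X_T <= {x}): {cdf:.3f}")
    ```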

  17. Measurement of tracer gas distributions using an open-path FTIR system coupled with computed tomography

    NASA Astrophysics Data System (ADS)

    Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.

    1995-05-01

    Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that resulted in the best fit to the ray integral concentration data. This new method, named smooth basis function minimization (SBFM) generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
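
    The SBFM idea is compact: parameterize the concentration field as a superposition of bivariate Gaussians, predict the ray integrals, and fit the parameters by annealing-type global minimization. The beam layout and the single-plume field below are assumptions, and SciPy's dual_annealing stands in for the authors' simulated annealing routine:

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def ray(p0, p1, n=100):
        """Discretized straight open path for numerical line integration."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        s = np.linspace(0, 1, n)[:, None]
        return p0 + s * (p1 - p0), np.linalg.norm(p1 - p0) / n

    rays = [ray((0, y), (1, y)) for y in (0.2, 0.4, 0.6, 0.8)] + \
           [ray((x, 0), (x, 1)) for x in (0.2, 0.4, 0.6, 0.8)] + \
           [ray((0, 0), (1, 1)), ray((0, 1), (1, 0))]

    def gaussian_field(params, pts):
        a, x0, y0, sx, sy = params
        return a * np.exp(-0.5 * (((pts[:, 0] - x0) / sx) ** 2
                                  + ((pts[:, 1] - y0) / sy) ** 2))

    def ray_integrals(params):
        return np.array([gaussian_field(params, pts).sum() * ds for pts, ds in rays])

    true_params = (2.0, 0.55, 0.35, 0.12, 0.2)     # synthetic "plume"
    data = ray_integrals(true_params)

    bounds = [(0.1, 5), (0, 1), (0, 1), (0.05, 0.5), (0.05, 0.5)]
    res = dual_annealing(lambda p: np.sum((ray_integrals(p) - data) ** 2),
                         bounds=bounds, seed=8, maxiter=200)
    print("recovered (a, x0, y0, sx, sy):", np.round(res.x, 3))
    ```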

  18. Evidence of three-body correlation functions in Rb+ and Sr2+ acetonitrile solutions

    NASA Astrophysics Data System (ADS)

    D'Angelo, P.; Pavel, N. V.

    1999-09-01

    The local structure of Sr2+ and Rb+ ions in acetonitrile has been investigated by x-ray absorption spectroscopy (XAS) and molecular dynamics simulations. The extended x-ray absorption fine structure above the Sr and Rb K edges has been interpreted in the framework of the multiple scattering (MS) formalism and, for the first time, clear evidence of MS contributions has been found in noncomplexing ion solutions. Molecular dynamics has been used to generate the partial pair and triangular distribution functions from which model χ(k) signals have been constructed. The Sr2+ and Rb+ acetonitrile pair distribution functions show very sharp and well-defined first peaks indicating the presence of a well organized first solvation shell. Most of the linear acetonitrile molecules have been found to be distributed like hedgehog spines around the Sr2+ and Rb+ ions. The presence of three-body correlations has been singled out by the existence of well-defined peaks in the triangular configurations. Excellent agreement has been found between the theoretical and experimental data, reinforcing the reliability of the interatomic potentials used in the simulations. These results demonstrate the ability of the XAS technique to probe higher-order correlation functions in solution.

  19. A mathematical deconvolution formulation for superficial dose distribution measurement by Cerenkov light dosimetry.

    PubMed

    Brost, Eric Edward; Watanabe, Yoichi

    2018-06-01

    Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of a medium from images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image was obtained using the well-known relationship between the incident beam fluence and the dose distribution in a medium. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate estimate of the dose. Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC)-defined, irregularly shaped field. The proposed technique improved the accuracy of Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in homogeneous media. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
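
    The deconvolution step can be illustrated with a generic iterative scheme; Richardson-Lucy is used below as a stand-in (the paper deconvolves with its CDSF kernel and likewise finds that a single iteration suffices). The square field and Gaussian scatter kernel are synthetic:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=1):
        """Generic iterative deconvolution (stand-in for the CDSF-based method)."""
        est = np.full_like(image, image.mean())
        psf_flip = psf[::-1, ::-1]
        for _ in range(n_iter):
            ratio = image / (fftconvolve(est, psf, mode="same") + 1e-12)
            est *= fftconvolve(ratio, psf_flip, mode="same")
        return est

    # Synthetic square "dose" field blurred by a Gaussian scatter kernel.
    x = np.linspace(-1, 1, 128)
    X, Y = np.meshgrid(x, x)
    dose = ((np.abs(X) < 0.4) & (np.abs(Y) < 0.4)).astype(float)
    kern = np.exp(-(X**2 + Y**2) / (2 * 0.05**2))
    kern /= kern.sum()
    cerenkov = fftconvolve(dose, kern, mode="same")

    recovered = richardson_lucy(cerenkov, kern, n_iter=1)
    # A single iteration already steepens the penumbra (high-gradient) region.
    print("max edge gradient, raw vs deconvolved:",
          round(float(np.abs(np.diff(cerenkov[64])).max()), 3),
          round(float(np.abs(np.diff(recovered[64])).max()), 3))
    ```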

  20. HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain

    PubMed Central

    Huppert, Theodore J.; Diamond, Solomon G.; Franceschini, Maria A.; Boas, David A.

    2009-01-01

    Near-infrared spectroscopy (NIRS) is a noninvasive neuroimaging tool for studying evoked hemodynamic changes within the brain. By this technique, changes in the optical absorption of light are recorded over time and are used to estimate the functionally evoked changes in cerebral oxyhemoglobin and deoxyhemoglobin concentrations that result from local cerebral vascular and oxygen metabolic effects during brain activity. Over the past three decades this technology has continued to grow, and today NIRS studies have found many niche applications in the fields of psychology, physiology, and cerebral pathology. The growing popularity of this technique is in part associated with a lower cost and increased portability of NIRS equipment when compared with other imaging modalities, such as functional magnetic resonance imaging and positron emission tomography. With this increasing number of applications, new techniques for the processing, analysis, and interpretation of NIRS data are continually being developed. We review some of the time-series and functional analysis techniques that are currently used in NIRS studies, we describe the practical implementation of various signal processing techniques for removing physiological, instrumental, and motion-artifact noise from optical data, and we discuss the unique aspects of NIRS analysis in comparison with other brain imaging modalities. These methods are described within the context of the MATLAB-based graphical user interface program, HomER, which we have developed and distributed to facilitate the processing of optical functional brain data. PMID:19340120

  1. Effect of posttranslational modifications on enzyme function and assembly.

    PubMed

    Ryšlavá, Helena; Doubnerová, Veronika; Kavan, Daniel; Vaněk, Ondřej

    2013-10-30

    The detailed examination of enzyme molecules by mass spectrometry and other techniques continues to identify hundreds of distinct PTMs. Recently, global analyses of enzymes using methods of contemporary proteomics revealed widespread distribution of PTMs on many key enzymes distributed in all cellular compartments. Critically, patterns of multiple enzymatic and nonenzymatic PTMs within a single enzyme are now functionally evaluated providing a holistic picture of a macromolecule interacting with low molecular mass compounds, some of them being substrates, enzyme regulators, or activated precursors for enzymatic and nonenzymatic PTMs. Multiple PTMs within a single enzyme molecule and their mutual interplays are critical for the regulation of catalytic activity. Full understanding of this regulation will require detailed structural investigation of enzymes, their structural analogs, and their complexes. Further, proteomics is now integrated with molecular genetics, transcriptomics, and other areas leading to systems biology strategies. These allow the functional interrogation of complex enzymatic networks in their natural environment. In the future, one might envisage the use of robust high throughput analytical techniques that will be able to detect multiple PTMs on a global scale of individual proteomes from a number of carefully selected cells and cellular compartments. This article is part of a Special Issue entitled: Posttranslational Protein modifications in biology and Medicine. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Calculation of broadband time histories of ground motion: Comparison of methods and validation using strong-ground motion from the 1994 Northridge earthquake

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.; Larsen, S.

    1999-01-01

    This article compares techniques for calculating broadband time histories of ground motion in the near field of a finite fault by comparing synthetics with the strong-motion data set for the 1994 Northridge earthquake. Based on this comparison, a preferred methodology is presented. Ground-motion-simulation techniques are divided into two general methods: kinematic- and composite-fault models. Green's functions of three types are evaluated: stochastic, empirical, and theoretical. A hybrid scheme is found to give the best fit to the Northridge data. High frequencies (> 1 Hz) are calculated using a composite-fault model with a fractal subevent size distribution and stochastic, bandlimited, white-noise Green's functions. At frequencies below 1 Hz, theoretical elastic-wave-propagation synthetics introduce proper seismic-phase arrivals of body waves and surface waves. The 3D velocity structure more accurately reproduces record durations for the deep sedimentary basin structures found in the Los Angeles region. At frequencies above 1 Hz, scattering effects become important and wave propagation is more accurately represented by stochastic Green's functions. A fractal subevent size distribution for the composite fault model ensures an ω-2 spectral shape over the entire frequency band considered (0.1-20 Hz).

  3. Bio-inspired direct patterning functional nanothin microlines: controllable liquid transfer.

    PubMed

    Wang, Qianbin; Meng, Qingan; Wang, Pengwei; Liu, Huan; Jiang, Lei

    2015-04-28

    Developing a general and low-cost strategy that enables direct patterning of microlines with nanometer thickness from versatile liquid-phase functional materials, and precise positioning of them on various substrates, remains a challenge. Herein, with inspiration from the oriental wisdom of controlling ink transfer with Chinese brushes, we developed a facile and general writing strategy to directly pattern various functional microlines with homogeneous distribution and nanometer-scale thickness. It is demonstrated that the width and thickness of the microlines could be well-controlled by tuning the writing method, providing guidance for the adaptation of this technique to various systems. It is also shown that various functional liquid-phase materials, such as quantum dots, small molecules, polymers, and suspensions of nanoparticles, could be directly written on substrates with their intrinsic physicochemical properties well-preserved. Moreover, this technique enabled direct patterning of liquid-phase materials on certain microdomains, even in a multiple-layered style, enabling microdomain-localized chemical reactions and patterned surface chemical modification. This bio-inspired direct writing device will shed light on the template-free printing of various functional micropatterns, as well as on integrated functional microdevices.

  4. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which we have the least empirical information available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. Validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world, with sample sizes over 100 years, were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
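
    The MEF diagnostic is easy to reproduce empirically: estimate e(u) = E[X - u | X > u] over a range of thresholds and regress; a near-zero slope indicates an exponential tail, a positive slope a sub-exponential one. A sketch on synthetic data (the study's confidence intervals would come from Monte Carlo as a function of sample size):

    ```python
    import numpy as np

    def mean_excess(sample, thresholds):
        """Empirical mean excess function e(u) = E[X - u | X > u]."""
        return np.array([(sample[sample > u] - u).mean() for u in thresholds])

    rng = np.random.default_rng(9)
    datasets = {
        "exponential (light tail)": rng.exponential(scale=10.0, size=20000),
        "Pareto (heavy tail)": 10.0 * (rng.pareto(3.0, size=20000) + 1.0),
    }

    for name, data in datasets.items():
        u = np.quantile(data, np.linspace(0.5, 0.98, 25))
        slope = np.polyfit(u, mean_excess(data, u), 1)[0]
        print(f"{name}: MEF slope ~ {slope:+.3f}")
    # Expected: slope ~ 0 for the exponential, clearly positive for the Pareto.
    ```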

  5. A Langevin approach to multi-scale modeling

    DOE PAGES

    Hirvijoki, Eero

    2018-04-13

    In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this paper, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. Finally, this allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.
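
    The Langevin viewpoint can be illustrated directly: follow an ensemble of test particles through an Euler-Maruyama discretization of a velocity-space Langevin equation and tally crossings over a bulk/tail boundary, which is the raw material for the transition probabilities mentioned above. Drag, diffusion, and cut-off values are arbitrary placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n, dt, steps = 50000, 1e-3, 2000
    nu, D, v_cut = 1.0, 1.0, 2.0     # drag, diffusion, bulk/tail split (assumptions)

    v = rng.standard_normal(n)       # initial test-particle velocities
    tail = np.abs(v) > v_cut

    to_tail = from_tail = 0
    for _ in range(steps):
        # Euler-Maruyama step of the Langevin (stochastic differential) equation.
        v += -nu * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
        now_tail = np.abs(v) > v_cut
        to_tail += int(np.sum(~tail & now_tail))    # bulk -> tail crossings
        from_tail += int(np.sum(tail & ~now_tail))  # tail -> bulk crossings
        tail = now_tail

    # Per-step crossing rates: the ingredients of the coupled bulk/tail
    # evolution equations.
    print("bulk->tail crossings per particle per step:", to_tail / (steps * n))
    print("tail->bulk crossings per particle per step:", from_tail / (steps * n))
    ```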

  6. A Langevin approach to multi-scale modeling

    NASA Astrophysics Data System (ADS)

    Hirvijoki, Eero

    2018-04-01

    In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this letter, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. This allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.

  7. A Langevin approach to multi-scale modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirvijoki, Eero

    In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this paper, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. Finally, this allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.

  8. Resting-state blood oxygen level-dependent functional magnetic resonance imaging for presurgical planning.

    PubMed

    Kamran, Mudassar; Hacker, Carl D; Allen, Monica G; Mitchell, Timothy J; Leuthardt, Eric C; Snyder, Abraham Z; Shimony, Joshua S

    2014-11-01

    Resting-state functional MR imaging (rsfMR imaging) measures spontaneous fluctuations in the blood oxygen level-dependent (BOLD) signal and can be used to elucidate the brain's functional organization. It is used to simultaneously assess multiple distributed resting-state networks. Unlike task-based functional MR imaging, rsfMR imaging does not require task performance. This article presents a brief introduction of rsfMR imaging processing methods followed by a detailed discussion on the use of rsfMR imaging in presurgical planning. Example cases are provided to highlight the strengths and limitations of the technique.

  9. Concurrent application of TMS and near-infrared optical imaging: methodological considerations and potential artifacts

    PubMed Central

    Parks, Nathan A.

    2013-01-01

    The simultaneous application of transcranial magnetic stimulation (TMS) with non-invasive neuroimaging provides a powerful method for investigating functional connectivity in the human brain and the causal relationships between areas in distributed brain networks. TMS has been combined with numerous neuroimaging techniques, including electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET). Recent work has also demonstrated the feasibility and utility of combining TMS with non-invasive near-infrared optical imaging techniques, functional near-infrared spectroscopy (fNIRS) and the event-related optical signal (EROS). Simultaneous TMS and optical imaging affords a number of advantages over other neuroimaging methods but also involves a unique set of methodological challenges and considerations. This paper describes the methodology of concurrently performing optical imaging during the administration of TMS, focusing on experimental design, potential artifacts, and approaches to controlling for these artifacts. PMID:24065911

  10. H + O3 Fourier-transform infrared emission and laser absorption studies of OH(X2Pi) radical - An experimental dipole moment function and state-to-state Einstein A coefficients

    NASA Technical Reports Server (NTRS)

    Nelson, David D., Jr.; Schiffman, Aram; Nesbitt, David J.; Orlando, John J.; Burkholder, James B.

    1990-01-01

    FTIR emission/absorption spectroscopy is used to measure the relative intensities of 88 pairs of rovibrational transitions of OH(X2Pi) distributed over 16 vibrational bands. The experimental technique used to obtain the Einstein A ratios is discussed. The dipole moment function mu(r) that follows from the intensity ratios is presented, along with state-to-state Einstein A coefficients calculated from mu(r).

  11. Ride quality flight testing

    NASA Technical Reports Server (NTRS)

    Swaim, R. L.

    1978-01-01

    The ride quality experienced by passengers is a function of airframe rigid-body and elastic dynamic responses, as well as autopilot and stability augmentation system control inputs. A frequency response method has been developed to select sinusoidal elevator input time histories yielding vertical load factor distributions, within a given limit, as a function of fuselage station. The numerical technique is illustrated by applying two-degree-of-freedom short-period and first symmetric mode equations of motion to a B-1 aircraft at Mach 0.85 during sea level flight conditions.

  12. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by the Mie theory and the Lambert-Beer Law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm under the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
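
    The search stage of such an inversion can be pictured with a compact artificial bee colony optimizer. The sketch below fits the two parameters of a log-normal (L-N) size distribution by minimizing the squared misfit to a noisy synthetic curve; for brevity the forward model is the distribution itself rather than a Mie/Lambert-Beer extinction calculation, so this only illustrates the ABC search, not the paper's direct problem.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def lognormal_pdf(d, mu, sigma):
        """L-N number distribution over particle diameter d."""
        return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))

    d = np.linspace(0.1, 10.0, 200)              # particle diameters (micron), illustrative
    true_mu, true_sigma = np.log(1.5), 0.4
    measured = lognormal_pdf(d, true_mu, true_sigma) + rng.normal(0, 0.002, d.size)

    def cost(p):
        return np.sum((lognormal_pdf(d, p[0], p[1]) - measured) ** 2)

    # Minimal artificial bee colony: employed bees, onlookers, scouts.
    n_food, dim, limit, n_iter = 20, 2, 30, 300
    lo, hi = np.array([np.log(0.2), 0.05]), np.array([np.log(5.0), 1.5])
    foods = lo + rng.random((n_food, dim)) * (hi - lo)
    fit = np.array([cost(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        """Perturb one coordinate toward a random partner; keep the move if it improves."""
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1                # partner index, k != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        c = cost(cand)
        if c < fit[i]:
            foods[i], fit[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                  # employed bee phase
            try_neighbour(i)
        p = 1.0 / (1.0 + fit)
        p /= p.sum()                             # onlooker selection probabilities
        for i in rng.choice(n_food, size=n_food, p=p):
            try_neighbour(i)                     # onlooker bee phase
        for i in np.where(trials > limit)[0]:    # scout phase: abandon stale sources
            foods[i] = lo + rng.random(dim) * (hi - lo)
            fit[i], trials[i] = cost(foods[i]), 0

    best = foods[np.argmin(fit)]
    print(f"recovered mu = {best[0]:.3f} (true {true_mu:.3f}), "
          f"sigma = {best[1]:.3f} (true {true_sigma:.3f})")
    ```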

  13. A limiting analysis for edge effects in angle-ply laminates

    NASA Technical Reports Server (NTRS)

    Hsu, P. W.; Herakovich, C. T.

    1976-01-01

    A zeroth order solution for edge effects in angle-ply composite laminates was developed using perturbation techniques and a limiting free body approach. The general method of solution for laminates is developed and then applied to the special case of a graphite/epoxy laminate. Interlaminar stress distributions are obtained as a function of the laminate thickness-to-width ratio h/b and compared to existing numerical results. The solution predicts stable, continuous stress distributions, determines finite maximum tensile interlaminar normal stress for two laminates, and provides mathematical evidence for singular interlaminar shear stresses.

  14. Quantum computation and analysis of Wigner and Husimi functions: toward a quantum image treatment.

    PubMed

    Terraneo, M; Georgeot, B; Shepelyansky, D L

    2005-06-01

    We study the efficiency of quantum algorithms which aim at obtaining phase-space distribution functions of quantum systems. Wigner and Husimi functions are considered. Different quantum algorithms are envisioned to build these functions, and compared with the classical computation. Different procedures to extract more efficiently information from the final wave function of these algorithms are studied, including coarse-grained measurements, amplitude amplification, and measure of wavelet-transformed wave function. The algorithms are analyzed and numerically tested on a complex quantum system showing different behavior depending on parameters: namely, the kicked rotator. The results for the Wigner function show in particular that the use of the quantum wavelet transform gives a polynomial gain over classical computation. For the Husimi distribution, the gain is much larger than for the Wigner function and is larger with the help of amplitude amplification and wavelet transforms. We discuss the generalization of these results to the simulation of other quantum systems. We also apply the same set of techniques to the analysis of real images. The results show that the use of the quantum wavelet transform allows one to lower dramatically the number of measurements needed, but at the cost of a large loss of information.
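
    For orientation, the sketch below evaluates the Wigner function of a one-dimensional wave function on a grid by direct numerical quadrature of W(x,p) = (1/pi*hbar) * integral dy psi*(x+y) psi(x-y) exp(2ipy/hbar). This is the classical reference computation the quantum algorithms are benchmarked against, not a quantum algorithm; the Gaussian test state and grid parameters are assumptions.

    ```python
    import numpy as np

    hbar = 1.0
    N = 128
    x = np.linspace(-8.0, 8.0, N)
    dx = x[1] - x[0]
    p = np.linspace(-4.0, 4.0, N)

    # Test state: a displaced Gaussian wave packet with a momentum kick (illustrative).
    psi = np.exp(-(x - 1.0) ** 2 / 2.0 + 2j * x / hbar)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

    def wigner(psi, x, p, hbar=1.0):
        """W(x,p) = (1/pi*hbar) * sum_y psi*(x+y) psi(x-y) exp(2ipy/hbar) dy."""
        N, dx = x.size, x[1] - x[0]
        off = np.arange(N) - N // 2              # integer grid offsets for y
        y = off * dx
        phase = np.exp(2j * np.outer(p, y) / hbar)
        W = np.zeros((N, p.size))
        for i in range(N):
            ip, im = i + off, i - off            # indices of x+y and x-y
            ok = (ip >= 0) & (ip < N) & (im >= 0) & (im < N)
            g = np.zeros(N, dtype=complex)
            g[ok] = np.conj(psi[ip[ok]]) * psi[im[ok]]
            W[i] = (phase @ g).real * dx / (np.pi * hbar)
        return W

    W = wigner(psi, x, p)
    dp = p[1] - p[0]
    print(f"phase-space integral of W ~ {W.sum() * dx * dp:.3f} (should be close to 1)")
    ```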

  15. The use of positron emission tomography in pion radiotherapy.

    PubMed

    Goodman, G B; Lam, G K; Harrison, R W; Bergstrom, M; Martin, W R; Pate, B D

    1986-10-01

    The radioactive debris produced by pion radiotherapy can be imaged by the technique of Positron Emission Tomography (PET) as a method of non-invasive in situ verification of the pion treatment. This paper presents the first visualization of the pion stopping distribution within a tumor in a human brain using PET. Together with the tissue functional information provided by standard PET scans using radiopharmaceuticals, the combination of the pion and PET techniques can provide a much better form of radiotherapy than conventional radiation, in both treatment planning and verification.

  16. Some interesting aspects of physisorption stay-time measurements obtained using molecular-beam techniques. [on Ni surface

    NASA Technical Reports Server (NTRS)

    Wilmoth, R. G.; Fisher, S. S.

    1974-01-01

    Stay-time distributions have been obtained for Xe physisorbing on polycrystalline nickel as a function of the target temperature using a pulsed molecular-beam technique. Some interesting effects due to ion bombardment of the surface using He, Ar, and Xe ions are presented. Measured detector signal shapes are found to deviate from those predicted for first-order desorption with velocities corresponding to Maxwellian effusion at the surface temperature. Evidence is found for interaction between beam pulse adsorption and steady-state adsorption of background atoms of the beam species.

  17. Stabilization of Polar Nanoregions in Pb-free Ferroelectrics

    DOE PAGES

    Pramanick, A.; Dmowski, Wojciech; Egami, Takeshi; ...

    2018-05-18

    In this study, the formation of polar nanoregions through solid-solution additions is known to enhance significantly the functional properties of ferroelectric materials. Despite considerable progress in characterizing the microscopic behavior of polar nanoregions (PNR), understanding their real-space atomic structure and the dynamics of their formation remains a considerable challenge. Here, using the method of dynamic pair distribution function, we provide direct insights into the role of solid-solution additions towards the stabilization of polar nanoregions in the Pb-free ferroelectric Ba(Zr,Ti)O3. It is shown that for an optimum level of substitution of Ti by larger Zr ions, the dynamics of atomic displacements for ferroelectric polarization are slowed sufficiently below THz frequencies, which leads to increased local correlation among dipoles within PNRs. The dynamic pair distribution function technique demonstrates a unique capability to obtain insights into locally correlated atomic dynamics in disordered materials, including new Pb-free ferroelectrics, which is necessary to understand and control their functional properties.

  18. Stabilization of Polar Nanoregions in Pb-free Ferroelectrics

    NASA Astrophysics Data System (ADS)

    Pramanick, A.; Dmowski, W.; Egami, T.; Budisuharto, A. Setiadi; Weyland, F.; Novak, N.; Christianson, A. D.; Borreguero, J. M.; Abernathy, D. L.; Jørgensen, M. R. V.

    2018-05-01

    The formation of polar nanoregions through solid-solution additions is known to enhance significantly the functional properties of ferroelectric materials. Despite considerable progress in characterizing the microscopic behavior of polar nanoregions (PNR), understanding their real-space atomic structure and the dynamics of their formation remains a considerable challenge. Here, using the method of dynamic pair distribution function, we provide direct insights into the role of solid-solution additions towards the stabilization of polar nanoregions in the Pb-free ferroelectric Ba(Zr,Ti)O3. It is shown that for an optimum level of substitution of Ti by larger Zr ions, the dynamics of atomic displacements for ferroelectric polarization are slowed sufficiently below THz frequencies, which leads to increased local correlation among dipoles within PNRs. The dynamic pair distribution function technique demonstrates a unique capability to obtain insights into locally correlated atomic dynamics in disordered materials, including new Pb-free ferroelectrics, which is necessary to understand and control their functional properties.

  19. Stabilization of Polar Nanoregions in Pb-free Ferroelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pramanick, A.; Dmowski, Wojciech; Egami, Takeshi

    In this study, the formation of polar nanoregions through solid-solution additions is known to enhance significantly the functional properties of ferroelectric materials. Despite considerable progress in characterizing the microscopic behavior of polar nanoregions (PNR), understanding their real-space atomic structure and the dynamics of their formation remains a considerable challenge. Here, using the method of dynamic pair distribution function, we provide direct insights into the role of solid-solution additions towards the stabilization of polar nanoregions in the Pb-free ferroelectric Ba(Zr,Ti)O3. It is shown that for an optimum level of substitution of Ti by larger Zr ions, the dynamics of atomic displacements for ferroelectric polarization are slowed sufficiently below THz frequencies, which leads to increased local correlation among dipoles within PNRs. The dynamic pair distribution function technique demonstrates a unique capability to obtain insights into locally correlated atomic dynamics in disordered materials, including new Pb-free ferroelectrics, which is necessary to understand and control their functional properties.

  20. Structural and spectroscopic characterization, reactivity study and charge transfer analysis of the newly synthetized 2-(6-hydroxy-1-benzofuran-3-yl) acetic acid

    NASA Astrophysics Data System (ADS)

    Murthy, P. Krishna; Krishnaswamy, G.; Armaković, Stevan; Armaković, Sanja J.; Suchetan, P. A.; Desai, Nivedita R.; Suneetha, V.; SreenivasaRao, R.; Bhargavi, G.; Aruna Kumar, D. B.

    2018-06-01

    The title compound 2-(6-hydroxy-1-benzofuran-3-yl) acetic acid (abbreviated as HBFAA) has been synthesized and characterized by FT-IR, FT-Raman and NMR spectroscopic techniques. The solid-state crystal structure of HBFAA has been determined by the single crystal X-ray diffraction technique. The crystal structure features O-H⋯O and C-H⋯O intermolecular interactions resulting in a two-dimensional supramolecular architecture. The presence of various intermolecular interactions is well supported by Hirshfeld surface analysis. The molecular properties of HBFAA were computed by density functional theory (DFT) using the B3LYP/6-311G++(d,p) method at the ground state in the gas phase; these results were compared with experimental values and show mutual agreement. The vibrational spectral analysis was carried out using FT-IR and FT-Raman spectroscopic techniques, and the assignment of each vibrational wavenumber was made on the basis of the potential energy distribution (PED). Frontier molecular orbital (FMO) analysis, global reactivity descriptors, non-linear optical (NLO) properties and natural bond orbital (NBO) analysis of HBFAA were also computed with the same method. Efforts were made to understand the global and local reactivity properties of the title compound by calculations of MEP, ALIE, BDE and Fukui function surfaces in the gas phase, together with thermodynamic properties. Molecular dynamics simulation and radial distribution functions were also used to understand the influence of water on the stability of the title compound. Charge transfer between molecules of HBFAA has been investigated through the combination of MD simulations and DFT calculations.

  1. Consistent second-order boundary implementations for convection-diffusion lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chew, Jia Wei

    2018-02-01

    In this study, an alternative second-order boundary scheme is proposed under the framework of the convection-diffusion lattice Boltzmann (LB) method for both straight and curved geometries. With the proposed scheme, boundary implementations are developed for the Dirichlet, Neumann and linear Robin conditions in a consistent way. The Chapman-Enskog analysis and the Hermite polynomial expansion technique are first applied to derive the explicit expression for the general distribution function with second-order accuracy. Then, the macroscopic variables involved in the expression for the distribution function are determined by the prescribed macroscopic constraints and the known distribution functions after streaming [see the paragraph after Eq. (29) for the discussions of the "streaming step" in LB method]. After that, the unknown distribution functions are obtained from the derived macroscopic information at the boundary nodes. For straight boundaries, boundary nodes are directly placed at the physical boundary surface, and the present scheme is applied directly. When extending the present scheme to curved geometries, a local curvilinear coordinate system and first-order Taylor expansion are introduced to relate the macroscopic variables at the boundary nodes to the physical constraints at the curved boundary surface. In essence, the unknown distribution functions at the boundary node are derived from the known distribution functions at the same node in accordance with the macroscopic boundary conditions at the surface. Therefore, the advantages of the present boundary implementations are (i) the locality, i.e., no information from neighboring fluid nodes is required; (ii) the consistency, i.e., the physical boundary constraints are directly applied when determining the macroscopic variables at the boundary nodes, thus the three kinds of conditions are realized in a consistent way. It should be noted that the present focus is on two-dimensional cases, and theoretical derivations as well as the numerical validations are performed in the framework of the two-dimensional five-velocity lattice model.
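
    For readers new to the framework, the sketch below runs a bare-bones two-dimensional five-velocity (D2Q5) convection-diffusion LB solver at zero flow velocity, imposing Dirichlet values on two straight walls with the standard anti-bounce-back rule. It illustrates the collision-streaming-boundary cycle only; the consistent second-order scheme proposed in the paper is more elaborate than this textbook rule.

    ```python
    import numpy as np

    # D2Q5 lattice for the convection-diffusion LB equation; zero flow velocity here,
    # so the scheme relaxes toward pure diffusion between two Dirichlet walls.
    c = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    w = np.array([1 / 3, 1 / 6, 1 / 6, 1 / 6, 1 / 6])
    nx, ny, tau = 32, 8, 1.0                  # grid and relaxation time (assumed)
    phi_left, phi_right = 1.0, 0.0            # Dirichlet wall values

    f = np.zeros((5, nx, ny))                 # start from phi = 0 everywhere

    for step in range(15000):
        phi = f.sum(axis=0)
        feq = w[:, None, None] * phi          # zero-velocity equilibrium
        f += (feq - f) / tau                  # BGK collision
        fpost = f.copy()
        for i in range(1, 5):                 # streaming along each lattice direction
            f[i] = np.roll(f[i], shift=c[i], axis=(0, 1))
        # Dirichlet walls at x = 0 and x = nx-1 via the standard anti-bounce-back rule
        # (a simpler substitute for the paper's consistent second-order scheme).
        f[1][0, :] = -fpost[2][0, :] + 2 * w[1] * phi_left
        f[2][-1, :] = -fpost[1][-1, :] + 2 * w[2] * phi_right

    phi = f.sum(axis=0)
    print("steady-state profile (approximately linear):", np.round(phi[::4, ny // 2], 3))
    ```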

  2. Advanced statistical methods for improved data analysis of NASA astrophysics missions

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.

    1992-01-01

    The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.

  3. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
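
    The central observation, that a few Lanczos steps already give useful spectral estimates, is easy to demonstrate. The sketch below runs ten Lanczos steps on a Hermitian positive definite test matrix and compares the Ritz values of the small tridiagonal matrix with the true spectral interval; these estimates are the kind of input from which a weighted Chebyshev preconditioning polynomial would then be constructed (that construction is not shown). The test matrix and step count are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hermitian positive definite test matrix with a known spectrum, illustrative only.
    n = 400
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    eigs = np.linspace(0.1, 10.0, n)
    A = (Q * eigs) @ Q.T

    def lanczos(A, k, rng):
        """k steps of the Lanczos process; returns the k x k tridiagonal matrix T."""
        v = rng.normal(size=A.shape[0])
        v /= np.linalg.norm(v)
        alpha, beta = np.zeros(k), np.zeros(k - 1)
        v_prev, b = np.zeros_like(v), 0.0
        for j in range(k):
            w = A @ v - b * v_prev
            alpha[j] = v @ w
            w -= alpha[j] * v
            if j < k - 1:
                b = np.linalg.norm(w)
                beta[j] = b
                v_prev, v = v, w / b
        return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

    T = lanczos(A, 10, rng)
    ritz = np.linalg.eigvalsh(T)             # Ritz values estimate the spectrum of A
    print(f"Ritz interval after 10 steps: [{ritz.min():.2f}, {ritz.max():.2f}]"
          "  (true: [0.10, 10.00])")
    ```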

  4. Application of Weibull analysis to SSME hardware

    NASA Technical Reports Server (NTRS)

    Gray, L. A. B.

    1986-01-01

    Generally, it has been documented that the wear of engine parts forms a failure distribution which can be approximated by a function developed by Weibull. The purpose here is to examine to what extent the Weibull distribution approximates failure data for designated engine parts of the Space Shuttle Main Engine (SSME). The current testing certification requirements will be examined in order to establish confidence levels. The failure history of SSME parts/assemblies (turbine blades, main combustion chamber, or high pressure fuel pump first stage impellers) which are limited in usage by time or starts will be examined using updated Weibull techniques. Efforts will be made by the investigator to predict failure trends using Weibull techniques for SSME parts (turbine temperature sensors, chamber pressure transducers, actuators, and controllers) which are not severely limited by time or starts.
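
    As a small illustration of this kind of analysis (on synthetic data, not SSME records), the sketch below fits a two-parameter Weibull distribution to simulated times-to-failure by maximum likelihood and reads off the shape parameter, where a shape greater than one indicates wear-out behavior.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Synthetic times-to-failure (hours), drawn from a Weibull with shape 2.2, scale 1500.
    failures = stats.weibull_min.rvs(2.2, scale=1500.0, size=60, random_state=rng)

    # Maximum-likelihood fit, with the location parameter pinned at zero (floc=0).
    shape, loc, scale = stats.weibull_min.fit(failures, floc=0)
    print(f"shape (beta) = {shape:.2f}  -> beta > 1 indicates wear-out failures")
    print(f"scale (eta)  = {scale:.0f} hours (characteristic life)")

    # B10 life: time by which 10% of units are expected to fail.
    b10 = stats.weibull_min.ppf(0.10, shape, loc=0, scale=scale)
    print(f"B10 life = {b10:.0f} hours")
    ```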

  5. Institute for Science and Engineering Simulation (ISES)

    DTIC Science & Technology

    2015-12-18

    performance and other functionalities such as electrical, magnetic, optical, thermal, biological, chemical, and so forth. Structural integrity... transmission electron microscopy (HRSTEM) and three-dimensional atom probe (3DAP) tomography, the true atomic scale structure and change in chemical... atom probe tomography (3DAP) techniques, has permitted characterizing and quantifying the multimodal size distribution of different generations of γ

  6. Hertzian Dipole Radiation over Isotropic Magnetodielectric Substrates

    DTIC Science & Technology

    2015-03-01

    Analytical and numerical techniques in the Green's function treatment of microstrip antennas and scatterers. IEE Proceedings. March 1983:130(2). 3... This report investigates dipole antennas printed on grounded... engineering of thin planar antennas. Since these materials often require complicated constitutive equations to describe their properties rigorously, the

  7. Glyph-based analysis of multimodal directional distributions in vector field ensembles

    NASA Astrophysics Data System (ADS)

    Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

    2015-04-01

    Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
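
    The modeling step can be sketched as a small expectation-maximization fit of a two-component von Mises mixture to a set of 2D flow directions; each fitted component's mean direction, concentration, and weight correspond to the direction, spread, and strength displayed by one lobe of a glyph. The component count, the closed-form kappa approximation, and the synthetic angles are assumptions.

    ```python
    import numpy as np
    from scipy.special import i0

    rng = np.random.default_rng(10)

    # Synthetic ensemble of 2D flow directions: two preferred headings (bimodal).
    angles = np.concatenate([rng.vonmises(0.5, 8.0, 300), rng.vonmises(2.6, 4.0, 200)])

    def vm_pdf(theta, mu, kappa):
        return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))

    K = 2
    w = np.full(K, 1.0 / K)
    mu = np.array([0.0, np.pi])                  # initial mean-direction guesses
    kappa = np.array([1.0, 1.0])

    for _ in range(100):                         # EM iterations
        # E-step: responsibilities of each component for each angle.
        dens = np.stack([w[k] * vm_pdf(angles, mu[k], kappa[k]) for k in range(K)])
        resp = dens / dens.sum(axis=0)
        # M-step: weighted circular mean and concentration per component.
        for k in range(K):
            r = resp[k]
            C, S = np.sum(r * np.cos(angles)), np.sum(r * np.sin(angles))
            mu[k] = np.arctan2(S, C)
            Rbar = np.hypot(C, S) / r.sum()
            # Fisher's approximation inverting I1/I0 for the concentration kappa.
            kappa[k] = Rbar * (2.0 - Rbar ** 2) / (1.0 - Rbar ** 2)
            w[k] = r.mean()

    for k in range(K):
        print(f"mode {k}: direction = {mu[k]:.2f} rad, "
              f"spread kappa = {kappa[k]:.1f}, weight = {w[k]:.2f}")
    ```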

  8. [Spatial distribution pattern of Pontania dolichura larvae and sampling technique].

    PubMed

    Zhang, Feng; Chen, Zhijie; Zhang, Shulian; Zhao, Huiyan

    2006-03-01

    In this paper, the spatial distribution pattern of Pontania dolichura larvae was analyzed with Taylor's power law, Iwao's distribution function, and six aggregation indexes. The results showed that the spatial distribution pattern of P. dolichura larvae was aggregated, that the basic component of the distribution was the individual colony, and that the aggregation intensity increased with density. On branches, the aggregation was caused by the egg-laying behavior of adults and the spatial position of leaves, while on leaves, the aggregation was caused by the spatial position of new leaves in spring when m < 2.37, and by the spatial position of new leaves in spring together with the eclosion and egg-laying behavior when m > 2.37. By using the parameters alpha and beta in Iwao's m* - m regression equation, the optimal and sequential sampling numbers were determined.
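
    Both named regressions are ordinary least-squares fits on per-plot sample statistics. The sketch below computes Taylor's power law (log variance against log mean) and Iwao's m*-m regression (mean crowding m* = m + V/m - 1 against mean m) from synthetic counts, then uses Iwao's alpha and beta in the usual optimal sample size formula n = (t/D)^2 * ((alpha + 1)/m + beta - 1); the negative-binomial counts and the precision level D are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic larval counts: 30 plots, 20 sampling units each, aggregated (neg. binomial).
    means_true = rng.uniform(0.5, 8.0, size=30)
    plots = [rng.negative_binomial(2.0, 2.0 / (2.0 + m), size=20) for m in means_true]

    m = np.array([p.mean() for p in plots])
    v = np.array([p.var(ddof=1) for p in plots])

    # Taylor's power law: log V = log a + b log m  (b > 1 indicates aggregation).
    b, log_a = np.polyfit(np.log(m), np.log(v), 1)

    # Iwao's regression: m* = alpha + beta * m, with mean crowding m* = m + V/m - 1.
    m_star = m + v / m - 1.0
    beta, alpha = np.polyfit(m, m_star, 1)
    print(f"Taylor b = {b:.2f},  Iwao alpha = {alpha:.2f}, beta = {beta:.2f}")

    # Optimal number of sampling units at density m for precision D (t ~ 1.96, 95%).
    t, D, dens = 1.96, 0.2, 3.0
    n_opt = (t / D) ** 2 * ((alpha + 1.0) / dens + beta - 1.0)
    print(f"optimal number of sampling units at m = {dens}: {n_opt:.0f}")
    ```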

  9. Photon event distribution sampling: an image formation technique for scanning microscopes that permits tracking of sub-diffraction particles with high spatial and temporal resolutions.

    PubMed

    Larkin, J D; Publicover, N G; Sutko, J L

    2011-01-01

    In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens.
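
    A minimal numerical sketch of the two ideas in this abstract: each detected photon position is replaced by a Gaussian probability density whose width reflects the position uncertainty and the densities are summed into an image, while localization uses the sample mean and covariance of the photon positions (the multivariate normal estimate) instead of fitting a Gaussian to a binned image. The emitter position, PSF width, grid, and photon count are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # ~200 photons detected from one sub-diffraction emitter at (1.20, -0.35) um,
    # each position blurred by a PSF of sigma = 0.12 um (illustrative numbers).
    true_xy, sigma_psf = np.array([1.20, -0.35]), 0.12
    photons = true_xy + sigma_psf * rng.normal(size=(200, 2))

    # PEDS image formation: sum one Gaussian kernel per photon, no pixel binning.
    gx_axis = np.linspace(0.7, 1.7, 128)
    gy_axis = np.linspace(-0.85, 0.15, 128)
    gx, gy = np.meshgrid(gx_axis, gy_axis, indexing="ij")
    image = np.zeros_like(gx)
    for px, py in photons:
        image += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * sigma_psf ** 2))

    # Localization from the multivariate normal estimate (mean and covariance).
    center = photons.mean(axis=0)
    cov = np.cov(photons.T)
    se = np.sqrt(np.diag(cov) / photons.shape[0])   # standard error of the mean
    # With ~200 photons and sigma = 0.12 um this is of order 10 nm, i.e. the
    # precision scale quoted in the abstract.
    print(f"estimated position = {center.round(4)} um, std. error ~ {se.round(4)} um")
    ```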

  10. Laser induced fluorescence technique for detecting organic matter in East China Sea

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Wang, Tianyu; Pan, Delu; Huang, Haiqing

    2017-10-01

    A laser induced fluorescence (LIF) technique for fast diagnosis of chromophoric dissolved organic matter (CDOM) in water is discussed. We have developed a new field-portable laser fluorometer for rapid fluorescence measurements. In addition, the fluorescence spectral characteristics of fluorescent constituents (e.g., CDOM, chlorophyll-a) were analyzed with a spectral deconvolution method using a bi-Gaussian peak function. In situ measurements by the LIF technique compared well with values measured by the conventional spectrophotometer method in the laboratory. A significant correlation (R2 = 0.93) was observed between fluorescence measured by the technique and absorption measured by the laboratory spectrophotometer. The influence of temperature variation on LIF measurements was investigated in the lab and a temperature coefficient was deduced for fluorescence correction. Distributions of CDOM fluorescence measured using this technique along the East China Sea coast are presented. The in situ results demonstrate the utility of the LIF technique for rapid detection of dissolved organic matter.

  11. Spatial frequency performance limitations of radiation dose optimization and beam positioning

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Stapleton, Shawn; Chaudary, Naz; Lindsay, Patricia E.; Jaffray, David A.

    2018-06-01

    The flexibility and sophistication of modern radiotherapy treatment planning and delivery methods have advanced techniques to improve the therapeutic ratio. Contemporary dose optimization and calculation algorithms facilitate radiotherapy plans which closely conform the three-dimensional dose distribution to the target, with beam shaping devices and image guided field targeting ensuring the fidelity and accuracy of treatment delivery. Ultimately, dose distribution conformity is limited by the maximum deliverable dose gradient; shallow dose gradients challenge techniques to deliver a tumoricidal radiation dose while minimizing dose to surrounding tissue. In this work, this ‘dose delivery resolution’ observation is rigorously formalized for a general dose delivery model based on the superposition of dose kernel primitives. It is proven that the spatial resolution of a delivered dose is bounded by the spatial frequency content of the underlying dose kernel, which in turn defines a lower bound in the minimization of a dose optimization objective function. In addition, it is shown that this optimization is penalized by a dose deposition strategy which enforces a constant relative phase (or constant spacing) between individual radiation beams. These results are further refined to provide a direct, analytic method to estimate the dose distribution arising from the minimization of such an optimization function. The efficacy of the overall framework is demonstrated on an image guided small animal microirradiator for a set of two-dimensional hypoxia guided dose prescriptions.

  12. Modeling a space-based quantum link that includes an adaptive optics system

    NASA Astrophysics Data System (ADS)

    Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.

    2017-10-01

    Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses consist of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings which can then be converted to keys for encryption systems. When these keys are incorporated along with symmetric encryption techniques such as a one-time pad, this method of key generation and encryption is resistant to future advances in quantum computing, which will significantly degrade the effectiveness of current asymmetric key sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments and, finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground and the effects of turbulence on the transmitted optical pulse are described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the completed low-earth-orbit-to-ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system, as well as the future work necessary to show this impact, is described.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM-Parallel, Distributed computation Graph Model-a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG-Distributed Computing Precedence Graph-model, and the PAM-Process Architecture Model-model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA-VISual Assistant, a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations which are procedures that can be executed by VISA.

  14. A 3D tomographic reconstruction method to analyze Jupiter's electron-belt emission observations

    NASA Astrophysics Data System (ADS)

    Santos-Costa, Daniel; Girard, Julien; Tasse, Cyril; Zarka, Philippe; Kita, Hajime; Tsuchiya, Fuminori; Misawa, Hiroaki; Clark, George; Bagenal, Fran; Imai, Masafumi; Becker, Heidi N.; Janssen, Michael A.; Bolton, Scott J.; Levin, Steve M.; Connerney, John E. P.

    2017-04-01

    Multi-dimensional reconstruction techniques of Jupiter's synchrotron radiation from radio-interferometric observations were first developed by Sault et al. [Astron. Astrophys., 324, 1190-1196, 1997]. The tomographic-like technique introduced 20 years ago permitted the first 3-dimensional mapping of the brightness distribution around the planet. This technique has demonstrated the advantage of being weakly dependent on planetary field models. It also does not require any knowledge of the energy and spatial distributions of the radiating electrons. On the downside, it is assumed that the volume emissivity of any punctual point source around the planet is isotropic. This assumption becomes incorrect when mapping the brightness distribution for non-equatorial point sources or any point sources from Juno's perspective. In this paper, we present our modeling effort to bypass the isotropy issue. Our approach is to use radio-interferometric observations and determine the 3-D brightness distribution in a cylindrical coordinate system. For each set (z, r), we constrain the longitudinal distribution with a Fourier series, and the anisotropy is addressed with a simple periodic function when possible. We develop this new method over a wide range of frequencies using past VLA and LOFAR observations of Jupiter. We plan to test this reconstruction method with observations of Jupiter that are currently being carried out with LOFAR and GMRT in support of the Juno mission. We describe how this new 3D tomographic reconstruction method provides new model constraints on the energy and spatial distributions of Jupiter's ultra-relativistic electrons close to the planet and can be used to interpret Juno MWR observations of Jupiter's electron-belt emission and to assist in evaluating the background noise from the radiation environment in the atmospheric measurements.

  15. Study of residual stresses in CT test specimens welded by electron beam

    NASA Astrophysics Data System (ADS)

    Papushkin, I. V.; Kaisheva, D.; Bokuchava, G. D.; Angelov, V.; Petrov, P.

    2018-03-01

    The paper reports results of residual stress distribution studies in CT specimens reconstituted by electron beam welding (EBW). The main aim of the study is to evaluate the applicability of the welding technique for CT specimen reconstitution. Thus, the temperature distribution during electron beam welding of a CT specimen was calculated using Green's functions, and the residual stress distribution was determined experimentally using neutron diffraction. Time-of-flight neutron diffraction experiments were performed on a Fourier stress diffractometer at the IBR-2 fast pulsed reactor in FLNP JINR (Dubna, Russia). The neutron diffraction data yielded an estimated maximal stress level of ±180 MPa in the welded joint.

  16. Models of GexSe1-x

    NASA Astrophysics Data System (ADS)

    Malouin, Marc-André.; Mousseau, Normand

    2008-03-01

    We present numerical models of chalcogenide glasses constructed using the effective two and three body interaction potential developed by Mauro and Varshneya [1] combined with the activation-relaxation technique (ART nouveau) [2]. Structures are prepared starting from a random distribution, avoiding biases and crystalline remnants. Structural properties are studied mainly via characteristic system measurements including partial and total radial distribution functions, bond angle distributions, mean coordinations and bonds population. Results are shown for GexSe1-x for various x concentrations and compared to both experimental measurements and ab initio simulation results. [1] J.C. Mauro and A.K. Varshneya, J. Am. Ceram. Soc., 89 [7] 2323-6 (2006). [2] R. Malek and N. Mousseau, Phys. Rev. E 62, 7723 (2000).

  17. Two-Photon Scanning Photochemical Microscopy: Mapping Ligand-Gated Ion Channel Distributions

    NASA Astrophysics Data System (ADS)

    Denk, Winfried

    1994-07-01

    The locations and densities of ionotropic membrane receptors, which are responsible for receiving synaptic transmission throughout the nervous system, are of prime importance in understanding the function of neural circuits. It is shown that the highly localized liberation of "caged" neurotransmitters by two-photon absorption-mediated photoactivation can be used in conjunction with recording the induced whole-cell current to determine the distribution of ligand-gated ion channels. The technique is potentially sensitive enough to detect individual channels with diffraction-limited spatial resolution. Images of the distribution of nicotinic acetylcholine receptors on cultured BC3H1 cells were obtained using a photoactivatable precursor of the nicotinic agonist carbamoylcholine.

  18. Element enrichment factor calculation using grain-size distribution and functional data regression.

    PubMed

    Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

    2015-01-01

    In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions.
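
    To make the contrast with ordinary normalization concrete, the sketch below implements the functional-regression idea in its simplest discretized form: each sample's grain-size curve, evaluated on a common grid, is the predictor function, and a regularized least-squares fit returns a coefficient function beta(s) relating grain size s to concentration, with a second-difference penalty playing the role of the regularization mentioned in the abstract. The synthetic curves and penalty strength are assumptions, so this mimics rather than reproduces the paper's method.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n, m = 80, 60                       # samples x grain-size grid points
    s = np.linspace(0.0, 1.0, m)        # normalized grain-size axis
    ds = s[1] - s[0]

    # Synthetic cumulative grain-size curves (monotone, random shape per sample).
    X = np.cumsum(rng.dirichlet(np.full(m, 0.5), size=n), axis=1)

    # True coefficient function: fine fractions carry the pollutant (illustrative).
    beta_true = np.exp(-((s - 0.15) ** 2) / 0.01)
    y = (X @ beta_true) * ds + rng.normal(0, 0.01, n)

    # Functional linear regression, discretized: y = integral X(s) beta(s) ds + eps,
    # with a second-difference roughness penalty keeping beta(s) smooth.
    D = np.diff(np.eye(m), n=2, axis=0)            # second-difference operator
    lam = 1e-4
    A = X * ds                                     # quadrature weights folded in
    beta_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

    print(f"max |beta_hat - beta_true| = {np.abs(beta_hat - beta_true).max():.3f}")
    ```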

  19. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SAR's) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raymund, T.D.

    Recently, several tomographic techniques for ionospheric electron density imaging have been proposed. These techniques reconstruct a vertical slice image of electron density using total electron content data. The data are measured between a low orbit beacon satellite and fixed receivers located along the projected orbital path of the satellite. By using such tomographic techniques, it may be possible to inexpensively (relative to incoherent scatter techniques) image the ionospheric electron density in a vertical plane several times per day. The satellite and receiver geometry used to measure the total electron content data causes the data to be incomplete; that is, the measured data do not contain enough information to completely specify the ionospheric electron density distribution in the region between the satellite and the receivers. A new algorithm is proposed which allows the incorporation of other complementary measurements, such as those from ionosondes, and also includes ways to include a priori information about the unknown electron density distribution in the reconstruction process. The algorithm makes use of two-dimensional basis functions. Illustrative application of this algorithm is made to simulated cases with good results. The technique is also applied to real total electron content (TEC) records collected in Scandinavia in conjunction with the EISCAT incoherent scatter radar. The tomographic reconstructions are compared with the incoherent scatter electron density images of the same region of the ionosphere.

  1. Intra-operative multi-site stimulation: Expanding methodology for cortical brain mapping of language functions

    PubMed Central

    Gonen, Tal; Gazit, Tomer; Korn, Akiva; Kirschner, Adi; Perry, Daniella; Hendler, Talma; Ram, Zvi

    2017-01-01

    Direct cortical stimulation (DCS) is considered the gold-standard for functional cortical mapping during awake surgery for brain tumor resection. DCS is performed by stimulating one local cortical area at a time. We present a feasibility study using an intra-operative technique aimed at improving our ability to map brain functions which rely on activity in distributed cortical regions. Following standard DCS, Multi-Site Stimulation (MSS) was performed in 15 patients by applying simultaneous cortical stimulations at multiple locations. Language functioning was chosen as a case-cognitive domain due to its relatively well-known cortical organization. MSS, performed at sites that did not produce disruption when applied in a single stimulation point, revealed additional language dysfunction in 73% of the patients. Functional regions identified by this technique were presumed to be significant to language circuitry and were spared during surgery. No new neurological deficits were observed in any of the patients following surgery. Though the neuro-electrical effects of MSS need further investigation, this feasibility study may provide a first step towards sophistication of intra-operative cortical mapping. PMID:28700619

  2. Intra-operative multi-site stimulation: Expanding methodology for cortical brain mapping of language functions.

    PubMed

    Gonen, Tal; Gazit, Tomer; Korn, Akiva; Kirschner, Adi; Perry, Daniella; Hendler, Talma; Ram, Zvi

    2017-01-01

    Direct cortical stimulation (DCS) is considered the gold-standard for functional cortical mapping during awake surgery for brain tumor resection. DCS is performed by stimulating one local cortical area at a time. We present a feasibility study using an intra-operative technique aimed at improving our ability to map brain functions which rely on activity in distributed cortical regions. Following standard DCS, Multi-Site Stimulation (MSS) was performed in 15 patients by applying simultaneous cortical stimulations at multiple locations. Language functioning was chosen as a case-cognitive domain due to its relatively well-known cortical organization. MSS, performed at sites that did not produce disruption when applied in a single stimulation point, revealed additional language dysfunction in 73% of the patients. Functional regions identified by this technique were presumed to be significant to language circuitry and were spared during surgery. No new neurological deficits were observed in any of the patients following surgery. Though the neuro-electrical effects of MSS need further investigation, this feasibility study may provide a first step towards sophistication of intra-operative cortical mapping.

  3. Multiple-wavelength neutron holography with pulsed neutrons

    PubMed Central

    Hayashi, Kouichi; Ohoyama, Kenji; Happo, Naohisa; Matsushita, Tomohiro; Hosokawa, Shinya; Harada, Masahide; Inamura, Yasuhiro; Nitani, Hiroaki; Shishido, Toetsu; Yubuta, Kunio

    2017-01-01

    Local structures around impurities in solids provide important information for understanding the mechanisms of material functions, because most of them are controlled by dopants. For this purpose, the x-ray absorption fine structure method, which provides radial distribution functions around specific elements, is most widely used. However, a similar method using neutron techniques has not yet been developed. If one can establish a method of local structural analysis with neutrons, then a new frontier of materials science can be explored owing to the specific nature of neutron scattering—that is, its high sensitivity to light elements and magnetic moments. Multiple-wavelength neutron holography using the time-of-flight technique with pulsed neutrons has great potential to realize this. We demonstrated multiple-wavelength neutron holography using a Eu-doped CaF2 single crystal and obtained a clear three-dimensional atomic image around trivalent Eu substituted for divalent Ca, revealing an interesting feature of the local structure that allows it to maintain charge neutrality. The new holography technique is expected to provide new information on local structures using the neutron technique. PMID:28835917

  4. Multiple-wavelength neutron holography with pulsed neutrons.

    PubMed

    Hayashi, Kouichi; Ohoyama, Kenji; Happo, Naohisa; Matsushita, Tomohiro; Hosokawa, Shinya; Harada, Masahide; Inamura, Yasuhiro; Nitani, Hiroaki; Shishido, Toetsu; Yubuta, Kunio

    2017-08-01

    Local structures around impurities in solids provide important information for understanding the mechanisms of material functions, because most of them are controlled by dopants. For this purpose, the x-ray absorption fine structure method, which provides radial distribution functions around specific elements, is most widely used. However, a similar method using neutron techniques has not yet been developed. If one can establish a method of local structural analysis with neutrons, then a new frontier of materials science can be explored owing to the specific nature of neutron scattering-that is, its high sensitivity to light elements and magnetic moments. Multiple-wavelength neutron holography using the time-of-flight technique with pulsed neutrons has great potential to realize this. We demonstrated multiple-wavelength neutron holography using a Eu-doped CaF 2 single crystal and obtained a clear three-dimensional atomic image around trivalent Eu substituted for divalent Ca, revealing an interesting feature of the local structure that allows it to maintain charge neutrality. The new holography technique is expected to provide new information on local structures using the neutron technique.

  5. Vibrational study and Natural Bond Orbital analysis of serotonin in monomer and dimer states by density functional theory

    NASA Astrophysics Data System (ADS)

    Borah, Mukunda Madhab; Devi, Th. Gomti

    2018-06-01

    The vibrational spectral analysis of serotonin and its dimer was carried out using the Fourier Transform Infrared (FTIR) and Raman techniques. The equilibrium geometrical parameters, harmonic vibrational wavenumbers, frontier orbitals, Mulliken atomic charges, natural bond orbitals, first order hyperpolarizability and some optimized energy parameters were computed by density functional theory with the 6-31G(d,p) basis set. The detailed analysis of the vibrational spectra has been carried out by computing the Potential Energy Distribution (PED, %) with the help of the Vibrational Energy Distribution Analysis (VEDA) program. The second order delocalization energies E(2) confirm the occurrence of intramolecular charge transfer (ICT) within the molecule. The computed wavenumbers of the serotonin monomer and dimer were found to be in good agreement with the experimental Raman and IR values.

  6. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic

    PubMed Central

    YOKOYAMA, Jun’ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case. PMID:25504231
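
    As a toy version of the comparison described above, the sketch below evaluates the log-likelihood ratio for a known template of known amplitude under two noise models: white unit-variance Gaussian noise (the matched filter) and independent Student's-t noise. The template shape, the three degrees of freedom, and the whiteness assumption are simplifications of the paper's setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    n, nu, amp = 4096, 3.0, 2.0
    t = np.arange(n)
    # Toy "chirp-like" template: windowed sinusoid, normalized to unit energy.
    template = np.sin(2 * np.pi * t / 64) * np.exp(-((t - n / 2) ** 2) / (2 * 400.0 ** 2))
    template /= np.linalg.norm(template)

    noise = rng.standard_t(nu, size=n)      # heavy-tailed stationary noise, white here
    data = amp * template + noise           # injected signal of known amplitude

    def log_lr_gauss(x, h, a):
        """Matched-filter log-likelihood ratio for white, unit-variance Gaussian noise."""
        return a * np.dot(x, h) - 0.5 * a ** 2     # h has unit norm

    def log_lr_student_t(x, h, a, nu):
        """Log-likelihood ratio when each noise sample is Student's t with nu d.o.f."""
        return -(nu + 1) / 2 * np.sum(np.log1p((x - a * h) ** 2 / nu)
                                      - np.log1p(x ** 2 / nu))

    print(f"Gaussian (matched filter) statistic: {log_lr_gauss(data, template, amp):.2f}")
    print(f"Student-t adapted statistic:         {log_lr_student_t(data, template, amp, nu):.2f}")
    ```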

  7. Surface-functionalized cockle shell–based calcium carbonate aragonite polymorph as a drug nanocarrier

    PubMed Central

    Mohd Abd Ghafar, Syairah Liyana; Hussein, Mohd Zobir; Rukayadi, Yaya; Abu Bakar Zakaria, Md Zuki

    2017-01-01

    Calcium carbonate aragonite polymorph nanoparticles derived from cockle shells were prepared using a surface functionalization method followed by purification steps. The size, morphology, and surface properties of the nanoparticles were characterized using transmission electron microscopy, field emission scanning electron microscopy, dynamic light scattering, zetasizer, X-ray powder diffraction, and Fourier transform infrared spectrometry techniques. The potential of the surface-functionalized calcium carbonate aragonite polymorph nanoparticles as a drug-delivery agent was assessed through in vitro drug-loading and drug-release tests. Transmission electron microscopy, field emission scanning electron microscopy, and particle size distribution analyses revealed that size, morphology, and surface characteristics had been improved after the surface functionalization process. The zeta potential of the nanoparticles was found to be increased, demonstrating better dispersion among the nanoparticles. Purification techniques showed a further improvement in the overall distribution of nanoparticles toward more refined size ranges <100 nm, which specifically favor drug-delivery applications. The purity of the aragonite phase and the chemical analyses were verified by X-ray powder diffraction and Fourier transform infrared spectrometry studies. The in vitro biological response of hFOB 1.19 osteoblast cells showed that surface functionalization could improve the cytotoxicity of the cockle shell-based calcium carbonate aragonite nanocarrier. The sample was also sensitive to pH changes and demonstrated a good ability to load and sustain drug release in vitro. This study thus indicates that calcium carbonate aragonite polymorph nanoparticles derived from cockle shells, a natural biomaterial, with modified surface characteristics are promising and can be applied as efficient carriers for drug delivery. PMID:28572724

  8. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and mass-storage management. These issues were addressed in the design of the digital image/video database system developed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and descriptions of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the descriptions of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis: it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  9. A simple molecular orbital treatment of current distributions in quantum transport through molecular junctions

    NASA Astrophysics Data System (ADS)

    Jhan, Sin-Mu; Jin, Bih-Yaw

    2017-11-01

    A simple molecular orbital treatment of local current distributions inside single molecular junctions is developed in this paper. Using the first-order perturbation theory and nonequilibrium Green's function techniques in the framework of Hückel theory, we show that the leading contributions to local current distributions are directly proportional to the off-diagonal elements of transition density matrices. Under the orbital approximation, the major contributions to local currents come from a few dominant molecular orbital pairs which are mixed by the interactions between the molecule and electrodes. A few simple molecular junctions consisting of single- and multi-ring conjugated systems are used to demonstrate that local current distributions inside molecular junctions can be decomposed by partial sums of a few leading contributing transition density matrices.
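
    A compact sketch of the kind of calculation implied here: a Hückel (tight-binding) chain coupled to two wide-band leads, with the retarded Green's function giving the Landauer transmission and bond currents read off from the off-diagonal elements of a density-like correlation matrix. The site energies, hopping, broadening, and probe energy are illustrative, and the bond-current formula is the standard tight-binding one rather than the paper's perturbative decomposition into molecular orbital pairs.

    ```python
    import numpy as np

    # Hückel / tight-binding parameters (illustrative): 6-site chain, hopping t_hop.
    n, t_hop = 6, -1.0
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = t_hop

    # Wide-band leads coupled to the two end sites with broadening gamma.
    gamma = 0.5
    Sigma_L, Sigma_R = np.zeros((n, n), complex), np.zeros((n, n), complex)
    Sigma_L[0, 0] = -0.5j * gamma
    Sigma_R[-1, -1] = -0.5j * gamma
    Gam_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gam_R = 1j * (Sigma_R - Sigma_R.conj().T)

    E = 0.3                                     # probe energy (assumed)
    G = np.linalg.inv(E * np.eye(n) - H - Sigma_L - Sigma_R)   # retarded Green's function

    # Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger].
    T = np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real
    print(f"T(E = {E}) = {T:.3f}")

    # Local bond currents from the correlation matrix A = G Gamma_L G^dagger:
    # J_ij ~ H_ij * Im(A_ij); the off-diagonal elements carry the current.
    A = G @ Gam_L @ G.conj().T
    J = 2.0 * H * A.imag                        # physical prefactor (2e/hbar) folded in
    print("bond currents along the chain:", np.round(np.diag(J, 1), 3))
    ```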

  10. Statistical detection of patterns in unidimensional distributions by continuous wavelet transforms

    NASA Astrophysics Data System (ADS)

    Baluev, R. V.

    2018-04-01

    Objective detection of specific patterns in statistical distributions, like groupings, gaps, or abrupt transitions between different subsets, is a task with a rich range of applications in astronomy: Milky Way stellar population analysis, investigations of exoplanet diversity, Solar System minor bodies statistics, extragalactic studies, etc. We adapt the powerful technique of wavelet transforms to this generalized task, placing a strong emphasis on the assessment of the significance of pattern detections. Among other things, our method also involves optimal minimum-noise wavelets and minimum-noise reconstruction of the distribution density function. Based on this development, we construct a self-closed algorithmic pipeline aimed at processing statistical samples. It is currently applicable to one-dimensional distributions only, but it is flexible enough to undergo further generalizations and development.
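    As a rough illustration of the approach, the sketch below bins a sample into an empirical density and scans it with a continuous wavelet transform; the Mexican-hat wavelet from PyWavelets and the crude global threshold stand in for the paper's optimal minimum-noise wavelets and rigorous significance assessment, and all data are synthetic.

```python
import numpy as np
import pywt

# Toy sample: a broad background population plus a narrow grouping at x = 2.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(0.0, 1.0, 2000),
                         rng.normal(2.0, 0.05, 100)])

# Bin the sample into an empirical density on a regular grid.
counts, edges = np.histogram(sample, bins=512, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Continuous wavelet transform of the binned density across scales.
scales = np.arange(1, 64)
coef, _ = pywt.cwt(counts, scales, 'mexh')

# Flag (scale, position) cells whose amplitude stands far above a robust
# noise estimate; a crude stand-in for a rigorous significance assessment.
sigma = np.median(np.abs(coef)) / 0.6745
idx = np.argwhere(np.abs(coef) > 6 * sigma)
if idx.size:
    s, p = idx[np.argmax(np.abs(coef)[tuple(idx.T)])]
    print(f"strongest feature near x = {centers[p]:.2f} at scale {scales[s]}")
```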

  11. Instantaneous phase estimation to measure weak velocity variations: application to noise correlation on seismic data at the exploration scale

    NASA Astrophysics Data System (ADS)

    Corciulo, M.; Roux, P.; Campillo, M.; Dubucq, D.

    2010-12-01

    Passive imaging from noise cross-correlation is a consolidated analysis at continental and regional scales, whereas its use at the local scale for seismic exploration purposes is still uncertain. The development of passive imaging by cross-correlation analysis is based on the extraction of the Green’s function from seismic noise data. In a field that is completely random in time and space, cross-correlation permits retrieval of the complete Green’s function regardless of the complexity of the medium. At the exploration scale and at frequencies above 2 Hz, the noise sources are not ideally distributed around the stations, which strongly affects the extraction of the direct arrivals from the noise cross-correlation process. To overcome this problem, the coda waves extracted from noise correlation can be useful. Coda waves describe long and scattered paths sampling the medium in different ways, such that they become sensitive to weak velocity variations without being dependent on the noise source distribution. Indeed, scatterers in the medium behave as a set of secondary noise sources which randomize the spatial distribution of noise sources contributing to the coda waves in the correlation process. We developed a new technique to measure weak velocity changes based on the computation of the local phase variations (instantaneous phase variation, or IPV) of the cross-correlated signals. This newly developed technique builds on the doublet and stretching techniques classically used to monitor weak velocity variations from coda waves. We apply IPV to data acquired in North America (Canada) on a 1-km-square seismic network comprising 397 stations. Data used to study temporal variations are cross-correlated signals computed from 10 minutes of ambient noise in the 2-5 Hz frequency band. As the data set was acquired over five days, about 660 files are processed to perform a complete temporal analysis for each station pair. IPV permits estimation of the phase shift over the entire signal length without any assumption about the medium velocity. The instantaneous phase is computed using the Hilbert transform of the signal. For each station pair, we measure the phase difference between successive correlation functions calculated from 10 minutes of ambient noise. We then fit the instantaneous phase shift with a first-order polynomial function; the measured velocity variation corresponds to the slope of this fit. Compared to other techniques, the advantage of IPV is that it is a very fast procedure which efficiently measures velocity variations in large data sets. Both experimental results and numerical tests on synthetic signals will be presented to assess the reliability of the IPV technique, with comparison to the doublet and stretching methods.
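    The core of the IPV measurement can be sketched in a few lines, assuming two correlation functions c_ref and c_cur sampled at fs Hz: take instantaneous phases from the analytic signals (Hilbert transform), fit the phase difference with a first-order polynomial, and read the relative velocity change from the slope. The function name and the synthetic stretched wavelet below are illustrative, not the authors' code.

```python
import numpy as np
from scipy.signal import hilbert

def ipv_dv_over_v(c_ref, c_cur, fs):
    """Estimate a relative velocity change from two correlation functions
    via the slope of the instantaneous-phase difference (IPV)."""
    phi_ref = np.unwrap(np.angle(hilbert(c_ref)))
    phi_cur = np.unwrap(np.angle(hilbert(c_cur)))
    t = np.arange(len(c_ref)) / fs
    # First-order polynomial fit: the slope (rad/s) encodes the apparent
    # stretching of the waveform, i.e. dv/v up to a sign.
    slope, _ = np.polyfit(t, phi_cur - phi_ref, 1)
    return slope

# Synthetic check: a 3 Hz wavelet and a copy stretched by 0.5%.
fs, f0 = 200.0, 3.0
t = np.arange(0, 20, 1 / fs)
env = np.exp(-0.1 * t)
c_ref = env * np.cos(2 * np.pi * f0 * t)
c_cur = env * np.cos(2 * np.pi * f0 * 1.005 * t)   # mimics dv/v = -0.5%
slope = ipv_dv_over_v(c_ref, c_cur, fs)
print(f"phase slope = {slope:.4f} rad/s, "
      f"dv/v ~ {-slope / (2 * np.pi * f0) * 100:.2f} %")
```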

  12. Quantification of spatial distribution and spread of bacteria in soil at microscale

    NASA Astrophysics Data System (ADS)

    Juyal, Archana; Eickhorst, Thilo; Falconer, Ruth; Baveye, Philippe; Otten, Wilfred

    2015-04-01

    Soil bacteria play an essential role in the functioning of ecosystems and the maintenance of biogeochemical cycles. Soil is a complex heterogeneous environment comprising highly variable and dynamic micro-habitats that have significant impacts on the growth and activity of resident microbiota, including bacteria and fungi. Bacteria occupy a very small portion of the available pore space in soil, which means that their spatial arrangement has a huge impact on contact with their targets and on the way they interact to carry out their functions. Due to technical limitations, there is scant information on the spatial distribution of indigenous or introduced bacteria at the microhabitat scale. There is a need to understand the interaction between soil structure and microorganisms, including fungi, for ecosystem-level processes such as carbon sequestration, and to improve predictive models for soil management. In this work, a combination of techniques was used, including X-ray CT to characterize the soil structure and in-situ detection via fluorescence microscopy to visualize and quantify bacteria in soil thin sections. Pseudomonas fluorescens bacteria were introduced into sterilized soil of aggregate size 1-2 mm packed at bulk densities of 1.3 g cm^-3 and 1.5 g cm^-3. A subset of samples was fixed with paraformaldehyde and subsequently impregnated with resin. DAPI and fluorescence in situ hybridization (FISH) were used to visualize bacteria in thin sections of soil cores by epifluorescence microscopy and to quantify the spatial distribution of bacteria in soil. The pore geometry of the soil was quantified after X-ray microtomography scanning. The distribution of locally introduced bacteria reduced significantly (P

  13. Advances in functional X-ray imaging techniques and contrast agents

    PubMed Central

    Chen, Hongyu; Rogalski, Melissa M.

    2012-01-01

    X-rays have been used for non-invasive high-resolution imaging of thick biological specimens since their discovery in 1895. They are widely used for structural imaging of bone, metal implants, and cavities in soft tissue. Recently, a number of new contrast methodologies have emerged which are expanding X-ray’s biomedical applications to functional as well as structural imaging. These techniques promise to dramatically improve our ability to study in situ biochemistry and disease pathology. In this review, we discuss how X-ray absorption, X-ray fluorescence, and X-ray excited optical luminescence can be used for physiological, elemental, and molecular imaging of vasculature, tumours, pharmaceutical distribution, and the surface of implants. Imaging of endogenous elements, exogenous labels, and analytes detected with optical indicators will be discussed. PMID:22962667

  14. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    NASA Astrophysics Data System (ADS)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for understanding the long-term behavior of the climate system and its sensitivity to forcing changes. Unfortunately, such reconstructions are subject to large uncertainties, have to deal with a complex proxy-climate structure, and require a physically reasonable interpolation between sparse proxy observations. Bayesian Hierarchical Models (BHMs) are a class of statistical models well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named the transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen into a Bayesian framework. In addition, we can include Gaussian-distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records, which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ˜ 21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were performed in the PMIP3 project. The proxy data syntheses consist either of raw pollen data or of normally distributed climate data from preprocessed proxy records. Future extensions of our method include the inclusion of other proxy types (transfer functions), the implementation of other spatial interpolation techniques, the use of age uncertainties, and the extension to spatio-temporal reconstructions of the last deglaciation. Our work is part of the PalMod project funded by the German Federal Ministry of Education and Research (BMBF).
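    A toy Gaussian version of the three-stage logic, with all numbers invented: a prior mean and covariance for a five-cell temperature field stand in for the simulation-derived prior stage, noisy proxy-derived temperatures at two cells stand in for the data stage, and a conjugate normal-normal (Kalman-type) update stands in for the full Bayesian inference.

```python
import numpy as np

# Prior stage: mean and covariance of temperature at 5 grid cells, e.g.
# estimated from an ensemble of LGM climate simulations.
mu_prior = np.array([-8.0, -7.5, -6.0, -5.0, -4.5])
d = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
Sigma_prior = 4.0 * np.exp(-d / 2.0)        # smooth spatial prior structure

# Data stage: proxies give noisy local temperatures at cells 1 and 3.
H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 3] = 1.0
y = np.array([-9.0, -4.0])                  # proxy-derived local estimates
R = np.diag([1.0, 2.0])                     # proxy error variances

# Inference: the conjugate Gaussian (Kalman-type) update gives the
# posterior of the whole field, cells without proxies included.
S = H @ Sigma_prior @ H.T + R
K = Sigma_prior @ H.T @ np.linalg.inv(S)
mu_post = mu_prior + K @ (y - H @ mu_prior)
Sigma_post = Sigma_prior - K @ H @ Sigma_prior

print("posterior mean:", np.round(mu_post, 2))
print("posterior sd:  ", np.round(np.sqrt(np.diag(Sigma_post)), 2))
```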

  15. Techniques for Interface Stress Measurements within Prosthetic Sockets of Transtibial Amputees: A Review of the Past 50 Years of Research.

    PubMed

    Al-Fakih, Ebrahim A; Abu Osman, Noor Azuan; Mahmad Adikan, Faisal Rafiq

    2016-07-20

    The distribution of interface stresses between the residual limb and prosthetic socket of a transtibial amputee has been considered as a direct indicator of the socket quality fit and comfort. Therefore, researchers have been very interested in quantifying these interface stresses in order to evaluate the extent of any potential damage caused by the socket to the residual limb tissues. During the past 50 years a variety of measurement techniques have been employed in an effort to identify sites of excessive stresses which may lead to skin breakdown, compare stress distributions in various socket designs, and evaluate interface cushioning and suspension systems, among others. The outcomes of such measurement techniques have contributed to improving the design and fitting of transtibial sockets. This article aims to review the operating principles, advantages, and disadvantages of conventional and emerging techniques used for interface stress measurements inside transtibial sockets. It also reviews and discusses the evolution of different socket concepts and interface stress investigations conducted in the past five decades, providing valuable insights into the latest trends in socket designs and the crucial considerations for effective stress measurement tools that lead to a functional prosthetic socket.

  16. Techniques for Interface Stress Measurements within Prosthetic Sockets of Transtibial Amputees: A Review of the Past 50 Years of Research

    PubMed Central

    Al-Fakih, Ebrahim A.; Abu Osman, Noor Azuan; Mahmad Adikan, Faisal Rafiq

    2016-01-01

    The distribution of interface stresses between the residual limb and prosthetic socket of a transtibial amputee has been considered as a direct indicator of the socket quality fit and comfort. Therefore, researchers have been very interested in quantifying these interface stresses in order to evaluate the extent of any potential damage caused by the socket to the residual limb tissues. During the past 50 years a variety of measurement techniques have been employed in an effort to identify sites of excessive stresses which may lead to skin breakdown, compare stress distributions in various socket designs, and evaluate interface cushioning and suspension systems, among others. The outcomes of such measurement techniques have contributed to improving the design and fitting of transtibial sockets. This article aims to review the operating principles, advantages, and disadvantages of conventional and emerging techniques used for interface stress measurements inside transtibial sockets. It also reviews and discusses the evolution of different socket concepts and interface stress investigations conducted in the past five decades, providing valuable insights into the latest trends in socket designs and the crucial considerations for effective stress measurement tools that lead to a functional prosthetic socket. PMID:27447646

  17. Development of a New Time-Resolved Laser-Induced Fluorescence Technique

    NASA Astrophysics Data System (ADS)

    Durot, Christopher; Gallimore, Alec

    2012-10-01

    We are developing a time-resolved laser-induced fluorescence (LIF) technique to interrogate the ion velocity distribution function (VDF) of EP thruster plumes down to the microsecond time scale. Better measurements of dynamic plasma processes will lead to improvements in simulation and prediction of thruster operation and erosion. We present the development of the new technique and results of initial tests. Signal-to-noise ratio (SNR) is often a challenge for LIF studies, and it is only more challenging for time-resolved measurements since a lock-in amplifier cannot be used with a long time constant. The new system uses laser modulation on the order of MHz, which enables the use of electronic filtering and phase-sensitive detection to improve SNR while preserving time-resolved information. Statistical averaging over many cycles to further improve SNR is done in the frequency domain. This technique can have significant advantages, including (1) larger spatial maps enabled by shorter data acquisition time and (2) the ability to average data without creating a phase reference by modifying the thruster operating condition with a periodic cutoff in discharge current, which can modify the ion velocity distribution.
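    The principle of phase-sensitive detection with preserved time resolution can be sketched as I/Q demodulation, assuming a fluorescence signal amplitude-modulated at f_mod and buried in noise; the sampling rate, modulation frequency, filter cutoff, and synthetic signal below are all invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_mod = 50e6, 1e6            # 50 MS/s digitizer, 1 MHz laser modulation
t = np.arange(0, 2e-3, 1 / fs)

# Slowly varying fluorescence amplitude (the time-resolved quantity of
# interest) riding on the modulation, plus broadband noise.
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 1e3 * t)   # 1 kHz plasma dynamics
rng = np.random.default_rng(2)
sig = envelope * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 2, t.size)

# Phase-sensitive (I/Q) demodulation at f_mod, then a low-pass filter set
# well below f_mod but above the dynamics of interest.
i_mix = sig * np.sin(2 * np.pi * f_mod * t)
q_mix = sig * np.cos(2 * np.pi * f_mod * t)
b, a = butter(4, 100e3 / (fs / 2))                   # 100 kHz cutoff
amp = 2 * np.hypot(filtfilt(b, a, i_mix), filtfilt(b, a, q_mix))

print(f"recovered envelope range: {amp.min():.2f} to {amp.max():.2f}")
```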

  18. Immunogold scanning electron microscopy can reveal the polysaccharide architecture of xylem cell walls

    PubMed Central

    Sun, Yuliang; Juzenas, Kevin

    2017-01-01

    Abstract Immunofluorescence microscopy (IFM) and immunogold transmission electron microscopy (TEM) are the two main techniques commonly used to detect polysaccharides in plant cell walls. Both are important in localizing cell wall polysaccharides, but both have major limitations, such as low resolution in IFM and restricted sample size for immunogold TEM. In this study, we have developed a robust technique that combines immunocytochemistry with scanning electron microscopy (SEM) to study cell wall polysaccharide architecture in xylem cells at high resolution over large areas of sample. Using multiple cell wall monoclonal antibodies (mAbs), this immunogold SEM technique reliably localized groups of hemicellulosic and pectic polysaccharides in the cell walls of five different xylem structures (vessel elements, fibers, axial and ray parenchyma cells, and tyloses). This demonstrates its important advantages over the other two methods for studying cell wall polysaccharide composition and distribution in these structures. In addition, it can show the three-dimensional distribution of a polysaccharide group in the vessel lateral wall and the polysaccharide components in the cell wall of developing tyloses. This technique, therefore, should be valuable for understanding the cell wall polysaccharide composition, architecture and functions of diverse cell types. PMID:28398585

  19. Motor unit activity within the depth of the masseter characterized by an adapted scanning EMG technique.

    PubMed

    van Dijk, J P; Eiglsperger, U; Hellmann, D; Giannakopoulos, N N; McGill, K C; Schindler, H J; Lapatki, B G

    2016-09-01

    To study motor unit activity in the medio-lateral extension of the masseter using an adapted scanning EMG technique that allows studying the territories of multiple motor units (MUs) in one scan. We studied the m. masseter of 10 healthy volunteers, in whom two scans were performed. A monopolar scanning needle and two pairs of fine-wire electrodes were inserted into the belly of the muscle. The signals of the fine-wire electrodes were decomposed into the contributions of single MUs and used as a trigger for the scanning needle. In this manner multiple MU territory scans were obtained simultaneously. We determined 161 MU territories. The maximum number of territories obtained in one scan was 15. The median territory size was 4.0 mm. Larger and smaller MU territories were found throughout the muscle. The presented technique showed its feasibility in obtaining multiple MU territories in one scan. MUs were active throughout the depth of the muscle. The distribution of the electrical and anatomical sizes of MUs substantiates the heterogeneous distribution of MUs throughout the muscle volume. This distributed activity may be of functional significance for the stabilization of the muscle during force generation. Copyright © 2016 International Federation of Clinical Neurophysiology. All rights reserved.

  20. Extending the Distributed Lag Model framework to handle chemical mixtures.

    PubMed

    Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris

    2017-07-01

    Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce 2 techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
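    For orientation, here is a minimal sketch of the classical single-exposure distributed lag regression that the paper extends: build a design matrix of lagged exposures and regress the outcome on it, so each coefficient estimates the effect at one lag. The helper make_lag_matrix and the simulated data are invented; the paper's actual estimators are the WQS-based and Random-Forest-based techniques.

```python
import numpy as np

def make_lag_matrix(x, max_lag):
    """Design matrix whose columns are x lagged by 0..max_lag steps."""
    n = len(x) - max_lag
    return np.column_stack([x[max_lag - l : max_lag - l + n]
                            for l in range(max_lag + 1)])

rng = np.random.default_rng(3)
T, max_lag = 500, 4
x = rng.normal(size=T)                        # longitudinal exposure
true_lag_effects = np.array([0.0, 0.5, 1.0, 0.5, 0.0])

X = make_lag_matrix(x, max_lag)
y = X @ true_lag_effects + rng.normal(0, 0.5, size=X.shape[0])

# Ordinary least squares recovers the lag-effect profile.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated lag effects:", np.round(beta, 2))
```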

  1. Reconstruction method for inversion problems in an acoustic tomography based temperature distribution measurement

    NASA Astrophysics Data System (ADS)

    Liu, Sha; Liu, Shi; Tong, Guowei

    2017-11-01

    In industrial areas, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emission, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and mitigate the ill-posed nature of the AT inverse problem. By taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, an Adaboost.RT based BP neural network algorithm is developed for predicting the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and RA.
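    A compact sketch of a grey wolf optimizer of the kind used in the first phase, minimizing a generic least-squares cost; the quadratic stand-in cost, population size, iteration count, and bounds are arbitrary choices, not the paper's robust time-of-flight misfit.

```python
import numpy as np

def gwo(cost, dim, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=4):
    """Minimal grey wolf optimizer: the pack is pulled toward the three
    best solutions found so far (alpha, beta, delta wolves)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for it in range(iters):
        fitness = np.apply_along_axis(cost, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - it / iters)            # exploration factor 2 -> 0
        new_X = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random((2, n_wolves, dim))
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            D = np.abs(C * leader - X)
            new_X += (leader - A * D) / 3.0     # average of the three pulls
        X = np.clip(new_X, lb, ub)
    return X[np.argmin(np.apply_along_axis(cost, 1, X))]

# Stand-in cost: a simple least-squares misfit to a known parameter vector.
target = np.array([3.0, -1.5, 0.5])
cost = lambda v: np.sum((v - target) ** 2)
print("GWO estimate:", np.round(gwo(cost, dim=3), 3))
```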

  2. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, and some of its problems and the conditions under which it fails are discussed. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments: one gets posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
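    A small numerical sketch of the maximum entropy method of moments, assuming the first K sample moments are given and the density is represented on a bounded grid: minimizing the convex dual log Z(lambda) - lambda . mu yields the Lagrange multipliers of the maxent density. Grid limits, moment order, and the optimizer are arbitrary, and the Bayesian extension discussed in the abstract is not shown.

```python
import numpy as np
from scipy.optimize import minimize

# Grid representing the (bounded) support of the density.
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]

# Target: the first four moments of a sample from a skewed mixture.
rng = np.random.default_rng(5)
sample = np.concatenate([rng.normal(-1.0, 0.7, 6000),
                         rng.normal(1.5, 0.4, 4000)])
K = 4
mu = np.array([np.mean(sample ** k) for k in range(1, K + 1)])
powers = np.vstack([x ** k for k in range(1, K + 1)])    # (K, n_grid)

def dual(lam):
    # Convex dual: log Z(lam) - lam . mu is minimized at the Lagrange
    # multipliers of the maxent density p(x) ~ exp(sum_k lam_k x^k).
    logits = lam @ powers
    m = logits.max()
    return m + np.log(np.sum(np.exp(logits - m)) * dx) - lam @ mu

res = minimize(dual, np.zeros(K), method="BFGS")
logits = res.x @ powers
p = np.exp(logits - logits.max())
p /= np.sum(p) * dx                  # normalized maxent density on the grid
print("target moments:   ", np.round(mu, 3))
print("recovered moments:", np.round(powers @ p * dx, 3))
```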

  3. Action-angle formulation of generalized, orbit-based, fast-ion diagnostic weight functions

    NASA Astrophysics Data System (ADS)

    Stagner, L.; Heidbrink, W. W.

    2017-09-01

    Due to the usually complicated and anisotropic nature of the fast-ion distribution function, diagnostic velocity-space weight functions, which indicate the sensitivity of a diagnostic to different fast-ion velocities, are used to facilitate the analysis of experimental data. Additionally, when velocity-space weight functions are discretized, a linear equation relating the fast-ion density and the expected diagnostic signal is formed. In a technique known as velocity-space tomography, many measurements can be combined to create an ill-conditioned system of linear equations that can be solved using various computational methods. However, when velocity-space weight functions (which by definition ignore spatial dependencies) are used, velocity-space tomography is restricted, both by the accuracy of its forward model and by the availability of spatially overlapping diagnostic measurements. In this work, we extend velocity-space weight functions to a full 6D generalized coordinate system and then show how to reduce them to a 3D orbit space without loss of generality using an action-angle formulation. Furthermore, we show how diagnostic orbit-weight functions can be used to infer the full fast-ion distribution function, i.e., orbit tomography. In-depth derivations of orbit weight functions for the neutron, neutral particle analyzer, and fast-ion D-α diagnostics are also shown.
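    In discretized form, the weight-function relation is a linear system s = W f; a minimal sketch with an invented Gaussian weight matrix and zeroth-order Tikhonov regularization (via an augmented least-squares system) illustrates the kind of regularized solve that such tomography requires. None of this is the authors' inference code.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented problem: 40 measurements of a 100-bin fast-ion distribution.
n_bins, n_meas = 100, 40
centers = rng.uniform(0, 1, n_meas)
grid = np.linspace(0, 1, n_bins)
# Each row is a smooth, broad sensitivity (weight function) over the bins.
W = np.exp(-0.5 * ((grid - centers[:, None]) / 0.15) ** 2)

f_true = np.exp(-0.5 * ((grid - 0.4) / 0.1) ** 2)    # true distribution
s = W @ f_true + rng.normal(0, 0.01, n_meas)         # noisy signals

# Zeroth-order Tikhonov: minimize |W f - s|^2 + lam^2 |f|^2 via an
# augmented least-squares system.
lam = 0.1
A = np.vstack([W, lam * np.eye(n_bins)])
b = np.concatenate([s, np.zeros(n_bins)])
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative error:",
      np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```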

  4. Determination of statistics for any rotation of axes of a bivariate normal elliptical distribution. [of wind vector components

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Crutcher, H. L.

    1976-01-01

    Transformation of statistics from a dimensional set to another dimensional set involves linear functions of the original set of statistics. Similarly, linear functions will transform statistics within a dimensional set such that the new statistics are relevant to a new set of coordinate axes. A restricted case of the latter is the rotation of axes in a coordinate system involving any two correlated random variables. A special case is the transformation for horizontal wind distributions. Wind statistics are usually provided in terms of wind speed and direction (measured clockwise from north) or in east-west and north-south components. A direct application of this technique allows the determination of appropriate wind statistics parallel and normal to any preselected flight path of a space vehicle. Among the constraints for launching space vehicles are critical values selected from the distribution of the expected winds parallel to and normal to the flight path. These procedures are applied to space vehicle launches at Cape Kennedy, Florida.
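    The transformation itself is the standard linear one: with mean vector m and covariance matrix S in east-north components, the statistics in axes rotated by theta are R m and R S R^T. A short sketch with invented wind statistics:

```python
import numpy as np

# Invented east (u) / north (v) wind statistics at some altitude.
m = np.array([5.0, 2.0])                   # mean wind components, m/s
S = np.array([[9.0, 3.0],                  # covariance of (u, v), (m/s)^2
              [3.0, 4.0]])

# Rotate the axes by theta so the new first axis lies along the flight path.
theta = np.deg2rad(30.0)
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

m_rot = R @ m                              # mean parallel / normal to path
S_rot = R @ S @ R.T                        # covariance in the rotated axes

print("mean (parallel, normal):", np.round(m_rot, 2))
print("std  (parallel, normal):", np.round(np.sqrt(np.diag(S_rot)), 2))
print("correlation after rotation:",
      round(S_rot[0, 1] / np.sqrt(S_rot[0, 0] * S_rot[1, 1]), 3))
```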

  5. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
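    The core ingredient can be illustrated with plain Gaussian-process regression: conditioning on a few high-fidelity samples yields a posterior mean and variance at new inputs. The RBF kernel, jitter, and data below are invented, and the paper's method additionally folds in the low-fidelity model and reduced basis approximation, which this sketch omits.

```python
import numpy as np

def rbf(a, b, ell=0.4, sig=1.0):
    """Squared-exponential (RBF) kernel on 1-D inputs."""
    return sig**2 * np.exp(-0.5 * (np.subtract.outer(a, b) / ell) ** 2)

# A few expensive "high-fidelity" evaluations of an input-output map.
x_train = np.array([0.05, 0.3, 0.55, 0.8, 0.95])
y_train = np.sin(6 * x_train) + 0.05 * np.random.default_rng(7).normal(size=5)

x_test = np.linspace(0, 1, 5)
K = rbf(x_train, x_train) + 1e-4 * np.eye(len(x_train))   # noise jitter
K_star = rbf(x_test, x_train)

# GP posterior mean and variance at the test inputs.
alpha = np.linalg.solve(K, y_train)
mean = K_star @ alpha
var = np.diag(rbf(x_test, x_test) - K_star @ np.linalg.solve(K, K_star.T))

for xt, mn, v in zip(x_test, mean, var):
    print(f"x={xt:.2f}: prediction {mn:+.3f} +/- {np.sqrt(v):.3f}")
```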

  6. Self-calibration of photometric redshift scatter in weak-lensing surveys

    DOE PAGES

    Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary

    2010-06-11

    Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors nor parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.

  7. From Maximum Entropy Models to Non-Stationarity and Irreversibility

    NASA Astrophysics Data System (ADS)

    Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar

    The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique to find the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transformation of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is determinant for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks, and show how to evaluate this impact on any higher-order spatio-temporal correlation. RC is supported by ERC advanced grant "Bridges"; BC by KEOPS ANR-CONICYT and Renvision; and CM by CONICYT-FONDECYT No. 3140572.

  8. Localization and function of ATP-sensitive potassium channels in human skeletal muscle.

    PubMed

    Nielsen, Jens Jung; Kristensen, Michael; Hellsten, Ylva; Bangsbo, Jens; Juel, Carsten

    2003-02-01

    The present study investigated the localization of ATP-sensitive K+ (KATP) channels in human skeletal muscle and the functional importance of these channels for human muscle K+ distribution at rest and during muscle activity. Membrane fractionation based on the giant vesicle technique or the sucrose-gradient technique in combination with Western blotting demonstrated that the KATP channels are mainly located in the sarcolemma. This localization was confirmed by immunohistochemical measurements. With the microdialysis technique, it was demonstrated that local application of the KATP channel inhibitor glibenclamide reduced (P < 0.05) interstitial K+ at rest from approximately 4.5 to 4.0 mM, whereas the concentration in the control leg remained constant. Glibenclamide had no effect on the interstitial K+ accumulation during knee-extensor exercise at a power output of 60 W. In contrast to in vitro conditions, the present study demonstrated that under in vivo conditions the KATP channels are active at rest and contribute to the accumulation of interstitial K+.

  9. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  10. A High-Speed, Real-Time Visualization and State Estimation Platform for Monitoring and Control of Electric Distribution Systems: Implementation and Field Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta

    Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.

  11. Magnetotail Structure and its Internal Particle Dynamics During Northward IMF

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; Raeder, J.; El-Alaoui, M.; Peroomian, V.

    1998-01-01

    This study uses Global magnetohydrodynamic (MHD) simulations driven by solar wind data along with Geotail observations of the magnetotail to investigate the magnetotail's response to changes in the interplanetary magnetic field (IMF); observed events used in the study occurred on March 29, 1993 and February 9, 1995. For events from February 9, 1995, we also use the time-dependent MHD magnetic and electric fields and the large-scale kinetic (LSK) technique to examine changes in the Geotail ion velocity distributions. Our MHD simulation shows that on March 29, 1993, during a long period of steady northward IMF, the tail was strongly squeezed and twisted around the Sun-Earth axis in response to variations in the IMF B(sub y) component. The mixed (magnetotail and magnetosheath) plasma observed by Geotail results from the spacecraft's close proximity to the magnetopause and its frequent crossings of this boundary. In our second example (February 9, 1995) the IMF was also steady and northward, and in addition had a significant B(sub y) component. Again the magnetotail was twisted, but not as strongly as on March 29, 1993. The Geotail spacecraft, located approximately 30 R(sub E) downtail, observed highly structured ion distribution functions. Using the time-dependent LSK technique, we investigate the ion sources and acceleration mechanisms affecting the Geotail distribution functions during this interval. At 1325 UT most ions are found to enter the magnetosphere on the dusk side earthward of Geotail with a secondary source on the dawn side in the low latitude boundary layer (LLBL). A small percentage come from the ionosphere. By 1347 UT the majority of the ions come from the dawn side LLBL. The distribution functions measured during the later time interval are much warmer, mainly because particles reaching the spacecraft from the dawn side are affected by nonadiabatic scattering and acceleration in the neutral sheet.

  12. Magnetotail Structure and its Internal Particle Dynamics During Northward IMF

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalia, M.; El-Alaoui, M.; Peroomian, V.

    1998-01-01

    This study uses Global magnetohydrodynamic (MHD) simulations driven by solar wind data along with Geotail observations of the magnetotail to investigate the magnetotail's response to changes in the interplanetary magnetic field (IMF); observed events used in the study occurred on March 29, 1993 and February 9, 1995. For events from February 9, 1995, we also use the time-dependent MHD magnetic and electric fields and the large-scale kinetic (LSK) technique to examine changes in the Geotail ion velocity distributions. Our MHD simulation shows that on March 29, 1993, during a long period of steady northward IMF, the tail was strongly squeezed and twisted around the Sun-Earth axis in response to variations in the IMF B(sub y) component. The mixed (magnetotail and magnetosheath) plasma observed by Geotail results from the spacecraft's close proximity to the magnetopause and its frequent crossings of this boundary. In our second example (February 9, 1995) the IMF was also steady and northward, and in addition had a significant B(sub y) component. Again the magnetotail was twisted, but not as strongly as on March 29, 1993. The Geotail spacecraft, located approximately 30 R(sub E) downtail, observed highly structured ion distribution functions. Using the time-dependent LSK technique, we investigate the ion sources and acceleration mechanisms affecting the Geotail distribution functions during this interval. At 1325 UT most ions are found to enter the magnetosphere on the dusk side earthward of Geotail with a secondary source on the dawn side in the low latitude boundary layer (LLBL). A small percentage come from the ionosphere. By 1347 UT the majority of the ions come from the dawn side LLBL. The distribution functions measured during the later time interval are much warmer, mainly because particles reaching the spacecraft from the dawnside are affected by nonadiabatic scattering and acceleration in the neutral sheet.

  13. Analysis of Lunar Seismic Signals: Determination of Instrumental Parameters and Seismic Velocity Distributions. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Horvath, P.

    1979-01-01

    Inverse filters were designed to correct the effect of instrumental response, coupling of the seismometer to the ground, and near surface structures. The least squares technique was used to determine the instrumental constants and the transfer functions of the long period lunar seismographs. The influence of noise and the results of these calculations are discussed.

  14. Estimation of discontinuous coefficients in parabolic systems: Applications to reservoir simulation

    NASA Technical Reports Server (NTRS)

    Lamm, P. D.

    1984-01-01

    Spline based techniques for estimating spatially varying parameters that appear in parabolic distributed systems (typical of those found in reservoir simulation problems) are presented. The problem of determining discontinuous coefficients, estimating both the functional shape and points of discontinuity for such parameters is discussed. Convergence results and a summary of numerical performance of the resulting algorithms are given.

  15. Radiation dose in temporomandibular joint zonography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coucke, M.E.; Bourgoignie, R.R.; Dermaut, L.R.

    1991-06-01

    Temporomandibular joint morphology and function can be evaluated by panoramic zonography. Thermoluminescent dosimetry was applied to evaluate the radiation dose to predetermined sites on a phantom eye, thyroid, pituitary, and parotid, and the dose distribution on the skin of the head and neck when the TMJ program of the Zonarc panoramic x-ray unit was used. Findings are discussed with reference to similar radiographic techniques.

  16. The impact of vessel size on vulnerability curves: data and models for within-species variability in saplings of aspen, Populus tremuloides Michx.

    PubMed

    Cai, Jing; Tyree, Melvin T

    2010-07-01

    The objective of this study was to quantify the relationship between vulnerability to cavitation and vessel diameter within a species. We measured vulnerability curves (VCs: percentage loss of hydraulic conductivity versus tension) in aspen stems and measured vessel-size distributions. Measurements were done on seed-grown, 4-month-old aspen (Populus tremuloides Michx) grown in a greenhouse. VCs of stem segments were measured using a centrifuge technique and by a staining technique that allowed a VC to be constructed based on vessel diameter size-classes (D). Vessel-based VCs were also fitted to Weibull cumulative distribution functions (CDF), which provided best-fit values of the Weibull CDF constants (c and b) and P50 = the tension causing 50% loss of hydraulic conductivity. We show that P50 = 6.166 D^(-0.3134) (R^2 = 0.995) and that b and 1/c are both linear functions of D with R^2 > 0.95. The results are discussed in terms of models of VCs based on vessel D size-classes and in terms of concepts such as the 'pit area hypothesis' and vessel pathway redundancy.

  17. Model Checking Techniques for Assessing Functional Form Specifications in Censored Linear Regression Models.

    PubMed

    León, Larry F; Cai, Tianxi

    2012-04-01

    In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power of detecting misspecification while at the same time controlling the size of the test.

  18. Eye investigation with optical microradar techniques

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.; Kurashov, Vitalij N.; Chyzh, Igor H.

    1997-08-01

    Many problems exist in ophthalmology where accurate measurements of eye structure and its parameters can be provided using the optical radar concept of remote sensing. Coherent and non-coherent approaches are reviewed, aimed at cornea shape measurement and at measurement of the aberration distribution in the elements and media of the eye. Coherent radar techniques are analyzed taking into account the non-reciprocity of eye media and the anisoplanatism of the fovea, which result in the exiting image not being an auto-correlation of the single-pass point-spread function, even in the approximation of spatial invariance of the system. It is found that the aberrations of the cornea and lens are not additive and may not be reduced to summary aberrations at the entrance aperture of the lens. Anisoplanatism of the fovea and its roughness lead to a low degree of coherence in the scattered light. To estimate the results of measurements, a methodology has been developed using Zernike polynomial expansions. Aberration distributions were obtained from measurements at 16 points of the eye situated on two concentric circles. Wave aberration functions were approximated using a least-squares criterion. Thus, all data necessary for cornea ablation with the PRK procedure were provided.
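    The least-squares Zernike fit mentioned above can be sketched directly, assuming wavefront samples at 16 points on two concentric pupil circles; the six low-order Zernike terms, the synthetic defocus-plus-astigmatism wavefront, and the noise level are illustrative assumptions.

```python
import numpy as np

# A few low-order Zernike polynomials on the unit pupil (r, theta).
def zernike_basis(r, th):
    return np.column_stack([
        np.ones_like(r),                      # piston
        2 * r * np.cos(th),                   # tilt x
        2 * r * np.sin(th),                   # tilt y
        np.sqrt(3) * (2 * r**2 - 1),          # defocus
        np.sqrt(6) * r**2 * np.cos(2 * th),   # astigmatism 0/90
        np.sqrt(6) * r**2 * np.sin(2 * th),   # astigmatism 45
    ])

# 16 measurement points on two concentric circles, as in the paper.
r = np.repeat([0.5, 0.9], 8)
th = np.tile(np.linspace(0, 2 * np.pi, 8, endpoint=False), 2)

# Invented measured wavefront: mostly defocus plus some astigmatism.
rng = np.random.default_rng(8)
w = 0.8 * np.sqrt(3) * (2 * r**2 - 1) \
    + 0.3 * np.sqrt(6) * r**2 * np.cos(2 * th) + rng.normal(0, 0.02, 16)

# Least-squares Zernike coefficients from the sampled wavefront.
Z = zernike_basis(r, th)
coeffs, *_ = np.linalg.lstsq(Z, w, rcond=None)
print("Zernike coefficients:", np.round(coeffs, 3))
```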

  19. Continuous wave cavity ring down spectroscopy measurements of velocity distribution functions of argon ions in a helicon plasma.

    PubMed

    Chakraborty Thakur, Saikat; McCarren, Dustin; Carr, Jerry; Scime, Earl E

    2012-02-01

    We report continuous wave cavity ring down spectroscopy (CW-CRDS) measurements of ion velocity distribution functions (VDFs) in a low pressure argon helicon plasma (magnetic field strength of 600 G, Te ≈ 4 eV, and n ≈ 5 × 10^11 cm^-3). Laser induced fluorescence (LIF) is routinely used to measure VDFs of argon ions, argon neutrals, helium neutrals, and xenon ions in helicon sources. Here, we describe a CW-CRDS diagnostic based on a narrow line width, tunable diode laser as an alternative technique to measure VDFs in similar regimes where LIF is inapplicable. Being an ultra-sensitive, cavity enhanced absorption spectroscopic technique, CW-CRDS can also provide a direct quantitative measurement of the absolute metastable state density. The proof-of-principle CW-CRDS measurements presented here are of the Doppler broadened absorption spectrum of Ar II at 668.6138 nm. Extrapolating from these initial measurements, it is expected that this diagnostic is suitable for neutrals and ions in plasmas ranging in density from 1 × 10^9 cm^-3 to 1 × 10^13 cm^-3 and target species temperatures less than 20 eV.

  20. Validation Tests of Fiber Optic Strain-Based Operational Shape and Load Measurements

    NASA Technical Reports Server (NTRS)

    Bakalyar, John A.; Jutte, Christine

    2012-01-01

    Aircraft design has been progressing toward reduced structural weight to improve fuel efficiency, increase performance, and reduce cost. Lightweight aircraft structures are more flexible than conventional designs and require new design considerations. Intelligent sensing allows for enhanced control and monitoring of aircraft, which enables increased structural efficiency. The NASA Dryden Flight Research Center (DFRC) has developed an instrumentation system and analysis techniques that combine to make distributed structural measurements practical for lightweight vehicles. Dryden's Fiber Optic Strain Sensing (FOSS) technology enables a multitude of lightweight, distributed surface strain measurements. The analysis techniques, referred to as the Displacement Transfer Functions (DTF) and Load Transfer Functions (LTF), use surface strain values to calculate structural deflections and operational loads. The combined system is useful for real-time monitoring of aeroelastic structures, along with many other applications. This paper describes how the capabilities of the measurement system were demonstrated using subscale test articles that represent simple aircraft structures. Empirical FOSS strain data were used within the DTF to calculate the displacement of the article and within the LTF to calculate bending moments due to loads acting on the article. The results of the tests, the accuracy of the measurements, and a sensitivity analysis are presented.

  1. Continuous wave cavity ring down spectroscopy measurements of velocity distribution functions of argon ions in a helicon plasma

    NASA Astrophysics Data System (ADS)

    Chakraborty Thakur, Saikat; McCarren, Dustin; Carr, Jerry; Scime, Earl E.

    2012-02-01

    We report continuous wave cavity ring down spectroscopy (CW-CRDS) measurements of ion velocity distribution functions (VDFs) in a low pressure argon helicon plasma (magnetic field strength of 600 G, Te ≈ 4 eV and n ≈ 5 × 10^11 cm^-3). Laser induced fluorescence (LIF) is routinely used to measure VDFs of argon ions, argon neutrals, helium neutrals, and xenon ions in helicon sources. Here, we describe a CW-CRDS diagnostic based on a narrow line width, tunable diode laser as an alternative technique to measure VDFs in similar regimes where LIF is inapplicable. Being an ultra-sensitive, cavity enhanced absorption spectroscopic technique, CW-CRDS can also provide a direct quantitative measurement of the absolute metastable state density. The proof-of-principle CW-CRDS measurements presented here are of the Doppler broadened absorption spectrum of Ar II at 668.6138 nm. Extrapolating from these initial measurements, it is expected that this diagnostic is suitable for neutrals and ions in plasmas ranging in density from 1 × 10^9 cm^-3 to 1 × 10^13 cm^-3 and target species temperatures less than 20 eV.

  2. Novel dynamic caching for hierarchically distributed video-on-demand systems

    NASA Astrophysics Data System (ADS)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that always supports all special playback functions for all available programs, with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment, and is based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

  3. Using spatial information about recurrence risk for robust optimization of dose-painting prescription functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.

    Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control when compared to the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.

  4. Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Knox, Lenora A.

    The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges; which include, defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) opposed to being contained in a single UAV (monolithic). The case study based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.

  5. Inverse measurement of wall pressure field in flexible-wall wind tunnels using global wall deformation data

    NASA Astrophysics Data System (ADS)

    Brown, Kenneth; Brown, Julian; Patil, Mayuresh; Devenport, William

    2018-02-01

    The Kevlar-wall anechoic wind tunnel offers great value to the aeroacoustics research community, affording the capability to make simultaneous aeroacoustic and aerodynamic measurements. While the aeroacoustic potential of the Kevlar-wall test section is already being leveraged, the aerodynamic capability of these test sections is still to be fully realized. The flexibility of the Kevlar walls suggests the possibility that the internal test section flow may be characterized by precisely measuring small deflections of the flexible walls. Treating the Kevlar fabric walls as tensioned membranes with known pre-tension and material properties, an inverse stress problem arises where the pressure distribution over the wall is sought as a function of the measured wall deflection. Experimental wall deformations produced by the wind loading of an airfoil model are measured using digital image correlation and subsequently projected onto polynomial basis functions which have been formulated to mitigate the impact of measurement noise based on a finite-element study. Inserting analytic derivatives of the basis functions into the equilibrium relations for a membrane, full-field pressure distributions across the Kevlar walls are computed. These inversely calculated pressures, after being validated against an independent measurement technique, can then be integrated along the length of the test section to give the sectional lift of the airfoil. Notably, these first-time results are achieved with a non-contact technique and in an anechoic environment.
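    A schematic one-dimensional version of the inverse membrane problem, assuming a tensioned strip with known tension T: fit the measured deflection with a smooth polynomial basis, then evaluate p(x) = -T w''(x) from the analytic second derivative of the fit. The real problem is two-dimensional with a noise-aware basis; the tension value, basis degree, and synthetic deflection here are invented.

```python
import numpy as np

T = 400.0                                  # membrane tension, N/m (invented)
x = np.linspace(0, 1, 200)                 # span coordinate, m

# Synthetic "measured" deflection under a smooth pressure load, with noise.
rng = np.random.default_rng(9)
w_meas = 1e-3 * np.sin(np.pi * x) + rng.normal(0, 2e-6, x.size)

# Project the deflection onto a low-order polynomial basis (noise control).
coeffs = np.polynomial.polynomial.polyfit(x, w_meas, deg=6)

# Analytic second derivative of the fitted polynomial gives the pressure:
# for a 1-D tensioned membrane in equilibrium, T * w''(x) = -p(x).
d2 = np.polynomial.polynomial.polyder(coeffs, 2)
p = -T * np.polynomial.polynomial.polyval(x, d2)

print(f"peak recovered pressure: {p.max():.2f} Pa")
```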

  6. Current Status and Future Perspectives of Mass Spectrometry Imaging

    PubMed Central

    Nimesh, Surendra; Mohottalage, Susantha; Vincent, Renaud; Kumarathasan, Prem

    2013-01-01

    Mass spectrometry imaging is employed for mapping proteins, lipids and metabolites in biological tissues in a morphological context. Although initially developed as a tool for biomarker discovery by imaging the distribution of protein/peptide in tissue sections, the high sensitivity and molecular specificity of this technique have enabled its application to biomolecules, other than proteins, even in cells, latent finger prints and whole organisms. Relatively simple, with no requirement for labelling, homogenization, extraction or reconstitution, the technique has found a variety of applications in molecular biology, pathology, pharmacology and toxicology. By discriminating the spatial distribution of biomolecules in serial sections of tissues, biomarkers of lesions and the biological responses to stressors or diseases can be better understood in the context of structure and function. In this review, we have discussed the advances in the different aspects of mass spectrometry imaging processes, application towards different disciplines and relevance to the field of toxicology. PMID:23759983

  7. PROFILE user's guide

    NASA Technical Reports Server (NTRS)

    Collins, L.; Saunders, D.

    1986-01-01

    User information for program PROFILE, an aerodynamic design utility for refining, plotting, and tabulating airfoil profiles, is provided. The theory and implementation details for two of the more complex options are also presented. These are the REFINE option, for smoothing curvature in selected regions while retaining or seeking some specified thickness ratio, and the OPTIMIZE option, which seeks a specified curvature distribution. REFINE uses linear techniques to manipulate ordinates via the central difference approximation to second derivatives, while OPTIMIZE works directly with curvature using nonlinear least squares techniques. Use of programs QPLOT and BPLOT is also described, since all of the plots provided by PROFILE (airfoil coordinates, curvature distributions) are produced via the general-purpose QPLOT utility. BPLOT illustrates (again, via QPLOT) the shape functions used by two of PROFILE's options. The programs were designed and implemented for the Applied Aerodynamics Branch at NASA Ames Research Center, Moffett Field, California; they are written in FORTRAN and run on a VAX-11/780 under VMS.
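
    A minimal sketch of the central-difference second-derivative evaluation that a REFINE-like option can use as a curvature proxy; the airfoil ordinates here are synthetic, and the endpoint treatment is an arbitrary choice for illustration.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 51)              # chordwise stations, uniform spacing
y = 0.12 * np.sin(np.pi * x) ** 1.5        # illustrative thickness distribution

h = x[1] - x[0]
ypp = np.empty_like(y)
ypp[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2   # central differences
ypp[0], ypp[-1] = ypp[1], ypp[-2]                     # copy neighbors at ends

# Exact curvature would be ypp / (1 + yp**2)**1.5; for thin sections the
# second derivative alone is often the quantity that gets smoothed.
print(ypp[:5])
```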

  8. Distributed Adaptive Neural Network Output Tracking of Leader-Following High-Order Stochastic Nonlinear Multiagent Systems With Unknown Dead-Zone Input.

    PubMed

    Hua, Changchun; Zhang, Liuliu; Guan, Xinping

    2017-01-01

    This paper studies the problem of distributed output tracking consensus control for a class of high-order stochastic nonlinear multiagent systems with unknown nonlinear dead-zone under a directed graph topology. The adaptive neural networks are used to approximate the unknown nonlinear functions and a new inequality is used to deal with the completely unknown dead-zone input. Then, we design the controllers based on backstepping method and the dynamic surface control technique. It is strictly proved that the resulting closed-loop system is stable in probability in the sense of semiglobally uniform ultimate boundedness and the tracking errors between the leader and the followers approach to a small residual set based on Lyapunov stability theory. Finally, two simulation examples are presented to show the effectiveness and the advantages of the proposed techniques.

  9. Quantifying Uncertainties in the Thermo-Mechanical Properties of Particulate Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Murthy, Pappu L. N.

    1999-01-01

    The present paper reports results from a computational simulation of probabilistic particulate reinforced composite behavior. The approach consists of the use of simplified micromechanics of particulate reinforced composites together with a Fast Probability Integration (FPI) technique. Sample results are presented for an Al/SiC(sub p) (silicon carbide particles in aluminum matrix) composite. The probability density functions for composite moduli, thermal expansion coefficient and thermal conductivities, along with their sensitivity factors, are computed. The effect of different assumed distributions and the effect of reducing scatter in constituent properties on the thermal expansion coefficient are also evaluated. The variations in the constituent properties that directly affect these composite properties are accounted for by assumed probabilistic distributions. The results show that the present technique provides valuable information about the scatter in composite properties and sensitivity factors, which are useful to test or design engineers.
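
    The abstract's ingredients, assumed constituent distributions propagated through simplified micromechanics, can be illustrated with a plain Monte Carlo sketch; the actual FPI technique is a fast approximation, not brute-force sampling, and all distribution parameters below are assumed, not the paper's inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

E_p = rng.normal(410.0, 20.0, n)   # SiC particle modulus [GPa], assumed scatter
E_m = rng.normal(70.0, 3.5, n)     # Al matrix modulus [GPa], assumed scatter
V_p = rng.normal(0.30, 0.02, n)    # particle volume fraction, assumed scatter

E_c = V_p * E_p + (1.0 - V_p) * E_m   # Voigt (rule-of-mixtures) estimate

print(f"mean = {E_c.mean():.1f} GPa, std = {E_c.std():.1f} GPa")
# Crude sensitivity factors: correlation of each input with the output
for name, var in [("E_p", E_p), ("E_m", E_m), ("V_p", V_p)]:
    print(name, round(np.corrcoef(var, E_c)[0, 1], 2))
```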

  10. Two-boundary grid generation for the solution of the three dimensional compressible Navier-Stokes equations. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1981-01-01

    A grid generation technique called the two-boundary technique is developed and applied for the solution of the three-dimensional Navier-Stokes equations. The Navier-Stokes equations are transformed from a Cartesian coordinate system to a computational coordinate system, and the grid generation technique provides the Jacobian matrix describing the transformation. The two-boundary technique is based on algebraically defining two distinct boundaries of a flow domain, and the distribution of the grid is achieved by applying functions to the uniform computational grid which redistribute the computational independent variables and consequently concentrate or disperse the grid points in the physical domain. The Navier-Stokes equations are solved using a MacCormack time-split technique. Grids and supersonic laminar flow solutions are obtained for a family of three-dimensional corners and two spike-nosed bodies.
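
    A minimal sketch of the two-boundary idea: blend two algebraically defined boundary curves and apply a redistribution function to the uniform computational coordinate so grid points cluster near one boundary. The boundary shapes and the tanh stretching are illustrative choices, not the dissertation's exact functions.

```python
import numpy as np

ni, nj = 41, 21
xi = np.linspace(0.0, 1.0, ni)     # computational coordinate along the boundaries
eta = np.linspace(0.0, 1.0, nj)    # computational coordinate between them

def lower(xi):   # lower boundary curve (e.g., body surface), assumed shape
    return np.stack([xi, 0.05 * np.sin(np.pi * xi)], axis=-1)

def upper(xi):   # upper boundary curve (e.g., outer domain), assumed shape
    return np.stack([xi, np.ones_like(xi)], axis=-1)

beta = 3.0       # clustering strength
s = 1.0 + np.tanh(beta * (eta - 1.0)) / np.tanh(beta)   # s(0)=0, s(1)=1

P0, P1 = lower(xi), upper(xi)
grid = P0[None, :, :] + s[:, None, None] * (P1 - P0)[None, :, :]
print(grid.shape)   # (nj, ni, 2): physical (x, y) at each computational node
```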

  11. A digital protection system incorporating knowledge based learning

    NASA Astrophysics Data System (ADS)

    Watson, Karan; Russell, B. Don; McCall, Kurt

    A digital system architecture used to diagnose the operating state and health of electric distribution lines and to generate actions for line protection is presented. The architecture is described functionally and, to a limited extent, at the hardware level. This architecture incorporates multiple analysis and fault-detection techniques utilizing a variety of parameters. In addition, a knowledge-based decision maker, a long-term memory retention and recall scheme, and a learning environment are described. Preliminary laboratory implementations of the system elements have been completed. Enhanced protection for electric distribution feeders is provided by this system. Advantages of the system are enumerated.

  12. Monitoring of fluid motion in a micromixer by dynamic NMR microscopy.

    PubMed

    Ahola, Susanna; Casanova, Federico; Perlo, Juan; Münnemann, Kerstin; Blümich, Bernhard; Stapf, Siegfried

    2006-01-01

    The velocity distribution of liquid flowing in a commercial micromixer has been determined directly by using pulsed-field gradient NMR. Velocity maps with a spatial resolution of 29 microm x 43 microm were obtained by combining standard imaging gradient units with a homebuilt rectangular surface coil matching the mixer geometry. The technique provides access to mixers and reactors of arbitrary shape regardless of optical transparency. Local heterogeneities in the signal intensity and the velocity pattern were found and serve to investigate the quality and functionality of a micromixer, revealing clogging and inhomogeneous flow distributions.

  13. Dynamical complexity changes during two forms of meditation

    NASA Astrophysics Data System (ADS)

    Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng

    2011-06-01

    Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes in heart rate variability (HRV) series during specific traditional forms of Chinese Chi and Kundalini Yoga meditation techniques in healthy young adults. The results show that dynamical complexity decreases in meditation states for both forms of meditation. Meanwhile, we detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of a sine function. The base-scale entropy method may be used on a wider range of physiologic signals.
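
    A simplified sketch of the m-word statistics underlying base-scale-entropy-style analysis: symbolize each short window and compute the Shannon entropy of the word distribution. The binary symbolization used here is a simplification; the published method scales deviations by a base scale computed from successive differences within each window.

```python
import numpy as np

def word_entropy(x, m=4):
    """Shannon entropy of the distribution of binary m-word symbols."""
    counts = {}
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        sym = tuple((w > w.mean()).astype(int))   # symbolize the window
        counts[sym] = counts.get(sym, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
print(word_entropy(rng.standard_normal(5000)))              # irregular: higher
print(word_entropy(np.sin(np.linspace(0.0, 100.0, 5000))))  # regular: lower
```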

  14. Edge effects in angle-ply composite laminates

    NASA Technical Reports Server (NTRS)

    Hsu, P. W.; Herakovich, C. T.

    1977-01-01

    This paper presents the results of a zeroth-order solution for edge effects in angle-ply composite laminates obtained using perturbation techniques and a limiting free body approach. The general solution for edge effects in laminates of arbitrary angle ply is applied to the special case of a (+ or - 45)s graphite/epoxy laminate. Interlaminar stress distributions are obtained as a function of the laminate thickness-to-width ratio and compared to finite difference results. The solution predicts stable, continuous stress distributions, determines finite maximum tensile interlaminar normal stress and provides mathematical evidence for singular interlaminar shear stresses in (+ or - 45) graphite/epoxy laminates.

  15. Unobtrusive monitoring of heart rate using a cost-effective speckle-based SI-POF remote sensor

    NASA Astrophysics Data System (ADS)

    Pinzón, P. J.; Montero, D. S.; Tapetado, A.; Vázquez, C.

    2017-03-01

    A novel speckle-based sensing technique for cost-effective heart-rate monitoring is demonstrated. This technique detects periodical changes in the spatial distribution of energy on the speckle pattern at the output of a Step-Index Polymer Optical Fiber (SI-POF) lead by using a low-cost webcam. The scheme operates in reflective configuration thus performing a centralized interrogation unit scheme. The prototype has been integrated into a mattress and its functionality has been tested with 5 different patients lying on the mattress in different positions without direct contact with the fiber sensing lead.

  16. Particle Line Assembly/Patterning by Microfluidic AC Electroosmosis

    NASA Astrophysics Data System (ADS)

    Lian, Meng; Islam, Nazmul; Wu, Jie

    2006-04-01

    Recently, AC electroosmosis has attracted research interest worldwide. This paper is the first to investigate particle line assembly/patterning by AC electroosmosis. Since the AC electroosmotic force has no dependence on particle size, this technique is particularly useful for manipulating nanoscale substances and may ultimately enable the construction of functional nanoscale devices. Two types of ACEO devices, in the configurations of planar interdigitated electrodes and parallel plate electrodes, and a biased ACEO technique are studied; the biased technique provides added flexibility in particle manipulation and line assembly. The paper also investigates the effects of electric field distributions on generating microflows for particle assembly. The results are corroborated experimentally.

  17. Design methodology and results evaluation of a heating functionality in modular lab-on-chip systems

    NASA Astrophysics Data System (ADS)

    Streit, Petra; Nestler, Joerg; Shaporin, Alexey; Graunitz, Jenny; Otto, Thomas

    2018-06-01

    Lab-on-a-chip (LoC) systems offer the opportunity of fast and customized biological analyses executed at the 'point-of-need' without expensive lab equipment. Some biological processes need a temperature treatment. Therefore, it is important to ensure a defined and stable temperature distribution in the biosensor area. An integrated heating functionality is realized with discrete resistive heating elements including temperature measurement. The focus of this contribution is a design methodology and evaluation technique for the temperature distribution in the biosensor area with regard to the thermal-electrical behaviour of the heat sources. Furthermore, a sophisticated control of the biosensor temperature is proposed. A finite element (FE) model with one and more integrated heat sources in a polymer-based LoC system is used to investigate the impact of the number and arrangement of heating elements on the temperature distribution around the heating elements and in the biosensor area. Based on this model, various LoC systems are designed and fabricated. Electrical characterization of the heat sources and independent temperature measurements with an infrared technique are performed to verify the model parameters and prove the simulation approach. The FE model and the proposed methodology are the foundation for optimization and evaluation of new designs with regard to the temperature requirements of the biosensor. Furthermore, a linear dependency of the heater temperature on the electric current is demonstrated in the targeted temperature range of 20 °C to 70 °C, enabling use of the heating functionality for biological reactions requiring a steady-state temperature up to 70 °C. The correlation between the heater and biosensor area temperature is derived for direct control through the heating current.

  18. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation, we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared with the application of the HMC algorithm on the same structures.
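
    For orientation, a minimal standard HMC sketch (leapfrog trajectory plus Metropolis accept) on a 2D Gaussian target is given below; SHMC differs in that the accept step evaluates a shadow Hamiltonian, which is omitted here. The target, step size, and trajectory length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
C_inv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 2.0]]))  # target precision

def U(q):      return 0.5 * q @ C_inv @ q    # negative log posterior
def grad_U(q): return C_inv @ q

def hmc_step(q, eps=0.15, L=20):
    p = rng.standard_normal(q.size)          # momentum refresh
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)       # leapfrog (MD) trajectory
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q   # Metropolis accept

q, samples = np.zeros(2), []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.cov(np.array(samples).T))   # approaches the target covariance
```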

  19. Coal lithotypes before and after saturation with CO2; insights from micro- and mesoporosity, fluidity, and functional group distribution

    USGS Publications Warehouse

    Mastalerz, Maria; Drobniak, A.; Walker, R.; Morse, D.

    2010-01-01

    Four lithotypes, vitrain, bright clarain, clarain, and fusain, were hand-picked from the core of the Pennsylvanian Springfield Coal Member (Petersburg Formation) in Illinois. These lithotypes were analyzed petrographically and for meso- and micropore characteristics, functional group distribution using FTIR techniques, and fluidity. High-pressure CO2 adsorption isotherm analyses of these lithotypes were performed and, subsequently, all samples were reanalyzed in order to investigate the effects of CO2. After the high-pressure adsorption isotherm analysis was conducted and the samples were reanalyzed, there was a decrease in BET surface area for vitrain from 31.5 m2/g in the original sample to 28.5 m2/g, as determined by low-pressure nitrogen adsorption. Bright clarain and clarain recorded a minimal decrease in BET surface area, whereas for fusain there was an increase from 6.6 m2/g to 7.9 m2/g. Using low-pressure CO2 adsorption techniques, a small decrease in the quantity of the adsorbed CO2 is recorded for vitrain and bright clarain, no difference is observed for clarain, and there is an increase in the quantity of the adsorbed CO2 for fusain. Comparison of the FTIR spectra before and after CO2 injection for all lithotypes showed no differences with respect to functional group distribution, arguing against a chemical nature of the CO2 adsorption. Gieseler plastometry shows that: 1) the softening temperature is higher for the post-CO2 sample (389.5 °C vs. 386 °C); 2) the solidification temperature is lower for the post-CO2 sample (443.5 °C vs. 451 °C); and 3) the maximum fluidity is significantly lower for the post-CO2 sample (4 ddpm vs. 14 ddpm). © 2010 Elsevier B.V.

  20. Physical Selectivity of Molecularly Imprinted polymers evaluated through free volume size distributions derived from Positron Lifetime Spectroscopy

    NASA Astrophysics Data System (ADS)

    Pasang, T.; Ranganathaiah, C.

    2015-06-01

    The technique of imprinting molecules of various sizes in a stable polymer matrix has given rise to a multitude of applications. Once the template molecule is extracted from the polymer matrix, it leaves behind a cavity which is physically (size and shape) and chemically (functional binding site) compatible with the particular template molecule. Positron Annihilation Lifetime Spectroscopy (PALS) is a well-known technique for measuring cavity sizes precisely at the nanoscale, yet it has not been used effectively in the field of MIPs. The method is capable of measuring nanopores and is hence suitable for better understanding the physical selectivity of MIPs. With this idea in mind, we have prepared molecularly imprinted polymers (MIPs) with methacrylic acid (MAA) as monomer and EGDMA as cross-linker in different molar ratios for three template molecules of different sizes, viz. 4-Chlorophenol (4CP) (2.29 Å), 2-Naphthol (2NP) (3.36 Å) and Phenolphthalein (PP) (4.47 Å). FTIR and dye chemical reactions are used to confirm the complete extraction of the template molecules from the polymer matrix. The free volume size and its distribution have been derived from the measured o-Ps lifetime spectra. Based on the free volume distribution analysis, the percentage of functional cavities for the three template molecules is determined. The percentage of functional binding cavities for 4CP molecules was found to be 70.2%, the rest being native cavities. Similarly, it is 81.5% for 2NP and nearly 100% for PP. PALS therefore proves to be a precise and accurate method for determining the physical selectivity of MIPs.
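
    The conversion from a measured o-Ps lifetime to a spherical hole radius commonly uses the Tao-Eldrup relation; a sketch of that inversion is given below (using scipy for root finding). The example lifetime is illustrative, and since the abstract does not state the authors' calibration, the standard model with an electron-layer thickness of 1.66 Å is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

DELTA_R = 1.66  # empirical electron-layer thickness [angstrom], standard value

def tau_from_radius(R):
    """Tao-Eldrup o-Ps lifetime [ns] for a spherical hole of radius R [angstrom]."""
    R0 = R + DELTA_R
    return 0.5 / (1.0 - R / R0 + np.sin(2.0 * np.pi * R / R0) / (2.0 * np.pi))

def radius_from_tau(tau_ns):
    """Invert the monotone Tao-Eldrup relation by bracketed root finding."""
    return brentq(lambda R: tau_from_radius(R) - tau_ns, 0.1, 20.0)

tau = 2.0  # example o-Ps lifetime [ns], illustrative
R = radius_from_tau(tau)
print(f"hole radius = {R:.2f} A, free volume = {4/3 * np.pi * R**3:.1f} A^3")
```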

  1. Finite Element Aircraft Simulation of Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E.

    1997-01-01

    A turbulence model has been developed for real-time aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to conventional aircraft, V/STOL aircraft, and disc rotor model helicopter simulations. Vehicle angular activity in response to turbulence is computed from geometrical and temporal relationships rather than by using the conventional continuum approximations that assume uniform gust immersion and low-frequency responses. By using techniques similar to those recently developed for blade-element rotor models, the angular-rate filters of conventional turbulence models are not required. The model produces rotational rates as well as air mass translational velocities in response to both stochastic and deterministic disturbances, where the discrete gusts and turbulence magnitudes may be correlated with significant terrain features or ship models. Assuming isotropy, a two-dimensional vertical turbulence field is created. A novel Gaussian interpolation technique is used to distribute vertical turbulence over the wing span or lateral rotor disc, and this distribution is used to compute roll responses. Air mass velocities are applied at significant centers of pressure in the computation of the aircraft's pitch and roll responses.

  2. First spin-resolved electron distributions in crystals from combined polarized neutron and X-ray diffraction experiments.

    PubMed

    Deutsch, Maxime; Gillon, Béatrice; Claiser, Nicolas; Gillet, Jean-Michel; Lecomte, Claude; Souhassou, Mohamed

    2014-05-01

    Since the 1980s it has been possible to probe crystallized matter, thanks to X-ray or neutron scattering techniques, to obtain an accurate charge density or spin distribution at the atomic scale. Despite the description of the same physical quantity (electron density) and tremendous development of sources, detectors, data treatment software etc., these different techniques evolved separately with one model per experiment. However, a breakthrough was recently made by the development of a common model in order to combine information coming from all these different experiments. Here we report the first experimental determination of spin-resolved electron density obtained by a combined treatment of X-ray, neutron and polarized neutron diffraction data. These experimental spin up and spin down densities compare very well with density functional theory (DFT) calculations and also confirm a theoretical prediction made in 1985 which claims that majority spin electrons should have a more contracted distribution around the nucleus than minority spin electrons. Topological analysis of the resulting experimental spin-resolved electron density is also briefly discussed.

  3. System of HPC content archiving

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Ivashchenko, A.

    2017-12-01

    This work aims to develop a system that effectively solves the problem of storing and analyzing files containing text data, using modern software development tools, techniques and approaches. The main challenge, defined at the problem formulation stage, of storing a large number of text documents is addressed with functionality such as full-text search and document clustering based on document contents. The main system features can be described in terms of a distributed multilevel architecture and the flexibility and interchangeability of components, achieved through the encapsulation of standard functionality in independent executable modules.

  4. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
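
    As a minimal illustration of the first step of such an analysis, the sketch below builds a Hankel matrix from a synthetic two-exponential decay and inspects its singular values to estimate the number of exponential components; the linear prediction step that extracts the actual rates is omitted, and the signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200) * 0.5e-3                              # time axis [s]
y = 0.7 * np.exp(-t / 2e-3) + 0.3 * np.exp(-t / 20e-3)   # two-exponential decay
y += 1e-4 * rng.standard_normal(t.size)                  # measurement noise

m = 60                                                   # Hankel window length
H = np.array([y[i:i + m] for i in range(t.size - m)])    # Hankel data matrix
s = np.linalg.svd(H, compute_uv=False)
print(np.round(s[:6] / s[0], 5))   # two dominant values -> two exponentials
```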

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Song

    CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a certain physical space. Since the numerical results of CFD computation are very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabyte), and more and more interaction between the user and the datasets is required. For traditional VR applications, the limitation of computing power is a major factor preventing the effective visualization of large datasets. This thesis presents a new system design to speed up the traditional VR application by using parallel computing and distributed computing, together with the idea of using handheld devices to enhance the interaction between a user and a VR CFD application. Techniques from different research areas, including scientific visualization, parallel computing, distributed computing and graphical user interface design, are used in the development of the final system. As a result, the new system can flexibly be built on a heterogeneous computing environment and dramatically shortens the computation time.

  6. Continuous Wave Ring-Down Spectroscopy for Velocity Distribution Measurements in Plasma

    NASA Astrophysics Data System (ADS)

    McCarren, Dustin W.

    Cavity Ring-Down Spectroscopy (CRDS) is a proven, ultra-sensitive, cavity-enhanced absorption spectroscopy technique. When combined with a continuous wave (CW) diode laser that has a sufficiently narrow line width, the Doppler-broadened absorption line, i.e., the velocity distribution function (VDF) of the absorbing species, can be measured. Measurements of VDFs can be made using established techniques such as laser induced fluorescence (LIF). However, LIF suffers from the requirement that the initial state of the LIF sequence have a substantial density and that the excitation scheme fluoresce at an easily detectable wavelength. This usually limits LIF to ions and atoms with large metastable state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower density populations of ion and atom states. Also, as a direct absorption technique, CW-CRDS measurements need only be concerned with the species' absorption wavelength, and they provide an absolute measure of the line-integrated initial state density. Presented in this work are measurements of argon ion and neutral VDFs in a helicon plasma using CW-CRDS and LIF.
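
    The conversion from a Doppler-broadened line shape to a velocity axis uses the first-order Doppler shift; a sketch with assumed transition frequency, species mass, and temperature is given below. It illustrates only the axis conversion for a Doppler-dominated line, not the ring-down fitting itself.

```python
import numpy as np

c = 2.998e8             # speed of light [m/s]
nu0 = 4.4e14            # transition frequency [Hz], assumed value
m_ion = 40 * 1.66e-27   # argon ion mass [kg]
T = 0.5 * 1.602e-19     # assumed ion temperature, 0.5 eV in joules

# Absorption vs. detuning for a Doppler-dominated line is proportional to the
# 1D velocity distribution along the laser axis: v = c * (nu - nu0) / nu0
detuning = np.linspace(-6e9, 6e9, 401)           # laser scan [Hz]
v = c * detuning / nu0                           # velocity axis [m/s]
profile = np.exp(-m_ion * v**2 / (2.0 * T))      # synthetic Maxwellian line shape
f_v = profile / (profile.sum() * (v[1] - v[0]))  # normalized VDF f(v)

v_th = np.sqrt(T / m_ion)
print(f"thermal speed {v_th:.0f} m/s, Gaussian FWHM {2.355 * v_th:.0f} m/s")
```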

  7. Representations and uses of light distribution functions

    NASA Astrophysics Data System (ADS)

    Lalonde, Paul Albert

    1998-11-01

    At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on the representation of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high-frequency effects in the reflection, can easily require one and a half million samples. Once acquired, these data require some form of interpolation to use. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs, and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms. The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to evaluate efficiently some integrals that appear in shading computation, which allows fast, accurate computation of local shading. The representation can be used to represent light fields and is used to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.
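
    The compression idea can be illustrated with a hand-rolled multi-level Haar transform on a 1D slice of a reflectance lobe: threshold the small detail coefficients and reconstruct. The lobe and threshold are illustrative; the dissertation's actual representation is four-dimensional and considerably more elaborate.

```python
import numpy as np

def haar_fwd(x):
    """Full multi-level orthonormal Haar decomposition of a power-of-two signal."""
    details = []
    while x.size > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # averages
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # details
        details.append(d)
        x = a
    return x, details[::-1]                      # coarsest detail level first

def haar_inv(a, details):
    for d in details:
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        a = x
    return a

theta = np.linspace(0.0, np.pi / 2.0, 256)
lobe = np.cos(theta) ** 32                       # sharp specular-like lobe

a, details = haar_fwd(lobe)
kept = [np.where(np.abs(d) > 1e-3, d, 0.0) for d in details]   # threshold
rec = haar_inv(a, kept)

nnz = sum(int((d != 0).sum()) for d in kept) + a.size
print(f"kept {nnz}/{lobe.size} coefficients, max error {np.abs(rec - lobe).max():.2e}")
```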

  8. Time-sliced perturbation theory for large scale structure I: general formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  9. Calculation of photoionization differential cross sections using complex Gauss-type orbitals.

    PubMed

    Matsuzaki, Rei; Yabushita, Satoshi

    2017-09-05

    Accurate theoretical calculation of photoelectron angular distributions for general molecules is becoming an important tool to image various chemical reactions in real time. We show in this article that not only photoionization total cross sections but also photoelectron angular distributions can be accurately calculated using complex Gauss-type orbital (cGTO) basis functions. Our method can be easily combined with existing quantum chemistry techniques including electron correlation effects, and applied to various molecules. The so-called two-potential formula is applied to represent the transition dipole moment from an initial bound state to a final continuum state in the molecular coordinate frame. The two required continuum functions, the zeroth-order final continuum state and the first-order wave function induced by the photon field, have been variationally obtained using the complex basis function method with a mixture of appropriate cGTOs and conventional real Gauss-type orbitals (GTOs) to represent the continuum orbitals as well as the remaining bound orbitals. The complex orbital exponents of the cGTOs are optimized by fitting to the outgoing Coulomb functions. The efficiency of the current method is demonstrated through the calculations of the asymmetry parameters and molecular-frame photoelectron angular distributions of H2+ and H2. In the calculations of H2, the static exchange and random phase approximations are employed, and the dependence of the results on the basis functions is discussed. © 2017 Wiley Periodicals, Inc.

  10. Modified suture-bridge technique to prevent a marginal dog-ear deformity improves structural integrity after rotator cuff repair.

    PubMed

    Ryu, Keun Jung; Kim, Bang Hyun; Lee, Yohan; Lee, Yoon Seok; Kim, Jae Hwa

    2015-03-01

    The arthroscopic suture-bridge technique has proved to provide biomechanically firm fixation of the torn rotator cuff to the tuberosity by increasing the footprint contact area and pressure. However, a marginal dog-ear deformity is encountered not infrequently when this technique is used, impeding full restoration of the torn cuff. To evaluate the structural and functional outcomes of the use of a modified suture-bridge technique to prevent a marginal dog-ear deformity compared with a conventional suture-bridge method in rotator cuff repair. Cohort study; Level of evidence 2. A consecutive series of 71 patients aged 50 to 65 years who underwent arthroscopic rotator cuff repair for full-thickness medium-sized to massive tears was evaluated. Patients were divided into 2 groups according to repair technique: a conventional suture-bridge technique (34 patients; group A) versus a modified suture-bridge technique to prevent a marginal dog-ear deformity (37 patients; group B). Radiographic evaluations included postoperative cuff integrity using MRI. Functional evaluations included pre- and postoperative range of motion (ROM), pain visual analog scale (VAS), the University of California, Los Angeles (UCLA) shoulder rating scale, the Constant score, and the American Shoulder and Elbow Surgeons (ASES) score. All patients were followed up clinically at a minimum of 1 year. When the 2 surgical techniques were compared, postoperative structural integrity by Sugaya classification showed the distribution of types I:II:III:IV:V to be 4:20:2:4:4 in group A and 20:12:4:0:1 in group B. More subjects in group B had a favorable Sugaya type compared with group A (P < .001). The postoperative healed:retear rate was 26:8 in group A and 36:1 in group B, with a significantly lower retear rate in group B (P = .011). However, there were no significant differences in ROM and all functional outcome scores between the 2 groups postoperatively. When surgical techniques were compared across healed (n = 62) and retear (n = 9) groups, significantly fewer modified suture-bridge technique repairs were found in the retear group (P = .03). There were significant differences between healed and retear groups in functional outcome scores, with worse results in the retear group. A modified suture-bridge technique to prevent a marginal dog-ear deformity provided better structural outcomes than a conventional suture-bridge technique for medium-sized to massive rotator cuff tears. This technique may ultimately provide better functional outcomes by decreasing the retear rate. © 2014 The Author(s).

  11. Determination of potential solar power sites in the United States based upon satellite cloud observations

    NASA Technical Reports Server (NTRS)

    Hiser, H. W.; Senn, H. V.; Bukkapatnam, S. T.; Akyuzlu, K.

    1977-01-01

    The use of cloud images in the visual spectrum from the SMS/GOES geostationary satellites to determine the hourly distribution of sunshine on a mesoscale in the continental United States excluding Alaska is presented. Cloud coverage and density as a function of time of day and season are evaluated through the use of digital data processing techniques. Low density cirrus clouds are less detrimental to solar energy collection than other types; and clouds in the morning and evening are less detrimental than those during midday hours of maximum insolation. Seasonal geographic distributions of cloud cover/sunshine are converted to langleys of solar radiation received at the earth's surface through relationships developed from long term measurements at six widely distributed stations.

  12. The NATO III 5 MHz Distribution System

    NASA Technical Reports Server (NTRS)

    Vulcan, A.; Bloch, M.

    1981-01-01

    A high performance 5 MHz distribution system is described which has extremely low phase noise and jitter characteristics and provides multiple buffered outputs. The system is completely redundant with automatic switchover and is self-testing. Since the 5 MHz reference signals distributed by the NATO III distribution system are used for up-conversion and multiplicative functions, a high degree of phase stability and isolation between outputs is necessary. Unique circuit design and packaging concepts ensure that the isolation between outputs is sufficient to guarantee a phase perturbation of less than 0.0016 deg when other outputs are open circuited, short circuited or terminated in 50 ohms. Circuit design techniques include high-isolation cascode amplifiers. Negative feedback stabilizes system gain and minimizes circuit phase noise contributions. Balanced lines, in lieu of single-ended coaxial transmission media, minimize pickup.

  13. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons are presented between the spatial distributions of large-scale structures in turbulent flow and distributions inferred from time-dependent signals of stationary probes via the Taylor hypothesis. The study investigated the near field of a 7.62 cm circular air jet at Re = 32,000, in which coherent structures were induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or streamwise velocity produces large distortions.

  14. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-04-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.

  15. Nonclassical thermal-state superpositions: Analytical evolution law and decoherence behavior

    NASA Astrophysics Data System (ADS)

    Meng, Xiang-guo; Goan, Hsi-Sheng; Wang, Ji-suo; Zhang, Ran

    2018-03-01

    Employing the integration technique within normal products of bosonic operators, we present normal product representations of thermal-state superpositions and investigate their nonclassical features, such as quadrature squeezing, sub-Poissonian distribution, and partial negativity of the Wigner function. We also analytically and numerically investigate their evolution law and decoherence characteristics in an amplitude-decay model via the variations of the probability distributions and the negative volumes of Wigner functions in phase space. The results indicate that the evolution formulas of two thermal component states for amplitude decay can be viewed as the same integral form as a displaced thermal state ρ(V , d) , but governed by the combined action of photon loss and thermal noise. In addition, the larger values of the displacement d and noise V lead to faster decoherence for thermal-state superpositions.

  16. Using field-particle correlations to study auroral electron acceleration in the LAPD

    NASA Astrophysics Data System (ADS)

    Schroeder, J. W. R.; Howes, G. G.; Skiff, F.; Kletzing, C. A.; Carter, T. A.; Vincena, S.; Dorfman, S.

    2017-10-01

    Resonant nonlinear Alfvén wave-particle interactions are believed to contribute to the acceleration of auroral electrons. Experiments in the Large Plasma Device (LAPD) at UCLA have been performed with the goal of providing the first direct measurement of this nonlinear process. Recent progress includes a measurement of linear fluctuations of the electron distribution function associated with the production of inertial Alfvén waves in the LAPD. These linear measurements have been analyzed using the field-particle correlation technique to study the nonlinear transfer of energy between the Alfvén wave electric fields and the electron distribution function. Results of this analysis indicate collisions alter the resonant signature of the field-particle correlation, and implications for resonant Alfvénic electron acceleration in the LAPD are considered. This work was supported by NSF, DOE, and NASA.
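
    A sketch of the field-particle correlation estimator in one commonly cited form, correlating E(t) with the velocity derivative of the measured distribution fluctuation weighted by q*v^2/2, is given below. The synthetic signals merely exercise the estimator; they are not a model of the LAPD experiment, and the exact correlation form used in this work may differ.

```python
import numpy as np

q = -1.602e-19                              # electron charge [C]
v = np.linspace(-3e6, 3e6, 201)             # velocity grid [m/s]
t = np.linspace(0.0, 1e-3, 2000)            # time samples [s]
omega = 2.0 * np.pi * 20e3                  # wave frequency [rad/s]

# Synthetic perturbed distribution df(v, t) and field E(t), phase-correlated
f0 = np.exp(-(v / 1e6) ** 2)
df = 0.01 * np.outer(np.gradient(f0, v), np.cos(omega * t))
E = 5.0 * np.cos(omega * t)                 # electric field [V/m]

dfdv = np.gradient(df, v, axis=0)
# field-particle correlation: C(v) = < -q (v^2/2) d(df)/dv * E(t) >_t
C = (-q * 0.5 * v[:, None] ** 2 * dfdv * E[None, :]).mean(axis=1)
print("peak energy-transfer velocity:", v[np.argmax(np.abs(C))], "m/s")
```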

  17. Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.

    PubMed

    Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu

    2015-10-01

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique. It is characterized by a collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of the particles' historical promising pbests are lost except their current pbests. In order to solve this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historical promising pbests. Each particle has three candidate positions, which are generated from the historical memory, the particle's current pbest, and the swarm's gbest. Then the best candidate position is adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms.
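
    A simplified sketch in the spirit of HMPSO is given below: alongside the standard velocity-update candidate, a second candidate is drawn from a Gaussian estimated over an archive of historical pbests and a third perturbs the current pbest, with the best of the three adopted. All parameters and candidate-generation details are illustrative, not the paper's exact operators.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n, iters = 10, 30, 300
f = lambda x: np.sum(x**2, axis=-1)                  # sphere objective

X = rng.uniform(-5, 5, (n, dim)); V = np.zeros((n, dim))
P = X.copy(); Pf = f(P); g = P[Pf.argmin()].copy()
archive = [P.copy()]                                  # historical pbest memory

w, c1, c2 = 0.72, 1.5, 1.5
for _ in range(iters):
    V = w*V + c1*rng.random((n, dim))*(P - X) + c2*rng.random((n, dim))*(g - X)
    cand1 = X + V                                     # standard PSO move
    hist = np.concatenate(archive)
    mu, sd = hist.mean(0), hist.std(0) + 1e-9
    cand2 = rng.normal(mu, sd, (n, dim))              # EDA-style sample from memory
    cand3 = P + 0.1 * rng.standard_normal((n, dim))   # local move around pbest
    cands = np.stack([cand1, cand2, cand3])
    fc = f(cands)
    X = cands[fc.argmin(0), np.arange(n)]             # adopt best of three
    fx = f(X)
    better = fx < Pf
    P[better] = X[better]; Pf[better] = fx[better]
    archive.append(P.copy()); archive = archive[-10:] # bounded memory
    g = P[Pf.argmin()].copy()

print(f(g))   # should be small for the sphere function
```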

  18. Stability of uncertain systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Blankenship, G. L.

    1971-01-01

    The asymptotic properties of feedback systems are discussed, containing uncertain parameters and subjected to stochastic perturbations. The approach is functional analytic in flavor and thereby avoids the use of Markov techniques and auxiliary Lyapunov functionals characteristic of the existing work in this area. The results are given for the probability distributions of the accessible signals in the system and are proved using the Prohorov theory of the convergence of measures. For general nonlinear systems, a result similar to the small loop-gain theorem of deterministic stability theory is given. Boundedness is a property of the induced distributions of the signals and not the usual notion of boundedness in norm. For the special class of feedback systems formed by the cascade of a white noise, a sector nonlinearity and convolution operator conditions are given to insure the total boundedness of the overall feedback system.

  19. A novel crystallization method for visualizing the membrane localization of potassium channels.

    PubMed Central

    Lopatin, A N; Makhina, E N; Nichols, C G

    1998-01-01

    The high permeability of K+ channels to monovalent thallium (Tl+) ions and the low solubility of thallium bromide salt were used to develop a simple yet very sensitive approach to the study of membrane localization of potassium channels. K+ channels (Kir1.1, Kir2.1, Kir2.3, Kv2.1), were expressed in Xenopus oocytes and loaded with Br ions by microinjection. Oocytes were then exposed to extracellular thallium. Under conditions favoring influx of Tl+ ions (negative membrane potential under voltage clamp, or high concentration of extracellular Tl+), crystals of TlBr, visible under low-power microscopy, formed under the membrane in places of high density of K+ channels. Crystals were not formed in uninjected oocytes, but were formed in oocytes expressing as little as 5 microS K+ conductance. The number of observed crystals was much lower than the estimated number of functional channels. Based on the pattern of crystal formation, K+ channels appear to be expressed mostly around the point of cRNA injection when injected either into the animal or vegetal hemisphere. In addition to this pseudopolarized distribution of K+ channels due to localized microinjection of cRNA, a naturally polarized (animal/vegetal side) distribution of K+ channels was also frequently observed when K+ channel cRNA was injected at the equator. A second novel "agarose-hemiclamp" technique was developed to permit direct measurements of K+ currents from different hemispheres of oocytes under two-microelectrode voltage clamp. This technique, together with direct patch-clamping of patches of membrane in regions of high crystal density, confirmed that the localization of TlBr crystals corresponded to the localization of functional K+ channels and suggested a clustered organization of functional channels. With appropriate permeant ion/counterion pairs, this approach may be applicable to the visualization of the membrane distribution of any functional ion channel. PMID:9591643

  20. A novel crystallization method for visualizing the membrane localization of potassium channels.

    PubMed

    Lopatin, A N; Makhina, E N; Nichols, C G

    1998-05-01

    The high permeability of K+ channels to monovalent thallium (Tl+) ions and the low solubility of thallium bromide salt were used to develop a simple yet very sensitive approach to the study of membrane localization of potassium channels. K+ channels (Kir1.1, Kir2.1, Kir2.3, Kv2.1), were expressed in Xenopus oocytes and loaded with Br ions by microinjection. Oocytes were then exposed to extracellular thallium. Under conditions favoring influx of Tl+ ions (negative membrane potential under voltage clamp, or high concentration of extracellular Tl+), crystals of TlBr, visible under low-power microscopy, formed under the membrane in places of high density of K+ channels. Crystals were not formed in uninjected oocytes, but were formed in oocytes expressing as little as 5 microS K+ conductance. The number of observed crystals was much lower than the estimated number of functional channels. Based on the pattern of crystal formation, K+ channels appear to be expressed mostly around the point of cRNA injection when injected either into the animal or vegetal hemisphere. In addition to this pseudopolarized distribution of K+ channels due to localized microinjection of cRNA, a naturally polarized (animal/vegetal side) distribution of K+ channels was also frequently observed when K+ channel cRNA was injected at the equator. A second novel "agarose-hemiclamp" technique was developed to permit direct measurements of K+ currents from different hemispheres of oocytes under two-microelectrode voltage clamp. This technique, together with direct patch-clamping of patches of membrane in regions of high crystal density, confirmed that the localization of TlBr crystals corresponded to the localization of functional K+ channels and suggested a clustered organization of functional channels. With appropriate permeant ion/counterion pairs, this approach may be applicable to the visualization of the membrane distribution of any functional ion channel.

  1. Unified gas-kinetic scheme with multigrid convergence for rarefied flow study

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2017-09-01

    The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from kinetic particle transport to hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed has been greatly improved in all flow regimes, from rarefied to continuum. The multigrid implicit UGKS (MIUGKS) is used in the non-equilibrium flow study, which includes microflows, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than that of the DSMC. Even for the hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method in obtaining a convergent steady state solution.
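
    To illustrate the kind of coarse-grid acceleration involved, the sketch below implements a generic two-grid V-cycle for a 1D Poisson problem (Gauss-Seidel smoothing, injection restriction, linear prolongation, direct coarse solve). It shows only the multigrid pattern, not the UGKS discretization itself.

```python
import numpy as np

def relax(u, f, h, sweeps):
    """Gauss-Seidel smoothing for u'' = f with fixed boundary values."""
    for _ in range(sweeps):
        for i in range(1, u.size - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    return r

def coarse_solve(rc, H):
    """Direct solve of e'' = r on the coarse grid with zero boundaries."""
    m = rc.size - 2
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / (H * H)
    ec = np.zeros(rc.size)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def v_cycle(u, f, h):
    u = relax(u, f, h, 3)                        # pre-smooth
    rc = residual(u, f, h)[::2]                  # restrict residual (injection)
    ec = coarse_solve(rc, 2.0 * h)               # coarse-grid correction
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongate
    return relax(u, f, h, 3)                     # post-smooth

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = -np.pi**2 * np.sin(np.pi * x)                # exact solution: sin(pi x)

u = np.zeros(n)
for _ in range(8):
    u = v_cycle(u, f, h)
print(np.abs(u - np.sin(np.pi * x)).max())       # near discretization error
```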

  2. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes the performance of the Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
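
    A sketch of fuzzy-entropy threshold selection in the Huang-Wang spirit is given below: each gray level receives a membership based on its distance from its class mean, and the threshold minimizing the total fuzzy entropy is chosen. The membership form and the synthetic bimodal image are illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 12, 4000), rng.normal(170, 20, 6000)])
img = np.clip(img, 0, 255).astype(int)           # synthetic bimodal "image"

hist = np.bincount(img, minlength=256).astype(float)
levels = np.arange(256)
C = img.max() - img.min()                         # membership normalization span

def fuzzy_entropy(t):
    n0, n1 = hist[:t + 1].sum(), hist[t + 1:].sum()
    if n0 == 0 or n1 == 0:
        return np.inf
    m0 = (levels[:t + 1] * hist[:t + 1]).sum() / n0     # background class mean
    m1 = (levels[t + 1:] * hist[t + 1:]).sum() / n1     # foreground class mean
    mu = np.where(levels <= t,
                  1.0 / (1.0 + np.abs(levels - m0) / C),
                  1.0 / (1.0 + np.abs(levels - m1) / C))
    mu = np.clip(mu, 1e-9, 1.0 - 1e-9)
    S = -mu * np.log(mu) - (1.0 - mu) * np.log(1.0 - mu)  # Shannon function
    return (S * hist).sum()

t_best = min(range(1, 255), key=fuzzy_entropy)
print("selected threshold:", t_best)   # falls between the two modes
```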

  3. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes the performance of Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.

  4. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution of the steady-state Milne problem with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, linear extrapolation distance, reflectivity and transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
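
    The RVT rule at the heart of the method states that for a monotone transformation y = g(x), the transformed density is f_Y(y) = f_X(g^{-1}(y)) |dg^{-1}/dy|. The sketch below applies it to an exponential input (one of the distributions considered) through an illustrative monotone map and verifies the resulting 1-PDF by sampling; it is not the paper's Milne-problem transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                            # rate of the exponential input X
g = lambda x: np.sqrt(x)             # monotone transform, illustrative choice
g_inv = lambda y: y**2
dg_inv = lambda y: 2.0 * y

f_X = lambda x: lam * np.exp(-lam * x)
f_Y = lambda y: f_X(g_inv(y)) * np.abs(dg_inv(y))   # RVT formula

samples = g(rng.exponential(1.0 / lam, 200_000))
edges = np.linspace(0.01, 2.0, 100)
hist, edges = np.histogram(samples, bins=edges, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print(np.max(np.abs(hist - f_Y(centers))))   # small vs. peak density ~1.2
```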

  5. Spectral solver for multi-scale plasma physics simulations with dynamically adaptive number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec

    2015-06-01

    A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% saving of total simulation time in the Landau and two-stream instability test cases, respectively.
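
    A sketch of the core expansion step is given below: project a velocity distribution onto probabilists' Hermite polynomials and truncate adaptively once coefficients fall below a tolerance. The shifted-Maxwellian test function and the truncation rule are illustrative stand-ins for the paper's adaptive strategy.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

v = np.linspace(-6.0, 6.0, 400)
dv = v[1] - v[0]
f = np.exp(-(v - 0.8)**2 / 2.0)          # shifted Maxwellian-like f(v)
w = np.exp(-v**2 / 2.0)                  # Gaussian weight of the expansion

N_max, tol = 30, 1e-8
coeffs = []
for n in range(N_max):
    Hn = He.hermeval(v, [0.0] * n + [1.0])        # He_n(v)
    # projection: <f, He_n> / ||He_n||^2 with weight already inside f
    c = (f * Hn).sum() * dv / (math.sqrt(2.0 * math.pi) * math.factorial(n))
    coeffs.append(c)
    if n > 2 and abs(c) < tol:                    # adaptive truncation
        break

rec = w * He.hermeval(v, coeffs)                  # f(v) ~ w(v) * sum c_n He_n(v)
print(f"kept {len(coeffs)} Hermite modes, max error {np.abs(rec - f).max():.2e}")
```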

  6. Synthesis, crystal structure, vibrational spectra and theoretical calculations of quantum chemistry of a potential antimicrobial Meldrum's acid derivative

    NASA Astrophysics Data System (ADS)

    Campelo, M. J. M.; Freire, P. T. C.; Mendes Filho, J.; de Toledo, T. A.; Teixeira, A. M. R.; da Silva, L. E.; Bento, R. R. F.; Faria, J. L. B.; Pizani, P. S.; Gusmão, G. O. M.; Coutinho, H. D. M.; Oliveira, M. T. A.

    2017-10-01

    A new derivative of Meldrum's acid 5-((5-chloropyridin-2-ylamino)methylene)-2,2-dimethyl-1,3-dioxane-4,6-dione (CYMM) of molecular formula C12H11ClN2O4 was synthesized and structurally characterized using single crystal X-ray diffraction technique. The vibrational properties of the crystal were studied by Fourier Transform infrared (FT-IR), Fourier Transform Raman (FT-Raman) techniques and theoretical calculations of quantum chemistry using Density functional theory (DFT) and Density functional perturbation theory (DFPT). A comparison with experimental spectra allowed the assignment of all the normal modes. The descriptions of the normal modes were carried by means of potential energy distribution (PED). Additionally, analysis of the antimicrobial activity and antibiotic resistance modulatory activity was carried out to evaluate the antibacterial potential of the CYMM.

  7. Robust passivity analysis for discrete-time recurrent neural networks with mixed delays

    NASA Astrophysics Data System (ADS)

    Huang, Chuan-Kuei; Shu, Yu-Jeng; Chang, Koan-Yuh; Shou, Ho-Nien; Lu, Chien-Yu

    2015-02-01

    This article considers the robust passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with mixed time-delays and uncertain parameters. The mixed time-delays consist of both discrete time-varying and distributed time-delays in a given range, and the uncertain parameters are norm-bounded. The activation functions are assumed to be globally Lipschitz continuous. Based on a new bounding technique and an appropriate Lyapunov functional, a sufficient condition is derived that guarantees the desired robust passivity of the DRNNs; the condition is expressed in terms of a family of linear matrix inequalities (LMIs). Some free-weighting matrices are introduced to reduce the conservatism of the criterion through the bounding technique. A numerical example is given to illustrate the effectiveness and applicability of the approach.

  8. deFUME: Dynamic exploration of functional metagenomic sequencing data.

    PubMed

    van der Helm, Eric; Geertz-Hansen, Henrik Marcus; Genee, Hans Jasper; Malla, Sailesh; Sommer, Morten Otto Alexander

    2015-07-31

    Functional metagenomic selections represent a powerful technique that is widely applied for the identification of novel genes from complex metagenomic sources. However, whereas hundreds to thousands of clones can be easily generated and sequenced over a few days of experiments, analyzing the data is time consuming and constitutes a major bottleneck for experimental researchers in the field. Here we present the deFUME web server, an easy-to-use web-based interface for processing, annotation and visualization of functional metagenomics sequencing data, tailored to meet the requirements of non-bioinformaticians. The web server integrates multiple analysis steps into a single workflow: read assembly, open reading frame prediction, and annotation with BLAST, InterPro and GO classifiers. Analysis results are visualized in an online dynamic web-interface. The deFUME web server provides a fast track from raw sequence to a comprehensive visual data overview that facilitates effortless inspection of gene function, clustering and distribution. The web server is available at cbs.dtu.dk/services/deFUME/ and the source code is distributed at github.com/EvdH0/deFUME.

  9. Force-field functor theory: classical force-fields which reproduce equilibrium quantum distributions

    PubMed Central

    Babbush, Ryan; Parkhill, John; Aspuru-Guzik, Alán

    2013-01-01

    Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs and then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory. PMID:24790954

  10. Efficient calculation of the energy of a molecule in an arbitrary electric field

    NASA Astrophysics Data System (ADS)

    Pulay, Peter; Janowski, Tomasz

    In thermodynamic (e.g., Monte Carlo) simulations with electronic embedding, the energy of the active site or solute must be calculated for millions of configurations of the environment (solvent or protein matrix) to obtain reliable statistics. This precludes the use of accurate but expensive ab initio and density functional techniques. Except for the immediate neighbors, the effect of the environment is electrostatic. We show that the energy of a molecule in the irregular field of the environment can be determined very efficiently by expanding the electric potential in known functions, and precalculating the first and second order response of the molecule to the components of the potential. These generalized multipole moments and polarizabilities allow the calculation of the energy of the system without further ab initio calculations. Several expansion functions were explored: polynomials, distributed inverse powers, and sine functions. The latter provide the numerically most stable fit but require new types of integrals. Distributed inverse powers can be simulated using dummy atoms, and energies calculated this way provide a very good approximation to the actual energies in the field of the environment.
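
    A hedged sketch of the precomputed-response evaluation: once the generalized moments m (first-order responses) and generalized polarizabilities A (second-order responses) are tabulated, the energy in any expanded potential follows from E(V) ≈ E0 + m·V + ½ V·A·V at negligible cost. All numbers below are placeholders, not ab initio data.

```python
# Hedged sketch of the precomputed-response evaluation, E(V) ~ E0 + m.V
# + 0.5 * V.A.V, where V holds the expansion coefficients of the environment
# potential, m the generalized moments and A the generalized polarizabilities.
# All numbers are placeholders, not ab initio data.
import numpy as np

K = 6                                   # number of potential expansion functions
rng = np.random.default_rng(1)
E0 = -76.4                              # unperturbed energy (placeholder)
m = rng.normal(size=K)                  # first-order responses dE/dV_k
A = rng.normal(size=(K, K))
A = 0.5 * (A + A.T)                     # second-order responses, symmetric

def energy(V):
    """Second-order energy of the molecule in the expanded external potential."""
    return E0 + m @ V + 0.5 * V @ A @ V

# Millions of environment configurations now cost only a few flops each.
V_env = rng.normal(scale=0.01, size=K)  # coefficients for one configuration
print(energy(V_env))
```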

  11. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
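
    A minimal sketch of the surrogate idea under stated assumptions: random Fourier features stand in for the paper's random bases, the leapfrog integrator uses only the cheap surrogate gradient, and the Metropolis correction evaluates the exact target so the chain still converges to the correct distribution. The target, feature count and step sizes are illustrative.

```python
# Hedged sketch of surrogate-accelerated HMC: leapfrog uses a cheap
# random-features gradient, the Metropolis test uses the exact target, so the
# chain keeps the correct stationary distribution. Random Fourier features
# stand in for the paper's random bases; all tuning constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 2
log_p = lambda x: -0.5 * np.sum(x ** 2)        # exact target: standard normal
grad_log_p = lambda x: -x                      # used only to train the surrogate

# Fit a random-features surrogate of the gradient by ridge regression.
n_feat, n_train = 64, 200
W = rng.normal(size=(n_feat, d))
b = rng.uniform(0, 2 * np.pi, n_feat)
phi = lambda x: np.cos(x @ W.T + b)            # random Fourier features
X = rng.normal(size=(n_train, d))
coef = np.linalg.solve(phi(X).T @ phi(X) + 1e-3 * np.eye(n_feat),
                       phi(X).T @ grad_log_p(X))
sur_grad = lambda x: phi(x[None])[0] @ coef    # cheap surrogate gradient

def hmc_step(x, eps=0.1, L=20):
    p = rng.normal(size=d)
    x_new, p_new = x.copy(), p.copy()
    for _ in range(L):                         # leapfrog with the surrogate force
        p_new = p_new + 0.5 * eps * sur_grad(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new + 0.5 * eps * sur_grad(x_new)
    # Metropolis correction with the exact log-density keeps pi(x) invariant.
    log_acc = (log_p(x_new) - 0.5 * p_new @ p_new) - (log_p(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_acc else x

x, chain = np.zeros(d), []
for _ in range(1000):
    x = hmc_step(x)
    chain.append(x)
print(np.mean(chain, axis=0))                  # near zero for the toy target
```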

  12. Structure and Dynamics of Quasi-Ordered Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckert, J.; Redondo, A.; Henson, N.J.

    1999-07-09

    The functionality of many materials of both fundamental and technological interest is often critically dependent on the nature and extent of any disorder that may be present. In addition, it is often difficult to understand the nature of disorder in quite well ordered systems. There is therefore an urgent need to develop better tools, both experimental and computational, for the study of such quasi-ordered systems. To this end, the authors have used neutron diffraction studies in an attempt to locate small metal clusters or molecules randomly distributed inside microporous catalytic materials. Specifically, they have used pair distribution function (PDF) analysis, as well as inelastic neutron scattering (INS) spectroscopy, to study interactions between adsorbate molecules and a microporous matrix. They have interfaced these experimental studies with computations of PDF analysis as well as modeling of the dynamics of adsorbates. These techniques will be invaluable in elucidating the local structure and function of many of these classes of materials.
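
    For concreteness, an atomic pair distribution function g(r) can be estimated directly from coordinates, as in the hedged NumPy sketch below; the random "disordered cluster" is a placeholder for real adsorbate/matrix configurations, and finite-cluster edge effects are ignored.

```python
# Minimal sketch: estimate an atomic pair distribution function g(r) from
# coordinates. The random "cluster" is a placeholder for real configurations;
# finite-size edge effects (no periodic boundaries) are ignored.
import numpy as np

rng = np.random.default_rng(2)
L, n = 20.0, 500
pos = rng.uniform(0.0, L, size=(n, 3))            # toy positions (angstroms)

d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
d = d[np.triu_indices(n, k=1)]                    # unique pair distances

r_max, dr = 8.0, 0.05
hist, edges = np.histogram(d, bins=np.arange(0.0, r_max + dr, dr))
r = 0.5 * (edges[:-1] + edges[1:])

# Normalize by the ideal-gas expectation so g(r) -> 1 for uncorrelated atoms.
rho = n / L ** 3
ideal = 0.5 * n * rho * 4.0 * np.pi * r ** 2 * dr
g_r = hist / ideal
print(g_r[:5], g_r[-5:])   # roughly flat; edge effects depress large-r values
```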

  13. Higher-order stochastic differential equations and the positive Wigner function

    NASA Astrophysics Data System (ADS)

    Drummond, P. D.

    2017-12-01

    General higher-order stochastic processes that correspond to any diffusion-type tensor of higher than second order are obtained. The relationship of multivariate higher-order stochastic differential equations with tensor decomposition theory and tensor rank is explained. Techniques for generating the requisite complex higher-order noise are proved to exist either using polar coordinates and γ distributions, or from products of Gaussian variates. This method is shown to allow the calculation of the dynamics of the Wigner function, after it is extended to a complex phase space. The results are illustrated physically through dynamical calculations of the positive Wigner distribution for three-mode parametric downconversion, widely used in quantum optics. The approach eliminates paradoxes arising from truncation of the higher derivative terms in Wigner function time evolution. Anomalous results of negative populations and vacuum scattering found in truncated Wigner quantum simulations in quantum optics and Bose-Einstein condensate dynamics are shown not to occur with this type of stochastic theory.

  14. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory

    NASA Astrophysics Data System (ADS)

    Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-01

    We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ˜1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  15. From nociception to pain perception: imaging the spinal and supraspinal pathways

    PubMed Central

    Brooks, Jonathan; Tracey, Irene

    2005-01-01

    Functional imaging techniques have allowed researchers to look within the brain, and revealed the cortical representation of pain. Initial experiments, performed in the early 1990s, revolutionized pain research, as they demonstrated that pain was not processed in a single cortical area, but in several distributed brain regions. Over the last decade, the roles of these pain centres have been investigated and a clearer picture has emerged of the medial and lateral pain system. In this brief article, we review the imaging literature to date that has allowed these advances to be made, and examine the new frontiers for pain imaging research: imaging the brainstem and other structures involved in the descending control of pain; functional and anatomical connectivity studies of pain processing brain regions; imaging models of neuropathic pain-like states; and going beyond the brain to image spinal function. The ultimate goal of such research is to take these new techniques into the clinic, to investigate and provide new remedies for chronic pain sufferers. PMID:16011543

  16. Integrated analysis of particle interactions at hadron colliders Report of research activities in 2010-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadolsky, Pavel M.

    2015-08-31

    The report summarizes research activities of the project “Integrated analysis of particle interactions” at Southern Methodist University, funded by a 2010 DOE Early Career Research Award, DE-SC0003870. The goal of the project is to provide state-of-the-art predictions in quantum chromodynamics in order to achieve the objectives of the LHC program for studies of electroweak symmetry breaking and new physics searches. We published 19 journal papers focusing on in-depth studies of proton structure and integration of advanced calculations from different areas of particle phenomenology: multi-loop calculations, accurate long-distance hadronic functions, and precise numerical programs. Methods for factorization of QCD cross sections were advanced in order to develop new generations of CTEQ parton distribution functions (PDFs), CT10 and CT14. These distributions provide the core theoretical input for multi-loop perturbative calculations by LHC experimental collaborations. A novel “PDF meta-analysis” technique was invented to streamline applications of PDFs in numerous LHC simulations and to combine PDFs from various groups using multivariate stochastic sampling of PDF parameters. The meta-analysis will help bring the LHC perturbative calculations to a new level of accuracy, while reducing computational effort. The work on parton distributions was complemented by the development of advanced perturbative techniques to predict observables dependent on several momentum scales, including production of massive quarks and transverse momentum resummation at the next-to-next-to-leading order in QCD.

  17. Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.

    ERIC Educational Resources Information Center

    Willett, John B.; Singer, Judith D.

    In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) regression techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…
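
    A hedged sketch of the two-stage estimation the record describes, using statsmodels on simulated data: an unweighted fit supplies residuals, a log-variance regression supplies estimated weights, and the weighted refit changes the standard errors. The variance model below is an assumption of this example, not a prescription from the record.

```python
# Hedged sketch of a two-stage WLS fit with statsmodels on simulated data:
# an OLS fit supplies residuals, a log-variance regression supplies estimated
# weights w = 1/var-hat, and the weighted refit changes the standard errors.
# The variance model below is an assumption of this example.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(1.0, 10.0, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)   # error SD grows with x
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                        # stage 1: unweighted fit
log_r2 = np.log(ols.resid ** 2)                 # model the residual variance
var_fit = sm.OLS(log_r2, X).fit()
w = 1.0 / np.exp(var_fit.fittedvalues)          # estimated 1/variance weights

wls = sm.WLS(y, X, weights=w).fit()             # stage 2: weighted refit
print(ols.bse, wls.bse)                         # standard errors differ
```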

  18. Droplet Sizing Research Program.

    DTIC Science & Technology

    1986-03-10

    of size and velocity distributions is needed. For example, fuel spray studies, aerosol studies, flue gas desulfurization, spray drying, paint... techniques are presented chronologically since there is a logical development as a function of time. Most of the significant technical accomplishments... signals with an apparently different size by using the following logic: droplets that produce a certain visibility are associated with a

  19. A method of using cluster analysis to study statistical dependence in multivariate data

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Card, D. H.; Lyle, G. C.

    1975-01-01

    A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.
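
    As a rough illustration of pairing a clustering score with a Monte Carlo significance test, the sketch below (scikit-learn k-means plus a column-permutation null) tests whether two variables are associated; the cluster count, permutation count and data are illustrative stand-ins for the paper's procedure.

```python
# Rough sketch: score a k-means clustering, then compare it with a Monte
# Carlo null in which one column is permuted, destroying any dependence
# between the variables. Cluster count and permutation count are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
x = rng.normal(size=300)
data = np.column_stack([x, x + 0.2 * rng.normal(size=300)])  # dependent pair

def score(a):
    # Negative within-cluster sum of squares: higher = more compact clusters.
    return -KMeans(n_clusters=3, n_init=5, random_state=0).fit(a).inertia_

obs = score(data)
null = []
for _ in range(99):
    shuffled = data.copy()
    rng.shuffle(shuffled[:, 1])        # independence null via permutation
    null.append(score(shuffled))

p_value = (1 + sum(s >= obs for s in null)) / 100.0
print(p_value)                         # small -> variables are associated
```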

  20. Imaging pathologic pulmonary air and fluid accumulation by functional and absolute EIT.

    PubMed

    Hahn, G; Just, A; Dudykevych, T; Frerichs, I; Hinz, J; Quintel, M; Hellige, G

    2006-05-01

    The increasing use of EIT in clinical research on severely ill lung patients requires a clarification of the influence of pathologic impedance distributions on the validity of the resulting tomograms. Significant accumulation of low-conducting air (e.g. pneumothorax or emphysema) or well-conducting liquid (e.g. haematothorax or atelectases) may conflict with treating the imaging problem as purely linear. First, we investigated the influence of stepwise inflation and deflation by up to 300 ml of air and 300 ml of Ringer solution into the pleural space of five pigs on the resulting tomograms during ventilation at constant tidal volume. Series of EIT images representing relative impedance changes were generated on the basis of a modified Sheffield back projection algorithm and ventilation distribution was displayed as functional (f-EIT) tomograms. In addition, a modified simultaneous iterative reconstruction technique (SIRT) was applied to quantify the resistivity distribution on an absolute level scaled in Omega m (a-EIT). Second, we applied these two EIT techniques on four intensive care patients with inhomogeneous air and fluid distribution and compared the EIT results to computed tomography (CT) and to a reference set of intrathoracic resistivity data of 20 healthy volunteers calculated by SIRT. The results of the animal model show that f-EIT based on back projection is not disturbed by the artificial pneumo- or haematothorax. Application of SIRT allows reliable discrimination and detection of the location and amplitude of pneumo- or haematothorax. These results were supported by the good agreement between the electrical impedance tomograms and CT scans on patients and by the significant differences of regional resistivity data between patients and healthy volunteers.
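
    For readers unfamiliar with SIRT, the sketch below gives a minimal SIRT/SART-type update in NumPy, with inverse row and column sums of the sensitivity matrix as the classical normalizations. The sensitivity matrix, data and "lesion" profile are random placeholders, not an EIT forward model.

```python
# Hedged SIRT/SART-type sketch: x <- x + C * A^T * R * (b - A x), with inverse
# row/column sums of the sensitivity matrix as the classical normalizations.
# The matrix, data and "lesion" are random placeholders, not an EIT model.
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_pix = 600, 400
A = np.abs(rng.normal(size=(n_meas, n_pix)))   # placeholder sensitivities
x_true = np.ones(n_pix)
x_true[150:170] = 5.0                          # resistive "lesion"
b = A @ x_true                                 # noise-free synthetic data

R = 1.0 / A.sum(axis=1)                        # inverse row sums
C = 1.0 / A.sum(axis=0)                        # inverse column sums

x = np.zeros(n_pix)
for _ in range(200):                           # simultaneous iterative update
    x = x + C * (A.T @ (R * (b - A @ x)))
print(np.linalg.norm(b - A @ x))               # residual shrinks with iterations
```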

  1. Application of new techniques in the calibration of the TROPOMI-SWIR instrument (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tol, Paul; van Hees, Richard; van Kempen, Tim; Krijger, Matthijs; Cadot, Sidney; Aben, Ilse; Ludewig, Antje; Dingjan, Jos; Persijn, Stefan; Hoogeveen, Ruud

    2016-10-01

    The Tropospheric Monitoring Instrument (TROPOMI) on-board the Sentinel-5 Precursor satellite is an Earth-observing spectrometer with bands in the ultraviolet, visible, near infrared and short-wave infrared (SWIR). It provides daily global coverage of atmospheric trace gases relevant for tropospheric air quality and climate research. Three new techniques will be presented that are unique for the TROPOMI-SWIR spectrometer. The retrieval of methane and CO columns from the data of the SWIR band requires, for each detector pixel, an accurate instrument spectral response function (ISRF), i.e. the normalized signal as a function of wavelength. A new determination method for Earth-observing instruments has been used in the on-ground calibration, based on measurements with a SWIR optical parametric oscillator (OPO) that was scanned over the whole TROPOMI-SWIR spectral range. The calibration algorithm derives the ISRF without needing the absolute wavelength during the measurement. The same OPO has also been used to determine the two-dimensional stray-light distribution for each SWIR pixel with a dynamic range of 7 orders of magnitude. This was achieved by combining measurements at several exposure times and taking saturation into account. The correction algorithm and data are designed to remove the mean stray-light distribution and a reflection that moves relative to the direct image, within the strict constraints of the available time for the L01b processing. A third new technique is an alternative calibration of the SWIR absolute radiance and irradiance using a black body at the temperature of melting silver. Unlike a standard FEL lamp, this source does not have to be calibrated itself, because the temperature is very stable and well known. Measurement methods, data analyses, correction algorithms and limitations of the new techniques will be presented.

  2. Captured metagenomics: large-scale targeting of genes based on ‘sequence capture’ reveals functional diversity in soils

    PubMed Central

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag

    2015-01-01

    Microbial enzyme diversity is a key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing that is required. In this study, we have applied a captured metagenomics technique to functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries, where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. PMID:26490729

  3. Real-Time Impact Visualization Inspection of Aerospace Composite Structures with Distributed Sensors.

    PubMed

    Si, Liang; Baier, Horst

    2015-07-08

    For the future design of smart aerospace structures, the development and application of a reliable, real-time and automatic monitoring and diagnostic technique is essential. Thus, with distributed sensor networks, a real-time automatic structural health monitoring (SHM) technique is designed and investigated to monitor and predict the locations and force magnitudes of unforeseen foreign impacts on composite structures and to estimate the structural state in real time when impacts occur. The proposed smart impact visualization inspection (IVI) technique mainly consists of five functional modules: the signal data preprocessing (SDP), the forward model generator (FMG), the impact positioning calculator (IPC), the inverse model operator (IMO) and the structural state estimator (SSE). To verify the practicality of the proposed IVI technique, various structural configurations are considered: a normal CFRP panel and another CFRP panel with "orange peel" surfaces and a cutout hole. Additionally, since robustness against several background disturbances is also an essential criterion for practical engineering demands, investigations and experimental tests are carried out under random vibration interfering noise (RVIN) conditions. The accuracy of the predictions for unknown impact events on composite structures using the IVI technique is validated under various structural configurations and under changing environmental conditions. The evaluated errors all fall well within a satisfactory limit range. Furthermore, it is concluded that the IVI technique is applicable for impact monitoring, diagnosis and assessment of aerospace composite structures in complex practical engineering environments.

  4. Real-Time Impact Visualization Inspection of Aerospace Composite Structures with Distributed Sensors

    PubMed Central

    Si, Liang; Baier, Horst

    2015-01-01

    For the future design of smart aerospace structures, the development and application of a reliable, real-time and automatic monitoring and diagnostic technique is essential. Thus, with distributed sensor networks, a real-time automatic structural health monitoring (SHM) technique is designed and investigated to monitor and predict the locations and force magnitudes of unforeseen foreign impacts on composite structures and to estimate the structural state in real time when impacts occur. The proposed smart impact visualization inspection (IVI) technique mainly consists of five functional modules: the signal data preprocessing (SDP), the forward model generator (FMG), the impact positioning calculator (IPC), the inverse model operator (IMO) and the structural state estimator (SSE). To verify the practicality of the proposed IVI technique, various structural configurations are considered: a normal CFRP panel and another CFRP panel with “orange peel” surfaces and a cutout hole. Additionally, since robustness against several background disturbances is also an essential criterion for practical engineering demands, investigations and experimental tests are carried out under random vibration interfering noise (RVIN) conditions. The accuracy of the predictions for unknown impact events on composite structures using the IVI technique is validated under various structural configurations and under changing environmental conditions. The evaluated errors all fall well within a satisfactory limit range. Furthermore, it is concluded that the IVI technique is applicable for impact monitoring, diagnosis and assessment of aerospace composite structures in complex practical engineering environments. PMID:26184196

  5. Skin blotting: a noninvasive technique for evaluating physiological skin status.

    PubMed

    Minematsu, Takeo; Horii, Motoko; Oe, Makoto; Sugama, Junko; Mugita, Yuko; Huang, Lijuan; Nakagami, Gojiro; Sanada, Hiromi

    2014-06-01

    The skin performs important structural and physiological functions, and skin assessment represents an important step in identifying skin problems. Although noninvasive techniques for assessing skin status exist, no such techniques for monitoring its physiological status are available. This study aimed to develop a novel skin-assessment technique known as skin blotting, based on the leakage of secreted proteins from inside the skin following overhydration in mice. The applicability of this technique was further investigated in a clinical setting. Skin blotting involves 2 steps: collecting proteins by attaching a damp nitrocellulose membrane to the surface of the skin, and immunostaining the collected proteins. The authors implanted fluorescein-conjugated dextran (F-DEX)-containing agarose gels into mice and detected the tissue distribution of F-DEX under different blotting conditions. They also analyzed the correlations between inflammatory cytokine secretion and leakage following ultraviolet irradiation in mice and in relation to body mass index in humans. The F-DEX in mice was distributed in the deeper and shallower layers of skin and leaked through the transfollicular and transepidermal routes, respectively. Ultraviolet irradiation induced tumor necrosis factor secretion in the epidermis in mice, which was detected by skin blotting, whereas follicular tumor necrosis factor was associated with body mass index in obese human subjects. These results support the applicability of skin blotting for skin assessment. Skin blotting represents a noninvasive technique for assessing skin physiology and has potential as a predictive and diagnostic tool for skin disorders.

  6. Mapping cell surface adhesion by rotation tracking and adhesion footprinting

    NASA Astrophysics Data System (ADS)

    Li, Isaac T. S.; Ha, Taekjip; Chemla, Yann R.

    2017-03-01

    Rolling adhesion, in which cells passively roll along surfaces under shear flow, is a critical process involved in inflammatory responses and cancer metastasis. Surface adhesion properties regulated by adhesion receptors and membrane tethers are critical in understanding cell rolling behavior. Locally, adhesion molecules are distributed at the tips of membrane tethers. However, how functional adhesion properties are globally distributed on the individual cell’s surface is unknown. Here, we developed a label-free technique to determine the spatial distribution of adhesive properties on rolling cell surfaces. Using dark-field imaging and particle tracking, we extract the rotational motion of individual rolling cells. The rotational information allows us to construct an adhesion map along the contact circumference of a single cell. To complement this approach, we also developed a fluorescent adhesion footprint assay to record the molecular adhesion events from cell rolling. Applying the combination of the two methods to human promyelocytic leukemia cells, our results surprisingly reveal that adhesion is non-uniformly distributed in patches on the cell surfaces. Our label-free adhesion mapping methods are applicable to the variety of cell types that undergo rolling adhesion and provide a quantitative picture of cell surface adhesion at the functional and molecular level.

  7. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but it may not be a promising solution from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures of the ability of a network to withstand failure scenarios. To improve the resiliency of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing the resilience measure of the configuration as the secondary objective. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resiliency indices.
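
    A minimal sketch of the weighted, normalized single-objective formulation with SciPy's differential evolution; the two-pipe "network" and both metric definitions are toy placeholders, not hydraulic models.

```python
# Minimal sketch of the weighted, normalized single objective solved with
# SciPy's differential evolution. The two-pipe "network" and both metric
# definitions are toy placeholders, not hydraulic models.
import numpy as np
from scipy.optimize import differential_evolution

def cost(d):                 # capital cost grows with pipe diameter
    return np.sum(1.2 * d ** 1.5)

def resilience(d):           # surplus-capacity proxy; larger diameters help
    return np.sum(1.0 - np.exp(-d))

d_upper = np.array([1.0, 1.0])
c_max, r_max = cost(d_upper), resilience(d_upper)   # normalization constants

def objective(d, w=0.7):
    # Normalize both metrics to comparable scales, then minimize cost while
    # maximizing resilience (hence the minus sign).
    return w * cost(d) / c_max - (1 - w) * resilience(d) / r_max

res = differential_evolution(objective, bounds=[(0.1, 1.0)] * 2, seed=6)
print(res.x, res.fun)
```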

  8. Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Shah, Ashwin R.

    1997-01-01

    The properties of ceramic matrix composites (CMC's) are known to display a considerable amount of scatter due to variations in fiber/matrix properties, interphase properties, interphase bonding, amount of matrix voids, and many geometry- or fabrication-related parameters, such as ply thickness and ply orientation. This paper summarizes preliminary studies in which formal probabilistic descriptions of the material-behavior- and fabrication-related parameters were incorporated into micromechanics and macromechanics for CMC's. In this process two existing methodologies, namely CMC micromechanics and macromechanics analysis and a fast probability integration (FPI) technique, are synergistically coupled to obtain the probabilistic composite behavior or response. Preliminary results in the form of cumulative probability distributions and information on the probability sensitivities of the response to primitive variables for a unidirectional silicon carbide/reaction-bonded silicon nitride (SiC/RBSN) CMC are presented. The cumulative distribution functions are computed for composite moduli, thermal expansion coefficients, thermal conductivities, and longitudinal tensile strength at room temperature. The variations in the constituent properties that directly affect these composite properties are accounted for via assumed probabilistic distributions. Collectively, the results show that the present technique provides valuable information about the composite properties and sensitivity factors, which is useful to design or test engineers. Furthermore, the present methodology is computationally more efficient than a standard Monte Carlo simulation technique, and the agreement between the two solutions is excellent, as shown via select examples.
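
    A minimal sketch of the probabilistic propagation step, substituting plain Monte Carlo and a rule-of-mixtures longitudinal modulus for the FPI machinery and full micromechanics; the input distributions are illustrative, not the SiC/RBSN data.

```python
# Minimal sketch of propagating constituent scatter to a cumulative
# distribution function of a composite property, with plain Monte Carlo and a
# rule-of-mixtures modulus standing in for FPI and the full micromechanics.
# Input distributions are illustrative, not the SiC/RBSN data.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
E_f = rng.normal(400e9, 20e9, n)      # fiber modulus (Pa)
E_m = rng.normal(120e9, 10e9, n)      # matrix modulus (Pa)
V_f = rng.uniform(0.35, 0.45, n)      # fiber volume fraction

E_c = V_f * E_f + (1 - V_f) * E_m     # longitudinal rule of mixtures

E_sorted = np.sort(E_c)
cdf = np.arange(1, n + 1) / n         # empirical cumulative distribution
print(np.interp([0.05, 0.50, 0.95], cdf, E_sorted))   # 5/50/95th percentiles
```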

  9. Preparation of gold nanoparticles and determination of their particles size via different methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iqbal, Muhammad; Usanase, Gisele; Oulmi, Kafia

    Graphical abstract: Preparation of gold nanoparticles via the NaBH4 reduction method, and determination of their particle size, size distribution and morphology using different techniques. - Highlights: • Gold nanoparticles were synthesized by the NaBH4 reduction method. • An excess of reducing agent leads to a tendency toward aggregation. • The particle size, size distribution and morphology were investigated. • Particle size was determined both experimentally and theoretically. - Abstract: Gold nanoparticles have been used in various applications covering electronics, biosensors, in vivo biomedical imaging and in vitro biomedical diagnosis. As a general requirement, gold nanoparticles should be easy to prepare at large scale and to functionalize with chemical compounds or with specific ligands or biomolecules. In this study, gold nanoparticles were prepared using different concentrations of reducing agent (NaBH4) in various formulations, and the effect on particle size, size distribution and morphology was investigated. Moreover, special attention has been dedicated to the comparison of particle sizes measured by various techniques, such as light scattering, transmission electron microscopy, UV spectra using a standard curve, and particle size calculated using Mie theory and the UV spectrum of the gold nanoparticle dispersion. Particle sizes determined by the various techniques can be correlated for monodisperse particles, and an excess of reducing agent leads to an increase in particle size.

  10. Nuclear Ensemble Approach with Importance Sampling.

    PubMed

    Kossoski, Fábris; Barbatti, Mario

    2018-06-12

    We show that the importance sampling technique can effectively augment the range of problems where the nuclear ensemble approach can be applied. A sampling probability distribution function initially determines the collection of initial conditions for which calculations are performed, as usual. Then, results for a distinct target distribution are computed by introducing compensating importance sampling weights for each sampled point. This mapping between the two probability distributions can be performed whenever they are both explicitly constructed. Perhaps most notably, this procedure allows for the computation of temperature dependent observables. As a test case, we investigated the UV absorption spectra of phenol, which has been shown to have a marked temperature dependence. Application of the proposed technique to a range that covers 500 K provides results that converge to those obtained with conventional sampling. We further show that an overall improved rate of convergence is obtained when sampling is performed at intermediate temperatures. The comparison between calculated and the available measured cross sections is very satisfactory, as the main features of the spectra are correctly reproduced. As a second test case, one of Tully's classical models was revisited, and we show that the computation of dynamical observables also profits from the importance sampling technique. In summary, the strategy developed here can be employed to assess the role of temperature for any property calculated within the nuclear ensemble method, with the same computational cost as doing so for a single temperature.
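
    The reweighting at the heart of the method fits in a few lines. The hedged sketch below treats a single classical harmonic mode: initial conditions are drawn at one temperature and averages at another temperature are recovered with weights w = p_target(x)/p_sample(x). The mode parameters are arbitrary placeholders.

```python
# Minimal reweighting sketch for a single classical harmonic mode: sample at
# T_sample, recover averages at T_target with w = p_target(x)/p_sample(x).
# Mode frequency, mass and temperatures are arbitrary placeholders.
import numpy as np

k_B = 3.1668e-6                       # Boltzmann constant (hartree/K)
omega, mass = 5.0e-3, 1836.0          # toy mode frequency and mass (a.u.)

def sd(T):                            # classical position SD at temperature T
    return np.sqrt(k_B * T / mass) / omega

rng = np.random.default_rng(8)
T_sample, T_target = 400.0, 300.0
x = rng.normal(0.0, sd(T_sample), 100_000)

log_w = -0.5 * x ** 2 * (1.0 / sd(T_target) ** 2 - 1.0 / sd(T_sample) ** 2)
w = np.exp(log_w)
w /= w.sum()

obs = x ** 2                          # any per-sample observable
print((w * obs).sum(), sd(T_target) ** 2)   # reweighted estimate vs exact
```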

  11. Models for randomly distributed nanoscopic domains on spherical vesicles

    NASA Astrophysics Data System (ADS)

    Anghel, Vinicius N. P.; Bolmatov, Dima; Katsaras, John

    2018-06-01

    The existence of lipid domains in the plasma membrane of biological systems has proven controversial, primarily due to their nanoscopic size—a length scale difficult to interrogate with most commonly used experimental techniques. Scattering techniques have recently proven capable of studying nanoscopic lipid domains populating spherical vesicles. However, the development of analytical methods able of predicting and analyzing domain pair correlations from such experiments has not kept pace. Here, we developed models for the random distribution of monodisperse, circular nanoscopic domains averaged on the surface of a spherical vesicle. Specifically, the models take into account (i) intradomain correlations corresponding to form factors and interdomain correlations corresponding to pair distribution functions, and (ii) the analytical computation of interdomain correlations for cases of two and three domains on a spherical vesicle. In the case of more than three domains, these correlations are treated either by Monte Carlo simulations or by spherical analogs of the Ornstein-Zernike and Percus-Yevick (PY) equations. Importantly, the spherical analog of the PY equation works best in the case of nanoscopic size domains, a length scale that is mostly inaccessible by experimental approaches such as, for example, fluorescent techniques and optical microscopies. The analytical form factors and structure factors of nanoscopic domains populating a spherical vesicle provide a new and important framework for the quantitative analysis of experimental data from commonly studied phase-separated vesicles used in a wide range of biophysical studies.

  12. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such data sets requires the ability to efficiently solve and optimize the nonlinear time-domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature on Richards equation inversion explicitly calculates the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
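
    The matrix-free idea can be illustrated with SciPy's LinearOperator: supply only Jacobian-vector and transpose products to an iterative solver, never forming the normal matrix. The dense stand-in Jacobian below exists only to make the example self-contained; in the Richards-equation setting those products come from implicit sensitivity equations.

```python
# Hedged sketch of the matrix-free step: provide only (regularized) Hessian-
# vector products J^T J v + beta v to an iterative solver, never forming the
# matrices. The dense stand-in J exists only to keep the example
# self-contained; in the Richards setting the products come from implicit
# sensitivity equations.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(9)
n_data, n_param = 300, 1000
J = rng.normal(size=(n_data, n_param)) / np.sqrt(n_data)   # stand-in Jacobian
residual = rng.normal(size=n_data)
beta = 1e-2                                                 # regularization

def hess_vec(v):
    # (J^T J + beta I) v without ever storing J^T J.
    return J.T @ (J @ v) + beta * v

H = LinearOperator((n_param, n_param), matvec=hess_vec)
step, info = cg(H, J.T @ residual)          # Gauss-Newton model step
print(info)                                 # 0 -> converged
```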

  13. Optical Correlation Techniques In Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schatzel, K.; Schulz-DuBois, E. O.; Vehrenkamp, R.

    1981-05-01

    Three flow measurement techniques make use of fast digital correlators. (1) Most widely used is photon correlation velocimetry, using crossed laser beams and detecting Doppler shifted light scattered by small particles in the flow. Depending on the processing of the photon correlogram, this technique yields mean velocity, turbulence level, or even the detailed probability distribution of one velocity component. An improved data processing scheme is demonstrated on laminar vortex flow in a curved channel. (2) Rate correlation based upon threshold crossings of a high pass filtered laser Doppler signal can be used to obtain velocity correlation functions. The most powerful setup developed in our laboratory uses a phase locked loop type tracker and a multibit correlator to analyse time-dependent Taylor vortex flow. With two optical systems and trackers, cross-correlation functions reveal phase relations between different vortices. (3) Making use of refractive index fluctuations (e.g. in two-phase flows) instead of scattering particles, interferometry with bidirectional fringe counting and digital correlation and probability analysis constitute a new quantitative technique related to classical Schlieren methods. Measurements on a mixing flow of heated and cold air contribute new ideas to the theory of turbulent random phase screens.

  14. Optical correlation techniques in fluid dynamics

    NASA Astrophysics Data System (ADS)

    Schätzel, K.; Schulz-Dubois, E. O.; Vehrenkamp, R.

    1981-04-01

    Three flow measurement techniques make use of fast digital correlators. The most widely used is photon correlation velocimetry using crossed laser beams, and detecting Doppler shifted light scattered by small particles in the flow. Depending on the processing of the photon correlation output, this technique yields mean velocity, turbulence level, and even the detailed probability distribution of one velocity component. An improved data processing scheme is demonstrated on laminar vortex flow in a curved channel. In the second method, rate correlation based upon threshold crossings of a high pass filtered laser Doppler signal can be used to obtain velocity correlation functions. The most powerful set-up developed in our laboratory uses a phase locked loop type tracker and a multibit correlator to analyze time-dependent Taylor vortex flow. With two optical systems and trackers, cross-correlation functions reveal phase relations between different vortices. The last method makes use of refractive index fluctuations (e.g. in two-phase flows) instead of scattering particles. Interferometry with bidirectional counting, and digital correlation and probability analysis, constitutes a new quantitative technique related to classical Schlieren methods. Measurements on a mixing flow of heated and cold air contribute new ideas to the theory of turbulent random phase screens.

  15. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.

  16. Optical image encryption using triplet of functions

    NASA Astrophysics Data System (ADS)

    Yatish; Fatima, Areeba; Nishchal, Naveen Kumar

    2018-03-01

    We propose an image encryption scheme that brings into play a technique using a triplet of functions to manipulate complex-valued functions. Optical cryptosystems using this method offer an easier approach toward ciphertext generation that avoids the use of a holographic setup to record phase. The features of this method are shown in the context of double random phase encoding and phase-truncated Fourier transform-based cryptosystems using the gyrator transform. In the first step, the complex function is split into two matrices containing its real and imaginary parts, respectively. In the next step, these two matrices and a random distribution function are acted upon by one of the functions in the triplet. During decryption, the other two functions in the triplet help us retrieve the complex-valued function. The simulation results demonstrate the effectiveness of the proposed idea. To check the robustness of the proposed scheme, attack analyses were carried out.
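
    For context, the classic double random phase encoding scheme mentioned above can be sketched in a few lines of NumPy, here with ordinary Fourier transforms in place of the gyrator transform and without the triplet-of-functions splitting; the image and keys are random placeholders.

```python
# Sketch of classic double random phase encoding (DRPE) with ordinary Fourier
# transforms standing in for the gyrator transform, and without the
# triplet-of-functions splitting; image and keys are random placeholders.
import numpy as np

rng = np.random.default_rng(10)
img = rng.uniform(size=(64, 64))                            # plaintext image

phase1 = np.exp(2j * np.pi * rng.uniform(size=img.shape))   # input-plane key
phase2 = np.exp(2j * np.pi * rng.uniform(size=img.shape))   # Fourier-plane key

cipher = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)   # encryption

# Decryption conjugates the keys and inverts the transforms.
decoded = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase2)) * np.conj(phase1)
print(np.allclose(np.abs(decoded), img))                    # True
```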

  17. Towards full waveform ambient noise inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure are quantified using Hessian-vector products.

  18. Functional brain networks develop from a "local to distributed" organization.

    PubMed

    Fair, Damien A; Cohen, Alexander L; Power, Jonathan D; Dosenbach, Nico U F; Church, Jessica A; Miezin, Francis M; Schlaggar, Bradley L; Petersen, Steven E

    2009-05-01

    The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward 'segregation' (a general decrease in correlation strength) between regions close in anatomical space and 'integration' (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to large-scale graph theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more "distributed" architecture in young adults. We argue that this "local to distributed" developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing "small-world"-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.
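
    The graph metrics named here are straightforward to compute with networkx; the sketch below uses a toy small-world graph rather than real rs-fcMRI data, and the parameters are illustrative.

```python
# Sketch of the named graph metrics on a toy small-world graph with networkx;
# parameters are illustrative, not derived from rs-fcMRI data.
import networkx as nx
from networkx.algorithms import community

G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=11)

print("clustering coefficient:", nx.average_clustering(G))
print("average path length:", nx.average_shortest_path_length(G))

# Community detection by modularity optimization, as in the record.
communities = community.greedy_modularity_communities(G)
print("communities found:", len(communities))
```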

  19. Comparative studies of brain activation with MEG and functional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, J.S.; Aine, C.J.; Sanders, J.A.

    The past two years have witnessed the emergence of MRI as a functional imaging methodology. Initial demonstrations involved the injection of a paramagnetic contrast agent and required ultrafast echo planar imaging capability to adequately resolve the passage of the injected bolus. By measuring the local reduction in image intensity due to magnetic susceptibility, it was possible to calculate blood volume, which changes as a function of neural activation. Later developments have exploited endogenous contrast mechanisms to monitor changes in blood volume or in venous blood oxygen content. Recently, we and others have demonstrated that it is possible to make such measurements in a clinical imager, suggesting that the large installed base of such machines might be utilized for functional imaging. Although it is likely that functional MRI (fMRI) will subsume some of the clinical and basic neuroscience applications now touted for MEG, it is also clear that these techniques offer different, largely complementary, capabilities. At the very least, it is useful to compare and cross-validate the activation maps produced by these techniques. Such studies will be valuable as a check on results of neuromagnetic distributed current reconstructions and will allow better characterization of the relationship between neurophysiological activation and associated hemodynamic changes. A more exciting prospect is the development of analyses that combine information from the two modalities to produce a better description of underlying neural activity than is possible with either technique in isolation. In this paper we describe some results from initial comparative studies and outline several techniques that can be used to treat MEG and fMRI data within a unified computational framework.

  20. The Lunar Rock Size Frequency Distribution from Diviner Infrared Measurements

    NASA Astrophysics Data System (ADS)

    Elder, C. M.; Hayne, P. O.; Piqueux, S.; Bandfield, J.; Williams, J. P.; Ghent, R. R.; Paige, D. A.

    2016-12-01

    Knowledge of the rock size frequency distribution on a planetary body is important for understanding its geologic history and for selecting landing sites. The rock size frequency distribution can be estimated by counting rocks in high resolution images, but most bodies in the solar system have limited areas with adequate coverage. We propose an alternative method to derive and map rock size frequency distributions using multispectral thermal infrared data acquired at multiple times during the night. We demonstrate this new technique for the Moon using data from the Lunar Reconnaissance Orbiter (LRO) Diviner radiometer in conjunction with three dimensional thermal modeling, leveraging the differential cooling rates of different rock sizes. We assume an exponential rock size frequency distribution, which has been shown to yield a good fit to rock populations in various locations on the Moon, Mars, and Earth [2, 3] and solve for the best radiance fits as a function of local time and wavelength. This method presents several advantages: 1) unlike other thermally derived rock abundance techniques, it is sensitive to rocks smaller than the diurnal skin depth; 2) it does not result in apparent decrease in rock abundance at night; and 3) it can be validated using images taken at the lunar surface. This method yields both the fraction of the surface covered in rocks of all sizes and the exponential factor, which defines the rate of drop-off in the exponential function at large rock sizes. We will present maps of both these parameters for the Moon, and provide a geological interpretation. In particular, this method reveals rocks in the lunar highlands that are smaller than previous thermal methods could detect. [1] Bandfield J. L. et al. (2011) JGR, 116, E00H02. [2] Golombek and Rapp (1997) JGR, 102, E2, 4117-4129. [3] Cintala, M.J. and K.M. McBride (1995) NASA Technical Memorandum 104804.
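
    As a sketch of the assumed model, the cumulative fraction of area covered by rocks of diameter at least D is often written F(D) = k exp(-q D), with k the total rock cover and q the drop-off rate; the snippet below fits these two parameters to synthetic "measurements" with SciPy. The values are placeholders, not Diviner data.

```python
# Sketch of the exponential rock size-frequency model, F(D) = k * exp(-q * D):
# cumulative fractional area covered by rocks of diameter >= D. The synthetic
# "measurements" and fitted values are placeholders, not Diviner data.
import numpy as np
from scipy.optimize import curve_fit

def F(D, k, q):
    return k * np.exp(-q * D)

D = np.linspace(0.1, 2.0, 20)                   # rock diameter (m)
rng = np.random.default_rng(12)
data = F(D, 0.05, 2.2) * (1 + 0.05 * rng.normal(size=D.size))

(k_fit, q_fit), _ = curve_fit(F, D, data, p0=(0.1, 1.0))
print(k_fit, q_fit)                             # recovers ~0.05 and ~2.2
```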

  1. Establishing Functional Relationships between Abiotic Environment, Macrophyte Coverage, Resource Gradients and the Distribution of Mytilus trossulus in a Brackish Non-Tidal Environment.

    PubMed

    Kotta, Jonne; Oganjan, Katarina; Lauringson, Velda; Pärnoja, Merli; Kaasik, Ants; Rohtla, Liisa; Kotta, Ilmar; Orav-Kotta, Helen

    2015-01-01

    Benthic suspension feeding mussels are an important functional guild in coastal and estuarine ecosystems. To date we lack information on how various environmental gradients and biotic interactions separately and interactively shape the distribution patterns of mussels in non-tidal environments. In contrast to tidal environments, mussels inhabit solely the subtidal zone in non-tidal waterbodies, and the driving factors for mussel populations are therefore expected to differ from those in tidal areas. In the present study, we used boosted regression tree modelling (BRT), an ensemble method combining statistical techniques and machine learning, to explain the distribution and biomass of the suspension feeding mussel Mytilus trossulus in the non-tidal Baltic Sea. BRT models suggested that (1) distribution patterns of M. trossulus are largely driven by separate effects of direct environmental gradients and partly by interactive effects of resource gradients with direct environmental gradients. (2) Within its suitable habitat range, however, resource gradients had an important role in shaping the biomass distribution of M. trossulus. (3) Contrary to tidal areas, mussels were not competitively superior over macrophytes, with patterns indicating either facilitative interactions between mussels and macrophytes or co-variance due to a common stressor. To conclude, direct environmental gradients seem to define the distribution pattern of M. trossulus, and within the favourable distribution range, resource gradients in interaction with direct environmental gradients are expected to set the biomass level of mussels.

  2. Establishing Functional Relationships between Abiotic Environment, Macrophyte Coverage, Resource Gradients and the Distribution of Mytilus trossulus in a Brackish Non-Tidal Environment

    PubMed Central

    Kotta, Jonne; Oganjan, Katarina; Lauringson, Velda; Pärnoja, Merli; Kaasik, Ants; Rohtla, Liisa; Kotta, Ilmar; Orav-Kotta, Helen

    2015-01-01

    Benthic suspension feeding mussels are an important functional guild in coastal and estuarine ecosystems. To date we lack information on how various environmental gradients and biotic interactions separately and interactively shape the distribution patterns of mussels in non-tidal environments. In contrast to tidal environments, mussels inhabit solely the subtidal zone in non-tidal waterbodies, and the driving factors for mussel populations are therefore expected to differ from those in tidal areas. In the present study, we used boosted regression tree modelling (BRT), an ensemble method combining statistical techniques and machine learning, to explain the distribution and biomass of the suspension feeding mussel Mytilus trossulus in the non-tidal Baltic Sea. BRT models suggested that (1) distribution patterns of M. trossulus are largely driven by separate effects of direct environmental gradients and partly by interactive effects of resource gradients with direct environmental gradients. (2) Within its suitable habitat range, however, resource gradients had an important role in shaping the biomass distribution of M. trossulus. (3) Contrary to tidal areas, mussels were not competitively superior over macrophytes, with patterns indicating either facilitative interactions between mussels and macrophytes or co-variance due to a common stressor. To conclude, direct environmental gradients seem to define the distribution pattern of M. trossulus, and within the favourable distribution range, resource gradients in interaction with direct environmental gradients are expected to set the biomass level of mussels. PMID:26317668
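
    As a rough stand-in for the BRT workflow, the sketch below fits scikit-learn's gradient-boosted trees to synthetic "gradients" and ranks their influence by permutation importance; variable names, data and hyperparameters are all illustrative assumptions.

```python
# Rough stand-in for the BRT workflow using scikit-learn gradient boosting on
# synthetic "environmental gradients"; names, data and hyperparameters are
# illustrative assumptions, not the study's settings.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(13)
n = 1000
X = np.column_stack([rng.uniform(0, 30, n),     # "salinity"
                     rng.uniform(0, 40, n),     # "depth"
                     rng.uniform(0, 1, n)])     # "macrophyte coverage"
y = 5 * np.exp(-((X[:, 0] - 15) / 5) ** 2) + 2 * X[:, 2] + rng.normal(0, 0.3, n)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                max_depth=3, subsample=0.7, random_state=0)
brt.fit(X, y)

# Relative influence of each gradient, analogous to BRT variable importance.
imp = permutation_importance(brt, X, y, n_repeats=10, random_state=0)
print(imp.importances_mean)
```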

  3. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    PubMed Central

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless Sensor Networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. Object detection of similar events is accomplished based on the Bayes probabilities, and the results are sent to the sink node, minimizing the energy consumption. Next, the Polynomial Regression Function is applied to combine the target-object readings of similar events collected from different sensors. These are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701

  4. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    PubMed

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N

    2015-01-01

    Wireless Sensor Networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. Object detection of similar events is accomplished based on the Bayes probabilities, and the results are sent to the sink node, minimizing the energy consumption. Next, the Polynomial Regression Function is applied to combine the target-object readings of similar events collected from different sensors. These are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead.
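    As a concrete illustration of the two ingredients this abstract names — Bayes-rule region assignment followed by polynomial-regression aggregation — the sketch below is hypothetical: the Gaussian likelihoods, priors, and readings are invented stand-ins, not the authors' implementation.

```python
# Hypothetical sketch: Bayes-rule assignment of sensor readings to event
# regions, then polynomial-regression aggregation of each region's readings
# so only the coefficients travel to the sink.
import numpy as np

rng = np.random.default_rng(0)

# Toy readings from 12 sensors: one scalar event value per sensor.
readings = np.concatenate([rng.normal(20, 1, 6), rng.normal(80, 2, 6)])

# Assumed Gaussian likelihoods for two event classes and uniform priors.
means, stds, priors = np.array([20.0, 80.0]), np.array([1.5, 2.5]), np.array([0.5, 0.5])

def posterior(x):
    """Bayes rule: P(class | reading), normalized over the two classes."""
    like = np.exp(-0.5 * ((x - means) / stds) ** 2) / stds
    p = like * priors
    return p / p.sum()

regions = np.array([posterior(x).argmax() for x in readings])

# Within each region, fit a low-order polynomial to (sensor index, reading)
# and ship only the coefficients instead of every raw sample.
for r in np.unique(regions):
    idx = np.flatnonzero(regions == r)
    coeffs = np.polyfit(idx, readings[idx], deg=2)
    print(f"region {r}: sensors {idx.tolist()} -> coeffs {np.round(coeffs, 3)}")
```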

  5. Geoscience Applications of Synchrotron X-ray Computed Microtomography

    NASA Astrophysics Data System (ADS)

    Rivers, M. L.

    2009-05-01

    Computed microtomography is the extension to micron spatial resolution of the CAT scanning technique developed for medical imaging. Synchrotron sources are ideal for the method, since they provide a monochromatic, parallel beam with high intensity. High-energy storage rings such as the Advanced Photon Source at Argonne National Laboratory produce x-rays with high energy, high brilliance, and high coherence. All of these factors combine to produce an extremely powerful imaging tool for earth science research. Techniques that have been developed include:
    - Absorption and phase-contrast computed tomography with spatial resolution approaching one micron
    - Differential-contrast computed tomography, imaging above and below the absorption edge of a particular element
    - High-pressure tomography, imaging inside a pressure cell at pressures above 10 GPa
    - High-speed radiography, with 100 microsecond temporal resolution
    - Fluorescence tomography, imaging the 3-D distribution of elements present at ppm concentrations
    - Radiographic strain measurements during deformation at high confining pressure, combined with precise x-ray diffraction measurements to determine stress
    These techniques have been applied to important problems in earth and environmental sciences, including:
    - The 3-D distribution of aqueous and organic liquids in porous media, with applications in contaminated groundwater and petroleum recovery
    - The kinetics of bubble formation in magma chambers, which control explosive volcanism
    - Accurate crystal size distributions in volcanic systems, important for understanding the evolution of magma chambers
    - The equation of state of amorphous materials at high pressure, using both direct measurements of volume as a function of pressure and measurements of the change in the x-ray absorption coefficient as a function of pressure
    - The formation of frost flowers on Arctic sea ice, which is important in controlling the atmospheric chemistry of mercury
    - The distribution of cracks in rocks at potential nuclear waste repositories
    - The location and chemical speciation of toxic elements such as arsenic and nickel in soils and plant tissues at contaminated Superfund sites
    - The strength of earth materials under the pressure and temperature conditions of the Earth's mantle, providing insights into plate tectonics and the generation of earthquakes.

  6. The Density Functional Theory of Flies: Predicting distributions of interacting active organisms

    NASA Astrophysics Data System (ADS)

    Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas

    On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent-based models can predict transitions in emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc, and the simulations are computationally expensive, making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, is sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from only observations of the crowd itself, without any knowledge of the detailed interactions, and thus it can make predictions about the resulting distributions of these flies in arbitrary environments, in real time. This research was supported in part by ARO W911NF-16-1-0433.
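    One hedged way to make the density-based idea concrete is Boltzmann inversion: in the dilute limit, a location-preference function can be read off from the observed occupancy density. The sketch below uses synthetic positions and illustrates only that single step, not the authors' full two-function extraction.

```python
# Illustrative only: in the dilute limit a preference function can be
# estimated from an observed occupancy density by Boltzmann inversion,
# V(x) = -log rho(x) up to an additive constant. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
positions = rng.normal(0.0, 1.0, size=5000)      # observed fly positions (toy)

counts, edges = np.histogram(positions, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
preference = -np.log(counts[mask])               # inferred V(x), arbitrary zero

for c, v in list(zip(centers[mask], preference))[:5]:
    print(f"x = {c:+.2f}: V = {v:.2f}")
```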

  7. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have proven to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
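    Of the techniques surveyed above, parallel tempering is the most compact to sketch. The toy example below samples a bimodal one-dimensional pdf with replica exchange; the target, temperature ladder, and step sizes are arbitrary illustrative choices.

```python
# Minimal parallel-tempering (replica-exchange) sketch for a bimodal 1D pdf:
# independent Metropolis walks at several "temperatures" with occasional
# swaps of neighboring replicas accepted by the Metropolis criterion.
import numpy as np

rng = np.random.default_rng(2)

def log_p(x):
    # Bimodal target: mixture of two well-separated unit Gaussians.
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

betas = np.array([1.0, 0.5, 0.2, 0.05])    # inverse "temperatures"
x = np.zeros(len(betas))
samples = []

for step in range(20000):
    # Within-replica Metropolis updates (vectorized over replicas).
    prop = x + rng.normal(0, 1, size=len(betas))
    accept = np.log(rng.random(len(betas))) < betas * (log_p(prop) - log_p(x))
    x[accept] = prop[accept]
    # Occasionally attempt to swap a random pair of neighboring replicas.
    if step % 10 == 0:
        i = rng.integers(0, len(betas) - 1)
        dlog = (betas[i] - betas[i + 1]) * (log_p(x[i + 1]) - log_p(x[i]))
        if np.log(rng.random()) < dlog:
            x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                    # keep only the beta = 1 chain

print("fraction of samples in right-hand mode:", np.mean(np.array(samples) > 0))
```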

  8. Langmuir probe analysis in electronegative plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bredin, Jerome, E-mail: jerome.bredin@lpp.polytechnique.fr; Chabert, Pascal; Aanesland, Ane

    2014-12-15

    This paper compares two methods to analyze Langmuir probe data obtained in electronegative plasmas. The techniques are developed to allow investigations in plasmas where the electronegativity α0 = n−/ne (the ratio between the negative ion and electron densities) varies strongly. The first technique uses an analytical model to express the Langmuir probe current-voltage (I-V) characteristic and its second derivative as functions of the electron and ion densities (ne, n+, n−), temperatures (Te, T+, T−), and masses (me, m+, m−). The analytical curves are fitted to the experimental data by adjusting these variables and parameters. To reduce the number of fitted parameters, the ion masses are assumed constant within the source volume, and quasi-neutrality is assumed everywhere. In this theory, Maxwellian distributions are assumed for all charged species. We show that this data analysis can predict the various plasma parameters within 5-10%, including the ion temperatures when α0 > 100. However, the method is tedious, time consuming, and requires a precise measurement of the energy distribution function. A second technique is therefore developed for easier access to the electron and ion densities, but it does not give access to the ion temperatures. Here, only the measured I-V characteristic is needed. The electron density, temperature, and ion saturation current for positive ions are determined by classical probe techniques. The electronegativity α0 and the ion densities are deduced via an iterative method, since these variables are coupled through the modified Bohm velocity. For both techniques, a Child-law sheath model for cylindrical probes has been developed and is presented to emphasize the importance of this model for small cylindrical Langmuir probes.
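    The iterative step of the second technique can be sketched as a fixed-point loop: the electronegativity and the positive-ion density are coupled through the modified Bohm speed. The Bohm-speed expression below is a textbook form for electronegative plasmas, and all numerical inputs are invented placeholders, not values from the paper.

```python
# Hedged sketch of the iterative coupling: guess alpha0, compute the modified
# Bohm speed, infer n+ from the ion saturation current, update alpha0 from
# quasi-neutrality, repeat to convergence. All numbers are illustrative.
import numpy as np

e = 1.602e-19
Te_eV, Tneg_eV = 3.0, 0.1        # electron and negative-ion temperatures (assumed)
M = 131 * 1.66e-27               # xenon positive-ion mass, kg
A_probe = 1e-6                   # probe collection area, m^2 (assumed)
ne = 1e16                        # electron density from the electron current (assumed)
I_sat = 2e-6                     # measured positive-ion saturation current, A (assumed)

gamma = Te_eV / Tneg_eV
alpha = 0.0
for _ in range(100):
    # Textbook modified Bohm speed for electronegative plasmas:
    # u_B = sqrt(e*Te/M * (1 + alpha) / (1 + alpha*gamma)).
    uB = np.sqrt(e * Te_eV / M * (1 + alpha) / (1 + alpha * gamma))
    n_pos = I_sat / (0.6 * e * uB * A_probe)   # thin-sheath ion flux estimate
    alpha_new = max(n_pos - ne, 0.0) / ne      # quasi-neutrality: n- = n+ - ne
    if abs(alpha_new - alpha) < 1e-6:
        break
    alpha = alpha_new

print(f"alpha0 ~ {alpha:.2f}, n+ ~ {n_pos:.2e} m^-3")
```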

  9. Time-resolved ion velocity distribution in a cylindrical Hall thruster: heterodyne-based experiment and modeling.

    PubMed

    Diallo, A; Keller, S; Shi, Y; Raitses, Y; Mazouffre, S

    2015-03-01

    Time-resolved variations of the ion velocity distribution function (IVDF) are measured in the cylindrical Hall thruster using a novel heterodyne method based on the laser-induced fluorescence technique. This method consists of inducing modulations of the discharge plasma at frequencies that enable coupling to the breathing mode. Using a harmonic decomposition of the IVDF, each harmonic component can be extracted, from which the time-resolved IVDF is reconstructed. In addition, simulations performed assuming a sloshing of the IVDF during the modulation show agreement between the simulated and measured first-order perturbation of the IVDF.

  10. Optical bending sensor using distributed feedback solid state dye lasers on optical fiber.

    PubMed

    Kubota, Hiroyuki; Oomi, Soichiro; Yoshioka, Hiroaki; Watanabe, Hirofumi; Oki, Yuji

    2012-07-02

    A novel type of optical fiber sensor is proposed and demonstrated. A print-like fabrication technique produces multiple distributed-feedback solid-state dye lasers on a polymeric optical fiber (POF) with tapered coupling. This multi-active-sidecore structure is easily fabricated and provides multiple functions. Mounting the lasers at the same point of a multimode POF demonstrated a bending-radius sensitivity of 20 m without any supports. Two-axis directional sensing without cross talk was also confirmed. A more complicated mounting arrangement was also demonstrated on a twisted POF. The temperature behavior of the sensor was studied as well, and elimination of the temperature influence was experimentally attained.

  11. A Nakanishi-based model illustrating the covariant extension of the pion GPD overlap representation and its ambiguities

    NASA Astrophysics Data System (ADS)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2018-05-01

    A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.

  12. Optical processing for future computer networks

    NASA Technical Reports Server (NTRS)

    Husain, A.; Haugen, P. R.; Hutcheson, L. D.; Warrior, J.; Murray, N.; Beatty, M.

    1986-01-01

    In the development of future data management systems, such as the NASA Space Station, a major problem is the design and implementation of a high-performance communication network which is self-correcting and self-repairing, flexible, and evolvable. To achieve the goal of designing such a network, it will be essential to incorporate distributed adaptive network control techniques. The present paper outlines the functional and communication network requirements for the Space Station data management system. Attention is given to the mathematical representation of the operations carried out to provide the required functionality at each layer of the communication protocol model. The possible implementation of specific communication functions in optics is also considered.

  13. Direct observation of CD4 T cell morphologies and their cross-sectional traction force derivation on quartz nanopillar substrates using focused ion beam technique

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Joo; Kim, Gil-Sung; Hyung, Jung-Hwan; Lee, Won-Yong; Hong, Chang-Hee; Lee, Sang-Kwon

    2013-07-01

    We report direct observations of primary mouse CD4 T cell morphologies, e.g., cell adhesion and cell spreading, obtained by culturing CD4 T cells for a short incubation period (e.g., 20 min) on streptavidin-functionalized quartz nanopillar arrays (QNPA) and imaging them with a high-content scanning electron microscopy method. Furthermore, we demonstrate for the first time the cross-sectional cell traction force distribution of surface-bound CD4 T cells on QNPA substrates, obtained by culturing the cells on top of the QNPA and analyzing the deflection of the underlying nanopillars via a focused ion beam-assisted technique.

  14. Distributed Humidity Sensing in PMMA Optical Fibers at 500 nm and 650 nm Wavelengths.

    PubMed

    Liehr, Sascha; Breithaupt, Mathias; Krebber, Katerina

    2017-03-31

    Distributed measurement of humidity is a sought-after capability for various fields of application, especially in the civil engineering and structural health monitoring sectors. This article presents a method for distributed humidity sensing along polymethyl methacrylate (PMMA) polymer optical fibers (POFs) by analyzing wavelength-dependent Rayleigh backscattering and attenuation characteristics at 500 nm and 650 nm wavelengths. Spatially resolved humidity sensing is obtained from backscatter traces of a dual-wavelength optical time domain reflectometer (OTDR). Backscatter dependence, attenuation dependence as well as the fiber length change are characterized as functions of relative humidity. Cross-sensitivity effects are discussed and quantified. The evaluation of the humidity-dependent backscatter effects at the two wavelength measurements allows for distributed and unambiguous measurement of relative humidity. The technique can be readily employed with low-cost standard polymer optical fibers and commercial OTDR devices.

  15. A mesoscopic simulation on distributions of red blood cells in a bifurcating channel

    NASA Astrophysics Data System (ADS)

    Inoue, Yasuhiro; Takagi, Shu; Matsumoto, Yoichiro

    2004-11-01

    Transport of red blood cells (RBCs) or particles in bifurcated channels has been attracting renewed interest since the advent of MEMS concepts for sorting, analyzing, and removing cells or particles from a sample medium. In this talk, we present results on the transport of RBCs in a bifurcating channel studied using a mesoscale simulation technique for immiscible droplets, in which RBCs are modeled as immiscible droplets. The distribution of RBCs is represented by the fractional RBC flux into the two daughter branches as a function of the volumetric flow ratio between the daughters. The data obtained in our simulations are examined against a theoretical prediction in which we assume an exponential distribution for the positions of RBCs in the mother channel. The theoretical predictions show good agreement with the simulation results. A non-uniform distribution of RBCs in the mother channel leads to a disproportionate separation of RBC flux at a bifurcation.
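    The theoretical prediction mentioned above admits a short worked form. Assuming an exponential lateral distribution of RBCs in the mother channel and a plug-flow split at the dividing streamline, the fractional RBC flux into a daughter follows from the exponential CDF; the parameters below are illustrative.

```python
# Worked toy version: a daughter taking a fraction psi of the volumetric flow
# collects all cells with lateral position y < psi * W under plug flow, and
# the RBC positions are assumed exponentially distributed across the channel.
import numpy as np

W, lam = 1.0, 0.25          # channel width and decay length (illustrative)
norm = 1.0 - np.exp(-W / lam)

def rbc_flux_fraction(psi):
    """Fraction of RBC flux entering the daughter taking flow fraction psi."""
    y_star = psi * W        # dividing-streamline position under plug flow
    return (1.0 - np.exp(-y_star / lam)) / norm

for psi in (0.2, 0.5, 0.8):
    print(f"flow fraction {psi:.1f} -> RBC flux fraction {rbc_flux_fraction(psi):.2f}")
```

    Note how a small flow fraction already captures a disproportionately large share of the cell flux when the distribution is skewed toward one wall.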

  16. Single-molecule diffusometry reveals the nucleotide-dependent oligomerization pathways of Nicotiana tabacum Rubisco activase

    NASA Astrophysics Data System (ADS)

    Wang, Quan; Serban, Andrew J.; Wachter, Rebekka M.; Moerner, W. E.

    2018-03-01

    Oligomerization plays an important role in the function of many proteins, but a quantitative picture of the oligomer distribution has been difficult to obtain using existing techniques. Here we describe a method that combines sub-stoichiometric labeling and recently developed single-molecule diffusometry to measure the size distribution of oligomers under equilibrium conditions in solution, one molecule at a time. We use this technique to characterize the oligomerization behavior of Nicotiana tabacum (Nt) Rubisco activase (Nt-Rca), a chaperone-like AAA-plus ATPase essential in regulating carbon fixation during photosynthesis. We directly observed monomers, dimers, and a tetramer/hexamer mixture and extracted their fractional abundance as a function of protein concentration. We show that the oligomerization pathway of Nt-Rca is nucleotide dependent: ATPγS binding strongly promotes tetramer/hexamer formation from dimers and results in a preferred tetramer/hexamer population for concentrations in the 1-10 μM range. Furthermore, we directly observed dynamic assembly and disassembly processes of single complexes in real time and from there estimated the rate of subunit exchange to be ~0.1 s^-1 with ATPγS. On the other hand, ADP binding destabilizes Rca complexes by enhancing the rate of subunit exchange by >2-fold. These observations provide a quantitative starting point to elucidate the structure-function relations of Nt-Rca complexes. We envision the method to fill a critical gap in defining and quantifying protein assembly pathways in the small-oligomer regime.

  17. Single-molecule diffusometry reveals the nucleotide-dependent oligomerization pathways of Nicotiana tabacum Rubisco activase.

    PubMed

    Wang, Quan; Serban, Andrew J; Wachter, Rebekka M; Moerner, W E

    2018-03-28

    Oligomerization plays an important role in the function of many proteins, but a quantitative picture of the oligomer distribution has been difficult to obtain using existing techniques. Here we describe a method that combines sub-stoichiometric labeling and recently developed single-molecule diffusometry to measure the size distribution of oligomers under equilibrium conditions in solution, one molecule at a time. We use this technique to characterize the oligomerization behavior of Nicotiana tabacum (Nt) Rubisco activase (Nt-Rca), a chaperone-like AAA-plus ATPase essential in regulating carbon fixation during photosynthesis. We directly observed monomers, dimers, and a tetramer/hexamer mixture and extracted their fractional abundance as a function of protein concentration. We show that the oligomerization pathway of Nt-Rca is nucleotide dependent: ATPγS binding strongly promotes tetramer/hexamer formation from dimers and results in a preferred tetramer/hexamer population for concentrations in the 1-10 μM range. Furthermore, we directly observed dynamic assembly and disassembly processes of single complexes in real time and from there estimated the rate of subunit exchange to be ~0.1 s^-1 with ATPγS. On the other hand, ADP binding destabilizes Rca complexes by enhancing the rate of subunit exchange by >2-fold. These observations provide a quantitative starting point to elucidate the structure-function relations of Nt-Rca complexes. We envision the method to fill a critical gap in defining and quantifying protein assembly pathways in the small-oligomer regime.

  18. Application Of The Wigner-Ville Distribution To The Identification Of Machine Noise

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; O'Shea, Peter

    1988-02-01

    The theory of signal detection using the Wigner-Ville Distribution (WVD) and the Cross Wigner-Ville Distribution (XWVD) is reviewed and applied to the signaturing, detection, and identification of some specific machine sounds - the individual cylinder firings of a marine engine. For this task, a four-step procedure has been devised. The autocorrelation function (ACF) is first employed to ascertain the number of engine cylinders and the firing rate of the engine. Cross-correlation techniques are then used to detect the occurrence of cylinder firing events. This is followed by the use of WVD- and XWVD-based analyses to produce high-resolution time-frequency signatures, and finally 2D correlations are employed for identification of the cylinders. The proposed methodology is applied to real data.
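    For readers unfamiliar with the WVD, a compact discrete version is sketched below: for each time sample, the instantaneous autocorrelation is formed and Fourier-transformed over the lag. This is a bare-bones illustration (no analytic-signal or smoothing refinements), with an arbitrary chirp as the test signal.

```python
# Compact discrete Wigner-Ville distribution, for illustration only:
# W(n, k) = FFT over lag m of x[n+m] * conj(x[n-m]).
import numpy as np

def wigner_ville(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)                 # largest usable lag at time n
        kernel = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[:, n] = np.real(np.fft.fft(kernel))    # frequency (rows) x time (cols)
    return W

t = np.arange(256)
chirp = np.exp(1j * 2 * np.pi * (0.05 + 0.0005 * t) * t)  # linear chirp test signal
tfr = wigner_ville(chirp)
print(tfr.shape, float(tfr.max()))
```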

  19. Investigating the Luminous Environment of SDSS Data Release 4 Mg II Absorption Line Systems

    NASA Astrophysics Data System (ADS)

    Caler, Michelle A.; Ravi, Sheth K.

    2018-01-01

    We investigate the luminous environment within a few hundred kiloparsecs of 3760 Mg II absorption line systems. These systems lie along 3760 lines of sight to Sloan Digital Sky Survey (SDSS) Data Release 4 QSOs, have redshifts in the range 0.37 ≤ z ≤ 0.82, and have rest equivalent widths greater than 0.18 Å. We use the SDSS Catalog Archive Server to identify galaxies projected within 3 arcminutes of the QSO's position, and a background subtraction technique to estimate the absolute magnitude distribution and luminosity function of galaxies physically associated with these Mg II absorption line systems. The Mg II absorption system sample is split into two parts at a rest equivalent width of 0.8 Å, and the resulting absolute magnitude distributions and luminosity functions are compared on scales ranging from 50 h^-1 kpc to 880 h^-1 kpc. We find that, on scales of 100 h^-1 kpc and smaller, the two distributions differ: the absolute magnitude distribution of galaxies associated with systems of rest-frame equivalent width ≥ 0.8 Å (2750 lines of sight) seems to be approximated by that of elliptical-Sa type galaxies, whereas that of galaxies associated with systems of rest-frame equivalent width < 0.8 Å (1010 lines of sight) seems to be approximated by that of Sa-Sbc type galaxies. However, on scales greater than 200 h^-1 kpc, both distributions are broadly consistent with that of elliptical-Sa type galaxies. We note that, in a broader context, these results represent an estimate of the bright end of the galaxy luminosity function at a median redshift of z ~ 0.65.

  20. On residual stresses and homeostasis: an elastic theory of functional adaptation in living matter.

    PubMed

    Ciarletta, P; Destrade, M; Gower, A L

    2016-04-26

    Living matter can functionally adapt to external physical factors by developing internal tensions, easily revealed by cutting experiments. Nonetheless, residual stresses intrinsically have a complex spatial distribution, and destructive techniques cannot be used to identify a natural stress-free configuration. This work proposes a novel elastic theory of pre-stressed materials. Imposing physical compatibility and symmetry arguments, we define a new class of free energies explicitly depending on the internal stresses. This theory is finally applied to the study of arterial remodelling, proving its potential for the non-destructive determination of the residual tensions within biological materials.

  1. Calibration of a universal indicated turbulence system

    NASA Technical Reports Server (NTRS)

    Chapin, W. G.

    1977-01-01

    Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.

  2. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression on orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for plasmas with the most general, non-equilibrium electron energy distributions. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.
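    The same pipeline translates naturally to a modern stack. The hedged sketch below smooths a synthetic I-V characteristic with an orthogonal-polynomial (Chebyshev) regression, differentiates twice, and applies the standard Druyvesteyn relation; the probe area and plasma values are placeholders, not data from the paper.

```python
# Sketch: orthogonal-polynomial smoothing of an I-V curve, then the
# Druyvesteyn relation g(V) = (2*m_e / (e^2 * A)) * sqrt(2*e*V/m_e) * d2I/dV2.
import numpy as np

e, m_e = 1.602e-19, 9.109e-31
V = np.linspace(0.0, 12.0, 200)                  # retarding potential, volts
Te = 2.0                                         # toy Maxwellian, Te in eV
I = 1e-3 * np.exp(-V / Te) + 1e-6 * np.random.default_rng(3).normal(size=V.size)

# Least-squares Chebyshev fit plays the role of the orthogonal-polynomial
# regression used for noise filtering; differentiate the fit twice.
fit = np.polynomial.chebyshev.Chebyshev.fit(V, I, deg=9)
d2I = fit.deriv(2)(V)

A = 1e-6                                         # probe area, m^2 (assumed)
g = (2.0 * m_e / (e**2 * A)) * np.sqrt(2.0 * e * V / m_e) * d2I
print("EEDF sampled at", g.size, "energies; peak index:", int(np.argmax(np.abs(g))))
```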

  3. Direct numerical solution of the Ornstein-Zernike integral equation and spatial distribution of water around hydrophobic molecules

    NASA Astrophysics Data System (ADS)

    Ikeguchi, Mitsunori; Doi, Junta

    1995-09-01

    The Ornstein-Zernike integral equation (OZ equation) has been used to evaluate the distribution function of solvents around solutes, but its numerical solution is difficult for molecules with a complicated shape. This paper proposes a numerical method to solve the OZ equation directly by introducing a 3D lattice. The method employs none of the approximations that the reference interaction site model (RISM) equation employs, and it enables one to obtain the spatial distribution of spherical solvents around solutes of arbitrary shape. Numerical accuracy is sufficient when the grid spacing is less than 0.5 Å for solvent water. The spatial water distribution around a propane molecule is demonstrated as an example of a nonspherical hydrophobic molecule using iso-value surfaces. The water model proposed by Pratt and Chandler is used. The distribution agrees with molecular dynamics simulation and is enhanced around molecular concavities. The spatial distribution of water around 5α-cholest-2-ene (C27H46) is visualized using computer graphics techniques, and a similar trend is observed.
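    In the same spirit, a direct grid solution of the OZ equation can be sketched as a damped Picard iteration with FFT-based convolution, here with the hypernetted-chain closure and a Lennard-Jones fluid as stand-ins. The state point and damping factor are illustrative and may need tuning; this is not the paper's water calculation.

```python
# Sketch: Picard iteration of the OZ relation on a 3D grid. In k-space,
# gamma = h - c satisfies gamma_k = rho * c_k^2 / (1 - rho * c_k); the HNC
# closure then gives c = exp(-beta*u) * exp(gamma) - gamma - 1 in real space.
import numpy as np

n, L, rho, beta = 64, 12.8, 0.5, 1.2
dx = L / n
r1 = np.fft.fftfreq(n, d=1.0 / L)                    # signed grid coordinates
X, Y, Z = np.meshgrid(r1, r1, r1, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
r[0, 0, 0] = dx / 2                                   # avoid division by zero

u = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)          # LJ potential, sigma = eps = 1
mayer = np.exp(-beta * np.clip(u, None, 50.0))        # clipped to avoid overflow

c = mayer - 1.0                                       # initial guess: f-bond
for it in range(200):
    ck = np.fft.fftn(c) * dx**3                       # continuum FT convention
    gamma_k = rho * ck * ck / (1.0 - rho * ck)
    gamma = np.real(np.fft.ifftn(gamma_k)) / dx**3
    c_new = mayer * np.exp(gamma) - gamma - 1.0       # HNC closure
    if np.max(np.abs(c_new - c)) < 1e-6:
        break
    c = 0.2 * c_new + 0.8 * c                         # damped Picard step

g_of_r = mayer * np.exp(gamma)                        # pair distribution g(r)
print("iterations:", it + 1, " g one grid step out:", float(g_of_r[1, 0, 0]))
```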

  4. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field, and it has also recently been used in biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method that is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction step. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
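    A minimal version of the proposed idea can be sketched as follows: normalize the signal, frame it, and use the per-frame histogram estimate of the amplitude PDF directly as the feature vector. The signal, frame sizes, and bin count below are arbitrary choices, not the paper's settings.

```python
# Sketch: per-frame amplitude-PDF features for a (synthetic) speech signal.
import numpy as np

rng = np.random.default_rng(4)
fs = 16000
signal = rng.normal(0, 0.3, fs) * np.hanning(fs)      # 1 s of toy "speech"
signal = signal / (np.max(np.abs(signal)) + 1e-12)    # amplitude normalization

frame, hop, bins = 400, 160, 32                       # 25 ms frames, 10 ms hop
edges = np.linspace(-1.0, 1.0, bins + 1)
features = []
for start in range(0, len(signal) - frame, hop):
    pdf, _ = np.histogram(signal[start:start + frame], bins=edges, density=True)
    features.append(pdf)                              # the PDF itself is the feature

features = np.array(features)
print("feature matrix:", features.shape)              # (n_frames, bins)
```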

  5. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
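    The workflow described in this record can be sketched roughly as below; the parser semantics (valid JSON), placement policy, and file layout are invented for illustration and are not the patented implementation.

```python
# Rough sketch: the application supplies a parser; the storage layer keeps
# only files the parser accepts and stores extracted metadata beside them.
import json, os, tempfile

def parser(payload: bytes):
    """Return (keep, metadata); here the 'semantic requirement' is valid JSON."""
    try:
        doc = json.loads(payload)
    except ValueError:
        return False, None
    if not isinstance(doc, dict):
        return False, None
    return True, {"keys": sorted(doc)[:5], "bytes": len(payload)}

def store(files, storage_nodes, parse):
    placed = []
    for i, (name, payload) in enumerate(files):
        keep, meta = parse(payload)
        if not keep:
            continue                                   # parser rejected the file
        node = storage_nodes[i % len(storage_nodes)]   # toy round-robin placement
        with open(os.path.join(node, name), "wb") as f:
            f.write(payload)
        with open(os.path.join(node, name + ".meta"), "w") as f:
            json.dump(meta, f)                         # searchable metadata
        placed.append((name, node))
    return placed

nodes = [tempfile.mkdtemp(prefix=f"node{i}_") for i in range(3)]
files = [("a.json", b'{"x": 1}'), ("b.bin", b"\x00\x01"), ("c.json", b'{"y": 2}')]
print(store(files, nodes, parser))
```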

  6. Reflectance of topologically disordered photonic-crystal films

    NASA Astrophysics Data System (ADS)

    Vigneron, Jean-Pol; Lousse, Virginie M.; Biro, Laszlo P.; Vertesy, Zofia; Balint, Zolt

    2005-04-01

    Periodicity implies the creation of discretely diffracted beams, while various departures from periodicity lead to broadened scattering angles. This effect is investigated for disturbed lattices exhibiting randomly varying periods. In the Born approximation, the diffuse reflection is shown to be related to a pair correlation function constructed from the distribution of the film's scattering power. The technique is first applied to a natural photonic crystal found on the ventral side of the wings of the butterfly Cyanophrys remus, where scanning electron microscopy reveals the formation of polycrystalline photonic structures. Second, the disorder in the distribution of the cross-ribs on the scales of another butterfly, Lycaena virgaureae, is investigated. The irregular arrangement of scatterers found in the chitin structure of this insect produces light reflection in the long-wavelength part of the visible range, with quite unusual broad directionality. The use of the pair correlation function allows us to propose estimates of the diffusive spreading in these very different systems.
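    The pair correlation estimate underlying this analysis can be illustrated on synthetic data: a jittered 2D lattice stands in for the measured distribution of scattering power, and g(r) is computed with periodic minimum-image distances. All parameters are arbitrary.

```python
# Sketch: pair correlation function g(r) of a jittered 2D lattice.
import numpy as np

rng = np.random.default_rng(5)
L, n_side, jitter = 20.0, 20, 0.15
grid = np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)), -1).reshape(-1, 2)
pts = (grid + 0.5) * (L / n_side) + rng.normal(0, jitter, grid.shape)

d = pts[:, None, :] - pts[None, :, :]
d -= L * np.round(d / L)                           # minimum-image convention
r = np.sqrt((d**2).sum(-1))[np.triu_indices(len(pts), k=1)]

bins = np.linspace(0.01, L / 2, 80)
hist, edges = np.histogram(r, bins=bins)
rc = 0.5 * (edges[:-1] + edges[1:])
rho = len(pts) / L**2
shell = np.pi * (edges[1:]**2 - edges[:-1]**2)     # ideal-gas normalization
g = hist / (0.5 * len(pts) * rho * shell)
print("g(r) peaks near the lattice period:", float(rc[np.argmax(g)]))
```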

  7. What can the occult do for you?

    NASA Astrophysics Data System (ADS)

    Holwerda, B. W.; Keel, W. C.

    2017-03-01

    Interstellar dust is still a dominant uncertainty in Astronomy, limiting precision in e.g., cosmological distance estimates and models of how light is re-processed within a galaxy. When a foreground galaxy serendipitously overlaps a more distant one, the latter backlights the dusty structures in the nearer foreground galaxy. Such an overlapping or occulting galaxy pair can be used to measure the distribution of dust in the closest galaxy with great accuracy. The STARSMOG program uses Hubble to map the distribution of dust in foreground galaxies in fine (<100 pc) detail. Integral Field Unit (IFU) observations will map the effective extinction curve, disentangling the role of fine-scale geometry and grain composition on the path of light through a galaxy. The overlapping galaxy technique promises to deliver a clear understanding of the dust in galaxies: geometry, a probability function of dimming as a function of galaxy mass and radius, and its dependence on wavelength.

  8. Preliminary measurements of kinetic dust temperature using stereoscopic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Williams, Jeremiah; Thomas, Edward

    2004-11-01

    A dusty (or complex) plasma is a four-component system composed of ions, electrons, neutral particles and charged microparticles. The presence of the microparticle (i.e., dust) component alters the plasma environment, giving rise to a wide variety of new plasma phenomena. Recently, the Auburn Plasma Sciences Laboratory (PSL) acquired and installed a stereoscopic PIV (stereo-PIV) diagnostic tool for dusty plasma investigations [Thomas et al., Phys. Plasmas, 11, L37 (2004)]. This presentation discusses the use of the stereo-PIV technique for determining the velocity-space distribution function of the microparticle component of a dc glow discharge dusty plasma. These distribution functions are then used to make preliminary estimates of the kinetic temperature of the dust component. The data are compared to a simple energy-balance model that relates the dust temperature to the electric field and neutral pressure.
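    Turning PIV velocity vectors into a kinetic temperature is a one-line estimate per component, T_k = m⟨(v − ⟨v⟩)²⟩/k_B. The sketch below uses synthetic velocities and an assumed melamine dust grain; none of the numbers are from the experiment.

```python
# Sketch: kinetic dust temperature from measured velocity vectors.
import numpy as np

kB = 1.381e-23
m_dust = 4.0 / 3.0 * np.pi * (1.5e-6) ** 3 * 1510.0     # 3 um melamine sphere, kg

rng = np.random.default_rng(6)
v = rng.normal([0.0, 1e-3, 0.0], 2e-3, size=(5000, 3))  # m/s, stand-in for PIV data

T_k = m_dust * v.var(axis=0) / kB                       # kelvin, per component
print("kinetic temperature (x, y, z):", np.round(T_k, 1))
```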

  9. Vibration and bending behavior of functionally graded nanocomposite doubly-curved shallow shells reinforced by graphene nanoplatelets

    NASA Astrophysics Data System (ADS)

    Wang, Aiwen; Chen, Hongyan; Hao, Yuxin; Zhang, Wei

    2018-06-01

    Free vibration and static bending of functionally graded (FG) graphene nanoplatelet (GPL) reinforced composite doubly-curved shallow shells with three distinct distributions are analyzed. Material properties with gradient variation through the thickness are evaluated by the modified Halpin-Tsai model. The mathematical model of the simply supported doubly-curved shallow shells rests upon Hamilton's principle and a higher-order shear deformation theory (HSDT). The free vibration frequencies and bending deflections are obtained using the Navier technique. The agreement between the obtained results and ANSYS, as well as prior results in the open literature, verifies the accuracy of the theory presented in this article. Further, parametric studies are performed to highlight the significant influence of GPL distribution patterns and weight fraction, the number of layers, and the dimensions of the GPLs and shells on the mechanical behavior of the system.

  10. Phenomenology of leading nucleon production in e p collisions at HERA in the framework of fracture functions

    NASA Astrophysics Data System (ADS)

    Shoeibi, Samira; Taghavi-Shahri, F.; Khanpour, Hamzeh; Javidan, Kurosh

    2018-04-01

    In recent years, several experiments at the e-p collider HERA have collected high-precision deep-inelastic scattering (DIS) data on the spectrum of leading nucleons carrying a large fraction of the proton's energy. In this paper, we analyze recent experimental data on the production of forward protons and neutrons in DIS at HERA in the framework of perturbative QCD. We propose a technique based on the fracture functions framework and extract the nucleon fracture functions (FFs) M_2^{(n/p)}(x, Q^2; x_L) from a global QCD analysis of DIS data measured by the ZEUS Collaboration at HERA. We show that an approach based on the fracture functions formalism allows us to phenomenologically parametrize the nucleon FFs. Considering both leading-neutron and leading-proton production data at HERA, we present results for the separate parton distributions of all parton species, including the valence quark densities, the antiquark densities, the strange sea distribution, and the gluon distribution function. We propose several parametrizations for the nucleon FFs, opening the possibility of studying the associated asymmetries. The obtained optimum set of nucleon FFs is accompanied by Hessian uncertainty sets, which allow one to propagate uncertainties to other observables of interest. The extracted results for the t-integrated leading-neutron F_2^{LN(3)}(x, Q^2; x_L) and leading-proton F_2^{LP(3)}(x, Q^2; x_L) structure functions are in good agreement with all data analyzed, for a wide range of the fractional momentum variable x as well as the longitudinal momentum fraction x_L.

  11. In the search for the low-complexity sequences in prokaryotic and eukaryotic genomes: how to derive a coherent picture from global and local entropy measures

    NASA Astrophysics Data System (ADS)

    Acquisti, Claudia; Allegrini, Paolo; Bogani, Patrizia; Buiatti, Marcello; Catanese, Elena; Fronzoni, Leone; Grigolini, Paolo; Mersi, Giuseppe; Palatella, Luigi

    2004-04-01

    We investigate a possible way to connect the presence of Low-Complexity Sequences (LCS) in DNA genomes with the nonstationary properties of base correlations. Under the hypothesis that these variations signal a change in DNA function, we use a new technique, called the Non-Stationarity Entropic Index (NSEI) method, and we show that this technique is an efficient way to detect functional changes with respect to a random baseline. The remarkable aspect is that NSEI requires no training data or fitting parameters, the only arbitrariness being the choice of a marker in the sequence. We make this choice on the basis of biological information about LCS distributions in genomes. We show that there exists a correlation between the change in the amount of LCS and the ratio of long- to short-range correlation.

  12. Play and learn team building.

    PubMed

    Haas, R C; Martin, S

    1997-05-01

    In order to have a team function correctly, power must be distributed equally, with no team member having more perceived power than any other. It is this leveling of the playing field that allows the team to develop and to stimulate the creative juices of its members. This article discusses techniques that can help an organization break down the power barriers and permit its employees to become a cohesive unit--a team.

  13. A Survey of Techniques for Security Architecture Analysis

    DTIC Science & Technology

    2003-05-01

    A software phenomenon is the "user innovation network", examples of such networks being "free" and "open source" software projects. These networks have innovation development, production, distribution and consumption all performed by users/self-manufacturers. User innovation networks can function entirely independently of manufacturers because (1) at least some users have sufficient incentive to

  14. Ground-Based Radiometric Measurements of Slant Path Attenuation in the V/W Bands

    DTIC Science & Technology

    2016-04-01

    Final technical report covering October 2012 - September 2015. Ground-based radiometric techniques were applied to measure the slant path attenuation cumulative distribution function to

  15. Delivery Time Variance Reduction in the Military Supply Chain

    DTIC Science & Technology

    2010-03-01

    Donald Rumsfeld designated "U.S. Transportation Command as the single Department of Defense Distribution Process Owner (DPO)" (USTRANSCOM, 2004) ... paragraphs explain OptQuest's functionality and capabilities as described by Laguna (1997) and Glover et al. (1999), as well as the OptQuest for ARENA ... throughout the solution space (Glover et al., 1999). Heuristics are strategies (in this case algorithms) that use different techniques and available

  16. Calculating phase equilibrium properties of plasma pseudopotential model using hybrid Gibbs statistical ensemble Monte-Carlo technique

    NASA Astrophysics Data System (ADS)

    Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.

    2015-11-01

    Earlier, a two-component pseudopotential plasma model, which we call the "shelf Coulomb" model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the hybrid Gibbs statistical ensemble Monte Carlo technique to this model. Initial simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique allows us to estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.
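    For reference, the standard Gibbs-ensemble acceptance rules (in the usual Frenkel-Smit form; the hybrid variant used here may differ in detail) are:

```latex
% Volume exchange dV between box 1 (N1 particles, volume V1) and box 2,
% with V1 + V2 held fixed:
P_{\mathrm{acc}}^{\mathrm{vol}} = \min\!\left[ 1,\;
  \left( \frac{V_1 + \Delta V}{V_1} \right)^{N_1}
  \left( \frac{V_2 - \Delta V}{V_2} \right)^{N_2}
  e^{-\beta \Delta U} \right]

% Transfer of one particle from box 1 to box 2:
P_{\mathrm{acc}}^{\mathrm{xfer}} = \min\!\left[ 1,\;
  \frac{N_1 V_2}{(N_2 + 1)\, V_1} \, e^{-\beta \Delta U} \right]
```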

  17. Heat transfer monitoring by means of the hot wire technique and finite element analysis software.

    PubMed

    Hernández Wong, J; Suarez, V; Guarachi, J; Calderón, A; Rojas-Trigos, J B; Juárez, A G; Marín, E

    2014-01-01

    We report a study of the radial heat transfer in a homogeneous and isotropic substance with a linear heat source on its axial axis. For this purpose, the hot wire characterization technique was used to obtain the temperature distribution as a function of radial distance from the axial axis and exposure time. The solution of the transient heat transport equation for this problem was also obtained under appropriate boundary conditions by means of the finite element technique. A comparison between experimental, conventional theoretical-model, and numerically simulated results demonstrates the utility of the finite element analysis simulation methodology in investigating the thermal response of substances. Copyright © 2013 Elsevier Ltd. All rights reserved.
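    The conventional hot-wire model referred to above has a closed form for an ideal infinite line source in an infinite medium, ΔT(r, t) = q/(4πk) · E1(r²/(4αt)). A short numerical check with illustrative, water-like material values:

```python
# Transient line-source solution evaluated with the exponential integral E1.
import numpy as np
from scipy.special import exp1

q = 5.0          # W/m, heating power per unit wire length (illustrative)
k = 0.6          # W/(m K), thermal conductivity (water-like)
a = 1.4e-7       # m^2/s, thermal diffusivity

def dT(r, t):
    """Temperature rise at radius r (m) and time t (s) after heater start."""
    return q / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * a * t))

for r in (1e-3, 5e-3, 1e-2):
    print(f"r = {r*1e3:.0f} mm: dT(60 s) = {dT(r, 60.0):.2f} K")
```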

  18. SU-F-T-380: Comparing the Effect of Respiration On Dose Distribution Between Conventional Tangent Pair and IMRT Techniques for Adjuvant Radiotherapy in Early Stage Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M; Ramaseshan, R

    2016-06-15

    Purpose: In this project, we compared the conventional tangent pair technique to the IMRT technique by analyzing the dose distribution. We also investigated the effect of respiration on planning target volume (PTV) dose coverage in both techniques. Methods: In order to implement the IMRT technique, a template-based planning protocol, dose constraints and a treatment process were developed. Two open fields with optimized field weights were combined with two beamlet-optimization fields in the IMRT plans. We compared the dose distribution between the standard tangential pair and IMRT. The improvement in dose distribution was measured by parameters such as the conformity index, homogeneity index and coverage index. Another endpoint was whether the IMRT technique would reduce planning time for staff. The effect of the patient's respiration on the dose distribution was also estimated. Four-dimensional computed tomography (4DCT) at different phases of the breathing cycle was used to evaluate the effect of respiration on the IMRT-planned dose distribution. Results: We have accumulated 10 patients who underwent 4DCT and were planned by both techniques. Based on the preliminary analysis, the dose distribution in the IMRT technique was better than in the conventional tangent pair technique. Furthermore, the effect of respiration in the IMRT plan was not significant, as evident from the 95% isodose line coverage of the PTV drawn on all phases of the 4DCT. Conclusion: Based on the 4DCT images, the breathing effect on the dose distribution was smaller than expected. We suspect there are two reasons. First, PTV movement due to respiration was not significant, possibly because a tilted breast board was used for patient setup. Second, the open fields with optimized field weights in the IMRT technique might reduce the breathing effect on the dose distribution. A further investigation is necessary.

  19. Absolute electrical impedance tomography (aEIT) guided ventilation therapy in critical care patients: simulations and future trends.

    PubMed

    Denaï, Mouloud A; Mahfouf, Mahdi; Mohamad-Samuri, Suzani; Panoutsos, George; Brown, Brian H; Mills, Gary H

    2010-05-01

    Thoracic electrical impedance tomography (EIT) is a noninvasive, radiation-free monitoring technique whose aim is to reconstruct a cross-sectional image of the internal spatial distribution of conductivity from electrical measurements made by injecting small alternating currents via an electrode array placed on the surface of the thorax. The purpose of this paper is to discuss the fundamentals of EIT and demonstrate the principles of mechanical ventilation, lung recruitment, and EIT imaging on a comprehensive physiological model, which combines a model of respiratory mechanics, a model of the human lung absolute resistivity as a function of air content, and a 2-D finite-element mesh of the thorax to simulate EIT image reconstruction during mechanical ventilation. The overall model gives a good understanding of respiratory physiology and EIT monitoring techniques in mechanically ventilated patients. The model proposed here was able to reproduce consistent images of ventilation distribution in simulated acutely injured and collapsed lung conditions. A new advisory system architecture integrating a previously developed data-driven physiological model for continuous and noninvasive predictions of blood gas parameters with the regional lung function data/information generated from absolute EIT (aEIT) is proposed for monitoring and ventilator therapy management of critical care patients.

  20. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
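    A toy version of the partial-biasing idea can be sketched in a few lines: running weights estimate the free energy of each set of the partition, and only a fraction of those weights enters the bias. Everything below (target, partition, update schedule, biasing fraction) is an arbitrary illustration, not the paper's algorithm.

```python
# Toy sketch of partially biased adaptive sampling: Metropolis on
# log pi(x) - a * theta[xi(x)], with Wang-Landau-type weight updates.
import numpy as np

rng = np.random.default_rng(7)

def log_pi(x):                       # bimodal target (unnormalized)
    return np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)

edges = np.linspace(-6, 6, 13)       # partition by the collective variable xi(x) = x
xi = lambda x: int(np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2))

a = 0.6                              # partial-biasing fraction (0 < a <= 1)
theta = np.zeros(len(edges) - 1)     # running free-energy estimates per set
x, visits = 0.0, np.zeros_like(theta)

for t in range(1, 200001):
    prop = x + rng.normal(0, 0.5)
    dlog = (log_pi(prop) - a * theta[xi(prop)]) - (log_pi(x) - a * theta[xi(x)])
    if np.log(rng.random()) < dlog:
        x = prop
    visits[xi(x)] += 1
    theta[xi(x)] += 1.0 / t          # decreasing step size for the weights

print("relative visits per set:", np.round(visits / visits.sum(), 3))
```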

  1. Random field assessment of nanoscopic inhomogeneity of bone

    PubMed Central

    Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu

    2010-01-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of the elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. PMID:20817128
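    Sampling a Gaussian random field with an exponential covariance is straightforward via Cholesky factorization, which makes the role of the correlation length concrete. The sketch below is one-dimensional with invented modulus statistics, not the paper's bone data.

```python
# Sketch: sample a 1D Gaussian random field with covariance
# C(d) = s^2 * exp(-d / Lc) via Cholesky factorization.
import numpy as np

n, s, Lc = 200, 3.0, 8.0                     # grid points, std (GPa), corr. length (um)
x = np.linspace(0.0, 100.0, n)               # positions across a lamella (um)
C = s**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / Lc)
Lchol = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for numerical stability

rng = np.random.default_rng(8)
modulus = 20.0 + Lchol @ rng.normal(size=n)  # mean 20 GPa + correlated fluctuation
print("simulated modulus range: %.1f - %.1f GPa" % (modulus.min(), modulus.max()))
```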

  2. Estimating global distribution of boreal, temperate, and tropical tree plant functional types using clustering techniques

    NASA Astrophysics Data System (ADS)

    Wang, Audrey; Price, David T.

    2007-03-01

    A simple integrated algorithm was developed to relate global climatology to distributions of tree plant functional types (PFT). Multivariate cluster analysis was performed to analyze the statistical homogeneity of the climate space occupied by individual tree PFTs. Forested regions identified from the satellite-based GLC2000 classification were separated into tropical, temperate, and boreal sub-PFTs for use in the Canadian Terrestrial Ecosystem Model (CTEM). Global data sets of monthly minimum temperature, growing degree days, an index of climatic moisture, and estimated PFT cover fractions were then used as variables in the cluster analysis. The statistical results for individual PFT clusters were found to be consistent with other global-scale classifications of dominant vegetation. As an improvement in the quantification of the climatic limitations on PFT distributions, the results also demonstrated overlapping of PFT cluster boundaries that reflected vegetation transitions, for example, between tropical and temperate biomes. The resulting global database should provide a better basis for simulating the interaction of climate change and terrestrial ecosystem dynamics using global vegetation models.
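    The clustering step can be illustrated with k-means on standardized climate variables; the three synthetic "regimes" below merely stand in for the global data sets, and k-means itself is an assumption (the record does not name a specific clustering algorithm). scikit-learn is assumed to be available.

```python
# Sketch: k-means clustering of (minimum temperature, growing degree-days,
# moisture index) triples into climate regimes, on synthetic grid cells.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
boreal = rng.normal([-25.0, 800.0, 0.6], [4, 150, 0.1], (300, 3))
temperate = rng.normal([-5.0, 2500.0, 0.5], [4, 300, 0.1], (300, 3))
tropical = rng.normal([18.0, 6000.0, 0.7], [3, 500, 0.1], (300, 3))
climate = np.vstack([boreal, temperate, tropical])

Xs = StandardScaler().fit_transform(climate)          # put variables on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)
for k in range(3):
    print(f"cluster {k}: n = {(labels == k).sum()}, "
          f"mean Tmin = {climate[labels == k, 0].mean():.1f} C")
```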

  3. Ubiquitous Low-Cost Functionalized Multi-Walled Carbon Nanotube Sensors for Distributed Methane Leak Detection

    DOE PAGES

    Humayun, Md Tanim; Divan, Ralu; Stan, Liliana; ...

    2016-06-16

    This paper presents a highly sensitive, energy-efficient and low-cost distributed methane (CH4) sensor system (DMSS) for continuous monitoring, detection, and localization of CH4 leaks in natural gas infrastructure, such as transmission and distribution pipelines, wells, and production pads. The CH4 sensing element, a key component of the DMSS, consists of a metal oxide nanocrystal (MONC) functionalized multi-walled carbon nanotube (MWCNT) mesh which, in comparison to the existing literature, shows a stronger relative resistance change when interacting with lower parts-per-million (ppm) concentrations of CH4. A Gaussian plume triangulation algorithm has been developed for the DMSS. Given a geometric model of the surrounding environment, the algorithm can precisely detect and localize a CH4 leak as well as estimate its mass emission rate. A UV-based surface recovery technique, which makes the sensor recover 10 times faster than reported ones, is presented for the DMSS. Finally, a control algorithm based on the UV-accelerated recovery is developed, which facilitates faster leak detection.
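    A forward Gaussian-plume model of the kind such a triangulation algorithm would invert can be sketched briefly; the dispersion fits, stability assumptions, and unit conversion below are simplified placeholders, not the paper's parameters.

```python
# Sketch: ground-level concentration downwind of a point CH4 source, with
# ground reflection included and invented power-law dispersion coefficients.
import numpy as np

def plume_ppm(x, y, Q, u=2.0, H=1.0):
    """x downwind, y crosswind (m); Q source rate (g/s); u wind speed (m/s)."""
    sig_y = 0.22 * x / np.sqrt(1 + 0.0001 * x)     # assumed dispersion fits
    sig_z = 0.20 * x
    c = Q / (2 * np.pi * u * sig_y * sig_z) \
        * np.exp(-0.5 * (y / sig_y) ** 2) \
        * 2 * np.exp(-0.5 * (H / sig_z) ** 2)      # factor 2: ground reflection
    return c * 1e6 / 655.0                         # g/m^3 -> ppm (approx., CH4, 25 C)

for x in (10.0, 50.0, 200.0):
    print(f"{x:5.0f} m downwind: {plume_ppm(x, 0.0, Q=0.5):8.2f} ppm")
```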

  4. Influence of particle size distribution on reflected and transmitted light from clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1968-05-01

    The light reflected and transmitted from clouds with various drop size distributions is calculated by a Monte Carlo technique. Six different models are used for the drop size distribution: isotropic, Rayleigh, haze continental, haze maritime, cumulus, and nimbostratus. The scattering function for each model is calculated from Mie theory. In general, the reflected and transmitted radiances for the isotropic and Rayleigh models tend to be similar, as are those for the various haze and cloud models. The reflected radiance is less for the haze and cloud models than for the isotropic and Rayleigh models, except for angles of incidence near the horizon, when it is larger around the incident beam direction. The transmitted radiance is always much larger for the haze and cloud models near the incident direction; at distant angles it is less for small and moderate optical thicknesses and greater for large optical thicknesses (all comparisons to the isotropic and Rayleigh models). The downward flux, cloud albedo, and mean optical path are discussed. The angular spread of the beam as a function of optical thickness is shown for the nimbostratus model.
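    A stripped-down version of such a Monte Carlo calculation is sketched below for a plane-parallel, conservatively scattering slab with isotropic rescattering — a crude stand-in for the Mie phase functions the paper compares — to show how reflectance and transmittance emerge from photon counting.

```python
# Sketch: Monte Carlo photon transport through a slab of optical thickness
# tau_star; photons scatter isotropically and absorption is neglected.
import numpy as np

rng = np.random.default_rng(10)

def run(tau_star, n_photons=20000, mu0=1.0):
    reflected = transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, mu0                      # optical depth and direction cosine
        while True:
            tau += mu * (-np.log(rng.random()))  # free path to next scattering
            if tau <= 0.0:
                reflected += 1
                break
            if tau >= tau_star:
                transmitted += 1
                break
            mu = 2.0 * rng.random() - 1.0       # isotropic rescattering
    return reflected / n_photons, transmitted / n_photons

for tau_star in (0.5, 2.0, 8.0):
    R, T = run(tau_star)
    print(f"tau* = {tau_star}: albedo ~ {R:.2f}, transmittance ~ {T:.2f}")
```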

  5. Fault detection and diagnosis for non-Gaussian stochastic distribution systems with time delays via RBF neural networks.

    PubMed

    Yi, Qu; Zhan-ming, Li; Er-chao, Li

    2012-11-01

    A new fault detection and diagnosis (FDD) problem based on the output probability density functions (PDFs) of non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Unlike conventional FDD problems, the measured information for FDD consists of the output stochastic distributions, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the RBF neural network. In this work, a nonlinear adaptive-observer-based fault detection and diagnosis algorithm is presented, introducing a tuning parameter so that the residual is as sensitive as possible to the fault. Stability and convergence analysis is performed for the error dynamic system in both fault detection and fault diagnosis. Finally, an illustrative example is given to demonstrate the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
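    The RBF approximation of an output PDF reduces to a linear least-squares problem once the centers and widths are fixed, as the hedged sketch below shows on a synthetic non-Gaussian target (the dynamics and observer design are omitted).

```python
# Sketch: represent a measured output PDF as a weighted sum of fixed
# Gaussian RBFs and recover the dynamic weights by least squares.
import numpy as np

y = np.linspace(0.0, 1.0, 200)                   # output variable grid
true_pdf = 0.7 * np.exp(-0.5 * ((y - 0.3) / 0.08) ** 2) \
         + 0.5 * np.exp(-0.5 * ((y - 0.7) / 0.12) ** 2)
true_pdf /= (true_pdf.sum() * (y[1] - y[0]))     # normalize the non-Gaussian target

centers = np.linspace(0.1, 0.9, 9)               # fixed RBF centers and width
sigma = 0.08
Phi = np.exp(-0.5 * ((y[:, None] - centers[None, :]) / sigma) ** 2)

w, *_ = np.linalg.lstsq(Phi, true_pdf, rcond=None)   # the "dynamic weightings"
approx = Phi @ w
print("max approximation error:", float(np.max(np.abs(approx - true_pdf))))
```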

  6. On the impact of neutron star binaries' natal-kick distribution on the Galactic r-process enrichment

    NASA Astrophysics Data System (ADS)

    Safarzadeh, Mohammadtaher; Côté, Benoit

    2017-11-01

    We study the impact of the neutron star binaries' (NSBs) natal-kick distribution on the Galactic r-process enrichment. We model the growth of a Milky Way-type halo based on N-body simulation results and its star formation history based on multi-epoch abundance matching techniques. We consider that the NSBs that merge well beyond the galaxy's effective radius (> 2 × R_eff) do not contribute to the galactic r-process enrichment. Assuming a power-law delay-time distribution (DTD) function (∝ t^-1) with t_min = 30 Myr for the binaries' coalescence time-scales and an exponential profile for their natal-kick distribution with an average value of 180 km s^-1, we show that up to ˜ 40 per cent of all formed NSBs do not contribute to the r-process enrichment by z = 0, either because they merge far from the galaxy at a given redshift (up to ˜ 25 per cent) or have not yet merged by today (˜ 15 per cent). Our result is largely insensitive to the details of the DTD function. Assuming a constant coalescence time-scale of 100 Myr well approximates the adopted DTD, although with 30 per cent of the NSBs ending up not contributing to the r-process enrichment. Our results, although rather dependent on the adopted natal-kick distribution, represent the first step towards estimating the impact of natal kicks and DTD functions on the r-process enrichment of galaxies, which would need to be incorporated in hydrodynamical simulations.
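
    A toy ballistic Monte Carlo conveys the bookkeeping (Python; it ignores the galactic potential and the halo growth, so it overestimates the escaping fraction, and all parameter values besides the quoted t_min, DTD slope, and mean kick are assumptions):

        import numpy as np

        rng = np.random.default_rng(42)

        def escaping_fraction(n=100_000, v_mean=180.0, t_min=30.0,
                              t_max=13.8e3, r_eff=3.0):
            """Fraction of NSBs whose ballistic kick displacement exceeds
            2 * R_eff before coalescence. Times in Myr, speeds in km/s,
            radii in kpc; t_max and r_eff are assumed values."""
            # Inverse-CDF sampling of a t^-1 DTD on [t_min, t_max]:
            t = t_min * (t_max / t_min) ** rng.random(n)
            v = rng.exponential(v_mean, n)         # exponential natal kicks
            d_kpc = v * t * 3.156e13 / 3.086e16    # km/s * Myr -> kpc
            return np.mean(d_kpc > 2.0 * r_eff)

        print(escaping_fraction())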

  7. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
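
    A minimal sketch of such a hybrid global-plus-local scheme using SciPy, with differential evolution standing in for the paper's genetic algorithm and `model` a placeholder for the forward groundwater simulation:

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def misfit(params, heads_obs, model):
            """Sum-of-squares mismatch between observed and simulated heads."""
            return np.sum((model(params) - heads_obs) ** 2)

        def hybrid_estimate(heads_obs, model, bounds):
            # Global stage: evolutionary search (standing in for the GA).
            coarse = differential_evolution(misfit, bounds,
                                            args=(heads_obs, model),
                                            maxiter=50, seed=0)
            # Local stage: truncated-Newton refinement of the best point.
            fine = minimize(misfit, coarse.x, args=(heads_obs, model),
                            method="TNC", bounds=bounds)
            return fine.x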

  8. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
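
    A minimal sketch of the moment-constrained linear program (Python/SciPy; the uniform cost vector is a placeholder for the paper's entropy-inspired cost function, and the candidate-node grid is an assumption):

        import numpy as np
        from scipy.optimize import linprog

        def quadrature_from_moments(moments, nodes):
            """Nonnegative weights on candidate nodes reproducing the given
            raw moments m_k = sum_i w_i * x_i**k, k = 0..K-1. A uniform
            cost stands in for the entropy-inspired one."""
            k = np.arange(len(moments))
            a_eq = nodes[None, :] ** k[:, None]    # A[k, i] = x_i**k
            res = linprog(c=np.ones_like(nodes), A_eq=a_eq, b_eq=moments,
                          bounds=(0, None))
            return res.x                           # mostly zeros: sparse

    Because basic solutions of a linear program have at most as many nonzero weights as there are moment constraints, the returned representation is automatically sparse.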

  10. An investigation of kV CBCT image quality and dose reduction for volume-of-interest imaging using dynamic collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, David, E-mail: david.parsons@dal.ca, E-mail: james.robar@cdha.nshealth.ca; Robar, James L., E-mail: david.parsons@dal.ca, E-mail: james.robar@cdha.nshealth.ca

    2015-09-15

    Purpose: The focus of this work was to investigate the improvements in image quality and dose reduction for volume-of-interest (VOI) kilovoltage cone-beam CT (CBCT) using dynamic collimation. Methods: A prototype iris aperture was used to track a VOI during a CBCT acquisition. The current aperture design is capable of 1D translation as a function of gantry angle and dynamic adjustment of the iris radius. The aperture occupies the location of the bow-tie filter on a Varian On-Board Imager system. CBCT and planar image quality were investigated as a function of aperture radius, while maintaining the same dose to the VOI, for a 20 cm diameter cylindrical water phantom with a 9 mm diameter bone insert centered on isocenter. Corresponding scatter-to-primary ratios (SPR) were determined at the detector plane with Monte Carlo simulation using EGSnrc. Dose distributions for various VOI sizes were modeled using a dynamic BEAMnrc library and DOSXYZnrc. The resulting VOI dose distributions were compared to full-field distributions. Results: SPR was reduced by a factor of 8.4 when decreasing the iris diameter from 21.2 to 2.4 cm (at isocenter). Depending upon VOI location and size, dose was reduced to 16%–90% of the full-field value along the central axis plane and down to 4% along the axis of rotation, while maintaining the same dose to the VOI compared to full-field techniques. When maintaining constant dose to the VOI, this change in iris diameter corresponds to an increase in image contrast by a factor of approximately 1.6 and a decrease in image noise by a factor of approximately 1.2. This results in a measured gain in contrast-to-noise ratio by a factor of approximately 2.0. Conclusions: The presented VOI technique offers improved image quality for image-guided radiotherapy while sparing the surrounding volume from unnecessary dose compared to full-field techniques.

  11. New Developments and Geoscience Applications of Synchrotron Computed Microtomography (Invited)

    NASA Astrophysics Data System (ADS)

    Rivers, M. L.; Wang, Y.; Newville, M.; Sutton, S. R.; Yu, T.; Lanzirotti, A.

    2013-12-01

    Computed microtomography is the extension to micron spatial resolution of the CAT scanning technique developed for medical imaging. Synchrotron sources are ideal for the method, since they provide a monochromatic, parallel beam with high intensity. High-energy storage rings such as the Advanced Photon Source at Argonne National Laboratory produce x-rays with high energy, high brilliance, and high coherence. All of these factors combine to produce an extremely powerful imaging tool for earth science research. Techniques that have been developed include:

    - Absorption and phase contrast computed tomography with spatial resolution below one micron.
    - Differential contrast computed tomography, imaging above and below the absorption edge of a particular element.
    - High-pressure tomography, imaging inside a pressure cell at pressures above 10 GPa.
    - High-speed radiography and tomography, with 100 microsecond temporal resolution.
    - Fluorescence tomography, imaging the 3-D distribution of elements present at ppm concentrations.
    - Radiographic strain measurements during deformation at high confining pressure, combined with precise x-ray diffraction measurements to determine stress.

    These techniques have been applied to important problems in earth and environmental sciences, including:

    - The 3-D distribution of aqueous and organic liquids in porous media, with applications in contaminated groundwater and petroleum recovery.
    - The kinetics of bubble formation in magma chambers, which control explosive volcanism.
    - Studies of the evolution of the early solar system from 3-D textures in meteorites.
    - Accurate crystal size distributions in volcanic systems, important for understanding the evolution of magma chambers.
    - The equation of state of amorphous materials at high pressure, using both direct measurements of volume as a function of pressure and measurements of the change in x-ray absorption coefficient as a function of pressure.
    - The location and chemical speciation of toxic elements such as arsenic and nickel in soils and in plant tissues at contaminated Superfund sites.
    - The strength of earth materials under the pressure and temperature conditions of the Earth's mantle, providing insights into plate tectonics and the generation of earthquakes.

  12. An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.

    2011-12-01

    The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting-based plugin that makes use of the R statistics software as a means for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique for conditioning computational model parameters on relevant field observations, yielding probabilistic distributions of the model parameters, related to the spatial random variable of interest, by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to the use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long-term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.

  13. Simultaneous identification of optical constants and PSD of spherical particles by multi-wavelength scattering-transmittance measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming

    2018-04-01

    An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals, using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by particles and the spectral collimated transmittance. Sensitivity and objective-function distribution analyses were conducted to evaluate the mathematical properties (i.e., ill-posedness and multimodality) of the inverse problems under three different combinations of optical signals (i.e., the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signals; and the multi-wavelength multi-angle light scattering and spectral transmittance signals). The best global convergence performance was obtained using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inverse technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.

  14. Evaluation of Ultrasonic Fiber Structure Extraction Technique Using Autopsy Specimens of Liver

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hirai, Kazuki; Yamada, Hiroyuki; Ebara, Masaaki; Hachiya, Hiroyuki

    2005-06-01

    It is very important to diagnose liver cirrhosis noninvasively and correctly. In our previous studies, we proposed a processing technique to detect changes in liver tissue in vivo. In this paper, we evaluate the relationship between liver disease and echo information using autopsy specimens of human liver in vitro. In vitro experiments make it possible to verify the function of a processing parameter clearly and to compare the processing result with the actual tissue structure of the human liver. Using our processing technique, information that does not obey a Rayleigh distribution was extracted from the echo signal of the autopsy liver specimens, depending on changes in a particular processing parameter. The fiber tissue structure of the same specimen was extracted from a number of histological images of stained tissue. We constructed 3D structures from the information extracted from the echo signal and from the fiber structure of the stained tissue, and compared the two. By comparing the 3D structures, it is possible to evaluate the relationship between the non-Rayleigh information in the echo signal and the fibrosis structure.

  15. Application of a novel new multispectral nanoparticle tracking technique

    NASA Astrophysics Data System (ADS)

    McElfresh, Cameron; Harrington, Tyler; Vecchio, Kenneth S.

    2018-06-01

    Fast, reliable, and accurate particle size analysis techniques must meet the demands of evolving industrial and academic research in areas of functionalized nanoparticle synthesis, advanced materials development, and other nanoscale-enabled technologies. In this study, a new multispectral particle tracking analysis (m-PTA) technique enabled by the ViewSizer™ 3000 (MANTA Instruments, USA) was evaluated using solutions of monomodal and multimodal gold and polystyrene latex nanoparticles, a spark-eroded polydisperse 316L stainless steel nanopowder, and large (non-Brownian) borosilicate particles. It was found that m-PTA performed comparably to dynamic light scattering (DLS) in the evaluation of monomodal particle size distributions. When measuring bimodal, trimodal, and polydisperse solutions, the m-PTA technique overwhelmingly outperformed traditional DLS in both peak detection and relative particle concentration analysis. The m-PTA technique was also observed to be less susceptible to large-particle overexpression errors. The ViewSizer™ 3000 was also found to be successful in accurately evaluating the sizes and concentrations of monomodal and bimodal sinking borosilicate particles.

  16. Phase Distribution and Selection of Partially Correlated Persistent Scatterers

    NASA Astrophysics Data System (ADS)

    Lien, J.; Zebker, H. A.

    2012-12-01

    Interferometric synthetic aperture radar (InSAR) time-series methods can effectively estimate temporal surface changes induced by geophysical phenomena. However, such methods are susceptible to decorrelation due to spatial and temporal baselines (radar pass separation), changes in orbital geometries, atmosphere, and noise. These effects limit the number of interferograms that can be used for differential analysis and obscure the deformation signal. InSAR decorrelation effects may be ameliorated by exploiting pixels that exhibit phase stability across the stack of interferograms. These so-called persistent scatterer (PS) pixels are dominated by a single point-like scatterer that remains phase-stable over the spatial and temporal baseline. By identifying a network of PS pixels for use in phase unwrapping, reliable deformation measurements may be obtained even in areas of low correlation, where traditional InSAR techniques fail to produce useful observations. Many additional pixels can be added to the PS list if we are able to identify those in which a dominant scatterer exhibits partial, rather than complete, correlation across all radar scenes. In this work, we quantify and exploit the phase stability of partially correlated PS pixels. We present a new system model for producing interferometric pixel values from a complex surface backscatter function characterized by signal-to-clutter ratio (SCR). From this model, we derive the joint probabilistic distribution for PS pixel phases in a stack of interferograms as a function of SCR and spatial baselines. This PS phase distribution generalizes previous results that assume the clutter phase contribution is uncorrelated between radar passes. We verify the analytic distribution through a series of radar scattering simulations. We use the derived joint PS phase distribution with maximum-likelihood SCR estimation to analyze an area of the Hayward Fault Zone in the San Francisco Bay Area. We obtain a series of 38 interferometric images of the area from C-band ERS radar satellite passes between May 1995 and December 2000. We compare the estimated SCRs to those calculated with previously derived PS phase distributions. Finally, we examine the PS network density resulting from varying selection thresholds of SCR and compare to other PS identification techniques.
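
    The signal-plus-clutter model can be simulated directly; a minimal sketch (Python), assuming unit-power circular Gaussian clutter that is independent between passes, shows how increasing SCR concentrates the interferometric phase:

        import numpy as np

        rng = np.random.default_rng(1)

        def ps_phase_samples(scr, n=100_000):
            """Interferometric phases of a pixel with one dominant scatterer
            of signal-to-clutter ratio `scr`, plus unit-power circular
            Gaussian clutter."""
            s = np.sqrt(scr)                       # dominant scatterer
            c = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
            return np.angle(s + c)                 # true phase is zero

        # Phase spread shrinks as SCR grows:
        print(np.std(ps_phase_samples(10.0)), np.std(ps_phase_samples(0.5)))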

  17. Acoustic emissions diagnosis of rotor-stator rubs using the KS statistic

    NASA Astrophysics Data System (ADS)

    Hall, L. D.; Mba, D.

    2004-07-01

    Acoustic emission (AE) measurement at the bearings of rotating machinery has become a useful tool for diagnosing incipient fault conditions. In particular, AE can be used to detect unwanted intermittent or partial rubbing between a rotating central shaft and surrounding stationary components. This is a particular problem encountered in turbines used for power generation. For successful fault diagnosis, it is important to adopt AE signal analysis techniques capable of distinguishing between various types of rub mechanisms. It is also useful to develop techniques for inferring information such as the severity of rubbing or the type of seal material making contact on the shaft. It is proposed that modelling the cumulative distribution function of rub-induced AE signals with respect to appropriate theoretical distributions, and quantifying the goodness of fit with the Kolmogorov-Smirnov (KS) statistic, offers a suitable signal feature for diagnosis. This paper demonstrates the successful use of the KS feature for discriminating different classes of shaft-seal rubbing.
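
    A minimal sketch of the proposed signal feature (Python/SciPy); the Gaussian reference used here is only illustrative, since the choice of theoretical distribution depends on the rub mechanism being modeled:

        from scipy import stats

        def ks_feature(ae_signal, dist=stats.norm):
            """KS statistic between an AE record's empirical CDF and a
            fitted reference distribution (Gaussian here, illustratively)."""
            params = dist.fit(ae_signal)
            d, _ = stats.kstest(ae_signal, dist.name, args=params)
            return d   # small D: close fit; large D: different mechanism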

  18. Retrieval of Spatio-temporal Distributions of Particle Parameters from Multiwavelength Lidar Measurements Using the Linear Estimation Technique and Comparison with AERONET

    NASA Technical Reports Server (NTRS)

    Veselovskii, I.; Whiteman, D. N.; Korenskiy, M.; Kolgotin, A.; Dubovik, O.; Perez-Ramirez, D.; Suvorina, A.

    2013-01-01

    The results of the application of the linear estimation technique to multiwavelength Raman lidar measurements performed during the summer of 2011 in Greenbelt, MD, USA, are presented. We demonstrate that multiwavelength lidars are capable not only of providing vertical profiles of particle properties but also of revealing the spatio-temporal evolution of aerosol features. The nighttime 3 Beta + 1 alpha lidar measurements on 21 and 22 July were inverted to spatio-temporal distributions of particle microphysical parameters, such as volume, number density, effective radius and the complex refractive index. The particle volume and number density show strong variation during the night, while the effective radius remains approximately constant. The real part of the refractive index demonstrates a slight decreasing tendency in a region of enhanced extinction coefficient. The linear estimation retrievals are stable and provide time series of particle parameters as a function of height at 4 min resolution. AERONET observations are compared with multiwavelength lidar retrievals showing good agreement.

  19. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, these confidence intervals have been named Neyman-Pearson confidence intervals; more correctly, they should have been named Neyman confidence intervals, or simply confidence intervals. The technique utilized mimics one used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR and the blank count. The probability density function (PDF) for the net count can then be determined in a straightforward manner.
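
    For illustration, the PDF of the net count follows by summing over the possible blank counts; a sketch under these Poisson assumptions (Python/SciPy, parameter names assumed):

        import numpy as np
        from scipy import stats

        def net_count_pmf(n, mu_blank, mu_gross, irr, b_max=200):
            """P(OC = n) where OC = gross - IRR * blank.

            mu_blank : expected blank count in the blank count time;
            mu_gross : expected gross count in the sample count time;
            irr : integer ratio of sample to blank count times.
            """
            b = np.arange(b_max)                  # possible blank counts
            gross = n + irr * b                   # gross counts giving net n
            return np.sum(stats.poisson.pmf(gross, mu_gross)
                          * stats.poisson.pmf(b, mu_blank))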

  20. Classification of JET Neutron and Gamma Emissivity Profiles

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Kiptily, V.; Vega, J.; Contributors, JET

    2016-05-01

    In thermonuclear plasmas, emission tomography uses integrated measurements along lines of sight (LOS) to determine the two-dimensional (2-D) spatial distribution of the volume emission intensity. Due to the availability of only a limited number of views and the coarse sampling of the LOS, the tomographic inversion is a limited-data-set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET. In specific experimental conditions, only a single view is available; in this case an explicit reconstruction of the emissivity profile is no longer possible. However, machine learning classification methods can be used to derive the type of the distribution. In the present approach, the classification is developed using the theory of belief functions, which provides the support to fuse the results of independent clustering and supervised classification. The method makes it possible to represent the uncertainty of the results provided by the different independent techniques, to combine them, and to manage possible conflicts.
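
    In the theory of belief functions, independent sources are typically fused with Dempster's rule of combination; a minimal sketch (Python) for two sources voting on emissivity-profile classes (the class names and masses are invented, and the paper's exact fusion rule may differ):

        from itertools import product

        def dempster(m1, m2):
            """Dempster's rule of combination for two mass functions over
            the same frame of discernment (keys are frozenset hypotheses)."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb            # mass lost to conflict
            return {h: w / (1.0 - conflict) for h, w in combined.items()}

        # Independent clustering and supervised-classification "votes":
        m_cluster = {frozenset({"peaked"}): 0.6,
                     frozenset({"peaked", "hollow"}): 0.4}
        m_classif = {frozenset({"peaked"}): 0.5, frozenset({"hollow"}): 0.3,
                     frozenset({"peaked", "hollow"}): 0.2}
        print(dempster(m_cluster, m_classif))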

  1. Research into a distributed fault diagnosis system and its application

    NASA Astrophysics Data System (ADS)

    Qian, Suxiang; Jiao, Weidong; Lou, Yongjian; Shen, Xiaomei

    2005-12-01

    CORBA (Common Object Request Broker Architecture) is a solution for distributed computing over heterogeneous systems that establishes a communication protocol between distributed objects, with strong emphasis on interoperation. However, only after developing suitable application approaches and practical monitoring and diagnosis technology can customers share monitoring and diagnosis information, so that remote multi-expert cooperative diagnosis online becomes possible. This paper aims at building an open fault monitoring and diagnosis platform combining CORBA, the Web, and agents. Heterogeneous diagnosis objects interoperate in independent threads through the CORBA soft-bus, enabling resource sharing and online multi-expert cooperative diagnosis, and overcoming shortcomings such as lack of diagnosis knowledge, reliance on a single diagnosis technique, and incomplete analysis functions, so that more complicated and deeper diagnosis can be carried out. Taking a high-speed centrifugal air compressor set as an example, we demonstrate distributed diagnosis based on CORBA. This shows that integrating CORBA, Web techniques, and an agent frame model is an efficient approach to problems such as real-time monitoring and diagnosis over the network and the decomposition of complicated tasks. In this system, a multi-diagnosis intelligent agent helps improve diagnosis efficiency. Besides, the system offers an open environment that makes it easy to upgrade diagnosis objects and to add new diagnosis server objects.

  2. Dual-band beacon experiment over Southeast Asia for ionospheric irregularity analysis

    NASA Astrophysics Data System (ADS)

    Watthanasangmechai, K.; Yamamoto, M.; Saito, A.; Saito, S.; Maruyama, T.; Tsugawa, T.; Nishioka, M.

    2013-12-01

    A dual-band beacon experiment over Southeast Asia was started in March 2012 in order to capture and analyze ionospheric irregularities in the equatorial region. Five GNU Radio Beacon Receivers (GRBRs) were aligned along 100° geographic longitude, with distances between stations exceeding 500 km. The field of view of this observational network covers ±20° geomagnetic latitude, including the geomagnetic equator. To capture ionospheric irregularities, an absolute TEC estimation technique was developed. The two-station method (Leitinger et al., 1975) is generally accepted as a suitable method to estimate TEC offsets in dual-band beacon experiments, but the distances between stations directly affect the robustness of the technique. In Southeast Asia, the observational network is too sparse to benefit from the classic two-station method. Moreover, the least-squares approach used in the two-station method over-adjusts to small-scale features of the TEC distribution, which correspond to local minima. We therefore propose a new technique to estimate the TEC offsets with supporting data from absolute GPS-TEC from local GPS receivers and the ionospheric height from local ionosondes. The key of the proposed technique is a brute-force search with a weighting function to find the set of TEC offsets that yields the global minimum of the RMSE over the whole parameter space. The weight is unnecessary when the TEC distribution is smooth, but it significantly improves the TEC estimation during Equatorial Spread F (ESF) events. As a result, the latitudinal TEC shows a double-hump distribution caused by the Equatorial Ionization Anomaly (EIA). In addition, 100 km-scale fluctuations from ESF are captured at night in equinox seasons. The plausible linkage of the meridional wind with the triggering of ESF is under investigation and will be presented. The proposed method successfully estimates the latitudinal TEC distribution from dual-band beacon data for the sparse observational network in Southeast Asia, and may be useful for other equatorial sectors, such as the African region.
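
    The offset search can be sketched as a one-dimensional weighted brute-force scan (Python; variable names, the search grid, and the reduction to a single offset are illustrative simplifications of the multi-receiver problem):

        import numpy as np

        def estimate_offset(rel_tec, ref_tec, weights,
                            search=np.arange(-50.0, 50.0, 0.5)):
            """Weighted brute-force scan for the TEC offset with the
            globally minimal RMSE against a reference (e.g. GPS-TEC)."""
            best_offset, best_rmse = None, np.inf
            for offset in search:                  # whole parameter space
                resid = (rel_tec + offset) - ref_tec
                rmse = np.sqrt(np.average(resid**2, weights=weights))
                if rmse < best_rmse:               # keep the global minimum
                    best_offset, best_rmse = offset, rmse
            return best_offset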

  3. TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebers, J.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g., >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation in the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage-optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Limitations of robust planning as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives:

    - To understand robust planning as a clinical alternative to margin-based planning.
    - To understand conceptual differences between uncertainty and predictable motion.
    - To understand fundamental limitations of the PTV concept that probabilistic planning can overcome.
    - To understand the major contributing factors to target and normal tissue coverage probability.
    - To understand the similarities and differences of various robust planning techniques.
    - To understand the benefits and limitations of robust planning techniques.

  4. TU-AB-BRB-03: Coverage-Based Treatment Planning to Accommodate Organ Deformable Motions and Contouring Uncertainties for Prostate Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H.

  5. TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unkelbach, J.

  6. TU-AB-BRB-00: New Methods to Ensure Target Coverage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

  7. The SIMRAND methodology: Theory and application for the simulation of research and development projects

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    A research and development (R&D) project often involves a number of decisions that must be made concerning which subset of systems or tasks are to be undertaken to achieve the goal of the R&D project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for the selection of the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
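
    A minimal sketch of the Monte Carlo comparison step (Python); the lognormal cost distributions and the concave utility below are invented stand-ins for the expert-assessed CDFs and the decision makers' cardinal utility functions:

        import numpy as np

        rng = np.random.default_rng(7)

        def expected_utility(sample_measure, utility, n=10_000):
            """Monte Carlo expected utility of one path through an
            alternative network, given a sampler for its measure of
            preference and a utility function."""
            return np.mean([utility(sample_measure()) for _ in range(n)])

        # Two hypothetical task paths with expert-assessed lognormal costs
        # and a risk-averse (concave, decreasing) utility over cost:
        utility = lambda cost: -np.log1p(cost)
        path_a = lambda: rng.lognormal(mean=2.0, sigma=0.5)
        path_b = lambda: rng.lognormal(mean=1.8, sigma=0.9)
        scores = {name: expected_utility(p, utility)
                  for name, p in [("A", path_a), ("B", path_b)]}
        print(max(scores, key=scores.get))   # preferred path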

  8. Analytical approaches to the determination of spin-dependent parton distribution functions at NNLO approximation

    NASA Astrophysics Data System (ADS)

    Salajegheh, Maral; Nejad, S. Mohammad Moosavi; Khanpour, Hamzeh; Tehrani, S. Atashbar

    2018-05-01

    In this paper, we present the SMKA18 analysis, a first attempt to extract the set of next-to-next-to-leading-order (NNLO) spin-dependent parton distribution functions (spin-dependent PDFs) and their uncertainties, determined through the Laplace transform technique and the Jacobi polynomial approach. Using Laplace transformations, we present an analytical solution of the spin-dependent Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution equations at NNLO approximation. The results are extracted using a wide range of proton g_1^p(x, Q^2), neutron g_1^n(x, Q^2), and deuteron g_1^d(x, Q^2) spin-dependent structure function data, including the most recent high-precision measurements from the COMPASS16 experiment at CERN, which are playing an increasingly important role in global spin-dependent fits. Uncertainties are carefully estimated using standard Hessian error propagation. We compare our results with the available spin-dependent inclusive deep inelastic scattering data set and with other spin-dependent PDF results in the literature. The results obtained for the spin-dependent PDFs as well as the spin-dependent structure functions are clearly explained at both small and large values of x.

  9. Description of the SSF PMAD DC testbed control system data acquisition function

    NASA Technical Reports Server (NTRS)

    Baez, Anastacio N.; Mackin, Michael; Wright, Theodore

    1992-01-01

    The NASA LeRC in Cleveland, Ohio has completed the development and integration of a Power Management and Distribution (PMAD) DC Testbed. This testbed is a reduced-scale representation of the end-to-end, sources-to-loads Space Station Freedom Electrical Power System (SSF EPS). This unique facility is being used to demonstrate DC power generation and distribution, power management and control, and system operation techniques considered to be prime candidates for the Space Station Freedom. A key capability of the testbed is its ability to be configured to address system-level issues in support of critical SSF program design milestones. Electrical power system control and operation issues such as source control, source regulation, system fault protection, end-to-end system stability, health monitoring, resource allocation, and resource management are being evaluated in the testbed. The allocation of SSF EPS control functions between on-board computers and ground-based systems is evolving. Initially, ground-based systems will perform the bulk of power system control and operation. The EPS control system is required to continuously monitor and determine the current state of the power system. The DC Testbed control system consists of standard controllers arranged in a hierarchical and distributed architecture. These controllers provide all the monitoring and control functions for the DC Testbed Electrical Power System. Higher-level controllers include the Power Management Controller, Load Management Controller, Operator Interface System, and a network of computer systems that perform some of the SSF ground-based control center operations. The lower-level controllers include Main Bus Switch Controllers and Photovoltaic Controllers. Power system status information is periodically provided to the higher-level controllers to perform system control and operation. The data acquisition function of the control system is distributed among the various levels of the hierarchy. Data requirements are dictated by the control system algorithms being implemented at each level. A functional description of the various levels of the testbed control system architecture, the data acquisition function, and the status of its implementation is presented.

  10. Volcanic Signatures in Estimates of Stratospheric Aerosol Size, Distribution Width, Surface Area, and Volume Deduced from Global Satellite-Based Observations

    NASA Technical Reports Server (NTRS)

    Bauman, J. J.; Russell, P. B.

    2000-01-01

    Volcanic signatures in the stratospheric aerosol layer are revealed by two independent techniques which retrieve aerosol information from global satellite-based observations of particulate extinction. Both techniques combine the 4-wavelength Stratospheric Aerosol and Gas Experiment (SAGE) II extinction measurements (0.385 ≤ λ ≤ 1.02 μm) with the 7.96 μm and 12.82 μm extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument. The algorithms use the SAGE II/CLAES composite extinction spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R_eff, surface area S, volume V, and size distribution width σ_g. The first technique is a multi-wavelength look-up-table (LUT) algorithm which retrieves values and uncertainties of R_eff by comparing ratios of extinctions from SAGE II and CLAES (e.g., E_λ/E_1.02) to pre-computed extinction ratios based on a range of unimodal lognormal size distributions. The pre-computed ratios are presented as a function of R_eff for a given σ_g; thus the comparisons establish the range of R_eff consistent with the measured spectra for that σ_g. The fact that no solutions are found for certain σ_g values provides information on the acceptable range of σ_g, which is found to evolve in response to volcanic injections and removal periods. Analogous comparisons using absolute extinction spectra and error bars establish the range of S and V. The second technique is a Parameter Search Technique (PST) which estimates R_eff and σ_g within a month-latitude-altitude bin by minimizing the chi-squared values obtained by comparing the SAGE II/CLAES extinction spectra and error bars with spectra calculated by varying the lognormal fitting parameters: R_eff, σ_g, and the total number of particles N_0. For both techniques, possible biases in retrieved parameters caused by assuming a unimodal functional form are removed using correction factors computed from representative in situ measurements of bimodal size distributions. Some interesting features revealed by the LUT and PST retrievals include: (1) increases in S and V (but not R_eff) after the Ruiz and Kelut injections; (2) increases in S, V, and R_eff after Pinatubo; (3) post-Pinatubo increases in S, V, and R_eff that are more rapid in the tropics than elsewhere; (4) mid-latitude post-Pinatubo increases in R_eff that lag increases in S and V; (5) S and V returning to pre-Pinatubo values sooner than R_eff does; (6) sharp increases in σ_g after Pinatubo and slight increases in σ_g after Ruiz, Etna, Kelut, Spurr, and Rabaul; and (7) gradual declines in the heights at which R_eff, S, and V peak after Pinatubo.
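
    The LUT comparison step can be sketched as follows (Python; array names and the table layout are assumptions): for one σ_g, the measured ratio and its error bar select the R_eff values whose pre-computed ratios agree, and an empty selection rules out that σ_g.

        import numpy as np

        def lut_reff_range(ratio_meas, ratio_err, reff_grid, ratio_lut):
            """R_eff values consistent with one measured extinction ratio,
            for one pre-computed sigma_g table; None means this sigma_g
            admits no solution."""
            ok = np.abs(ratio_lut - ratio_meas) <= ratio_err
            if not ok.any():
                return None                 # constrains acceptable sigma_g
            return reff_grid[ok].min(), reff_grid[ok].max()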

  11. Fly's eye condenser based on chirped microlens arrays

    NASA Astrophysics Data System (ADS)

    Wippermann, Frank C.; Zeitner, Uwe-D.; Dannberg, Peter; Bräuer, Andreas; Sinzinger, Stefan

    2007-09-01

    Lens array arrangements are commonly used for shaping almost arbitrary input intensity distributions into a top-hat. The setup usually consists of a Fourier lens and two identical regular microlens arrays - often referred to as a tandem lens array - where the second array is placed in the focal plane of the first microlenses. Due to the periodic structure of regular arrays, the output intensity distribution is modulated by equidistant sharp intensity peaks which disturb the homogeneity. These equidistantly located intensity peaks can be suppressed by using a chirped, and therefore non-periodic, microlens array. The result is a far-field speckle pattern with more densely and irregularly located intensity peaks, leading to improved homogeneity of the intensity distribution. In contrast to stochastic arrays, chirped arrays consist of individually shaped lenses defined by a parametric description of each cell's optical function, which can be derived completely from analytical functions. This makes it possible to build tandem array setups that achieve a far-field intensity distribution with a top-hat envelope. We propose a new concept for fly's eye condensers incorporating a chirped tandem microlens array for the generation of a top-hat far-field intensity distribution with improved homogenization under coherent illumination. The setup is compatible with reflow of photoresist as the fabrication technique, since plane substrates accommodating the arrays are used. Design considerations for the chirped microlens arrays, design rules, wave-optical simulations, and measurements of the far-field intensity distributions are presented.

  12. Theoretical study of the influence of a heterogeneous activity distribution on intratumoral absorbed dose distribution.

    PubMed

    Bao, Ande; Zhao, Xia; Phillips, William T; Woolley, F Ross; Otto, Randal A; Goins, Beth; Hevezi, James M

    2005-01-01

    Radioimmunotherapy of hematopoietic cancers and micrometastases has been shown to have significant therapeutic benefit. The treatment of solid tumors with radionuclide therapy has been less successful. Previous investigations of intratumoral activity distribution and studies on intratumoral drug delivery suggest that a probable reason for the disappointing results in solid tumor treatment is nonuniform intratumoral distribution coupled with restricted intratumoral drug penetrance, which inhibits antineoplastic agents from reaching the tumor's center. This paper describes a nonuniform intratumoral activity distribution characterized by limited radiolabeled tracer diffusion from the tumor surface to the tumor center. This activity was simulated using techniques that allowed the absorbed dose distributions to be estimated for different intratumoral diffusion capabilities and calculated for tumors of varying diameters. The influence of these absorbed dose distributions on solid tumor radionuclide therapy is also discussed. The absorbed dose distribution was calculated using the dose point kernel method, which provided for the application of a three-dimensional (3D) convolution between a dose rate kernel function and an activity distribution function. These functions were incorporated into 3D matrices with voxels measuring 0.10 x 0.10 x 0.10 mm^3. Fast Fourier transform (FFT) and multiplication in the frequency domain, followed by inverse FFT (iFFT), were used to carry out this phase of the dose calculation. The absorbed dose distributions for tumors of 1, 3, 5, 10, and 15 mm in diameter were studied. Using the therapeutic radionuclides 131I, 186Re, 188Re, and 90Y, the total average dose, center dose, and surface dose for each of the different tumor diameters were reported. The absorbed dose in the nearby normal tissue was also evaluated. When tumor diameters exceed 15 mm, a much lower tumor center dose is delivered compared with tumors between 3 and 5 mm in diameter. Based on these findings, the use of higher beta-energy radionuclides, such as 188Re and 90Y, is more effective in delivering a higher absorbed dose to the tumor center for tumor diameters around 10 mm.
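
    The described dose calculation is an FFT-accelerated 3D convolution; a minimal sketch (Python), assuming the activity and kernel are sampled on identical voxel grids with the kernel centered in its array:

        import numpy as np

        def absorbed_dose(activity, kernel):
            """Absorbed-dose grid via 3D FFT convolution of the activity
            distribution with a dose point kernel (same voxel grid)."""
            a = np.fft.fftn(activity)
            k = np.fft.fftn(np.fft.ifftshift(kernel))  # center -> origin
            return np.real(np.fft.ifftn(a * k))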

  13. Assisting Australian indigenous resource management and sustainable utilization of species through the use of GIS and environmental modeling techniques.

    PubMed

    Gorman, Julian; Pearson, Diane; Whitehead, Peter

    2008-01-01

    Information on the distribution and relative abundance of species is integral to sustainable management, especially if the species are to be harvested for subsistence or commerce. In northern Australia, natural landscapes are vast, centers of population few, access is difficult, and Aboriginal resource centers and communities have limited funds and infrastructure. Consequently, defining distribution and relative abundance by comprehensive ground survey is difficult and expensive. This highlights the need for simple, cheap, automated methodologies to predict the distribution of species in use, or having potential for use, in commercial enterprise. The technique applied here uses a Geographic Information System (GIS) to predict probability of occurrence with an inductive modeling technique based on Bayes' theorem. The study area is in the Maningrida region, central Arnhem Land, in the Northern Territory, Australia. The species examined, Cycas arnhemica and Brachychiton diversifolius, are currently being 'wild harvested' in commercial trials, involving the sale of decorative plants and use as carving wood, respectively. This study involved limited and relatively simple ground surveys requiring approximately 7 days of effort for each species. The overall model performance was evaluated using Cohen's kappa statistic. The predictive ability of the model was classified as moderate for C. arnhemica and fair for B. diversifolius. The difference in model performance can be attributed to the pattern of distribution of these species: C. arnhemica tends to occur in a clumped distribution, due to relatively short-distance dispersal of its large seeds and vegetative growth from long-lived rhizomes, while B. diversifolius seeds are smaller and more widely dispersed across the landscape. The output from the analysis predicts trends in species distribution that are consistent with independent on-site sampling for each species and therefore should prove useful in gauging the extent of resource availability. However, some caution needs to be applied, as the models tend to overpredict presence, which is a function of distribution patterns and of other variables operating in the landscape, such as fire histories, which were not included in the model due to limited availability of data.
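
    For illustration, Bayes' theorem applied to a single environmental class looks as follows (Python; the numbers are invented and do not come from the surveys):

        def occurrence_probability(p_class_given_presence, p_presence, p_class):
            """P(presence | class) via Bayes' theorem, with the inputs
            estimated from survey and map frequencies."""
            return p_class_given_presence * p_presence / p_class

        # Invented example: 40% of presence records fall in a land class
        # covering 10% of the area, with 5% overall presence:
        print(occurrence_probability(0.40, 0.05, 0.10))   # -> 0.2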

  14. Visualizing nanoscale 3D compositional fluctuation of lithium in advanced lithium-ion battery cathodes

    PubMed Central

    Devaraj, A.; Gu, M.; Colby, R.; Yan, P.; Wang, C. M.; Zheng, J. M.; Xiao, J.; Genc, A.; Zhang, J. G.; Belharouak, I.; Wang, D.; Amine, K.; Thevuthasan, S.

    2015-01-01

    The distribution of cations in Li-ion battery cathodes as a function of cycling is a pivotal characteristic of battery performance. The transition metal cation distribution has been shown to affect cathode performance; however, Li is notoriously challenging to characterize with typical imaging techniques. Here laser-assisted atom probe tomography (APT) is used to map the three-dimensional distribution of Li at a sub-nanometre spatial resolution and correlate it with the distribution of the transition metal cations (M) and the oxygen. As-fabricated layered Li1.2Ni0.2Mn0.6O2 is shown to have Li-rich Li2MO3 phase regions and Li-depleted Li(Ni0.5Mn0.5)O2 regions. Cycled material has an overall loss of Li in addition to Ni-, Mn- and Li-rich regions. Spinel LiNi0.5Mn1.5O4 is shown to have a uniform distribution of all cations. APT results were compared to energy dispersive spectroscopy mapping with a scanning transmission electron microscope to confirm the transition metal cation distribution. PMID:26272722

  15. The Brain as a Distributed Intelligent Processing System: An EEG Study

    PubMed Central

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-01-01

    Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657

  16. Optically controlled electrophoresis with a photoconductive substrate

    NASA Astrophysics Data System (ADS)

    Inami, Wataru; Nagashima, Taiki; Kawata, Yoshimasa

    2018-05-01

    A photoconductive substrate is used to perform electrophoresis. Light-induced manipulation of micro-particle flow is demonstrated without a fabricated flow channel. The path along which the particles move is defined by the light pattern illuminating the substrate. Because light illumination modifies the substrate conductivity and hence the electric field distribution, the forces acting on the particles can be controlled. This technique has potential applications in high-functionality analytical devices.

  17. A differential delay equation arising from the sieve of Eratosthenes

    NASA Technical Reports Server (NTRS)

    Cheer, A. Y.; Goldston, D. A.

    1990-01-01

    Consideration is given to the differential delay equation introduced by Buchstab (1937) in connection with an asymptotic formula for the uncanceled terms in the sieve of Eratosthenes. Maier (1985) used this result to show there is unexpected irregularity in the distribution of primes in short intervals. The function omega(u) is studied in this paper using numerical and analytical techniques. The results are applied to give some numerical constants in Maier's theorem.
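
    For reference, Buchstab's function omega(u) equals 1/u on 1 <= u <= 2 and satisfies the delay-differential equation (u*omega(u))' = omega(u-1) for u > 2, approaching exp(-gamma) ~ 0.5615 as u grows. A minimal numerical-marching sketch of the kind such studies employ (step size and grid are illustrative, not the paper's actual computation):

      import math

      h = 1e-4                           # grid spacing in u
      u_max = 10.0
      n = int(round((u_max - 1.0) / h)) + 1
      i0 = int(round(1.0 / h))           # number of grid steps per unit delay
      u = [1.0 + i * h for i in range(n)]
      omega = [0.0] * n

      # Analytic initial segment: omega(u) = 1/u on [1, 2].
      for i in range(i0 + 1):
          omega[i] = 1.0 / u[i]

      # Forward-Euler march on (u*omega(u))' = omega(u-1); the delayed value
      # omega(u_i - 1) is read from already-computed history at index i - i0.
      for i in range(i0, n - 1):
          omega[i + 1] = (u[i] * omega[i] + h * omega[i - i0]) / u[i + 1]

      # omega(u) tends to exp(-Euler gamma) = 0.56145... as u -> infinity.
      print(omega[-1], math.exp(-0.57721566490153286))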

  18. Amended FY 1988/1989 Biennial Budget Justification of Estimates Submitted to Congress

    DTIC Science & Technology

    1988-02-01

    will be installed in order to provide privacy for network data traffic. - Distributed systems technology will continue to be explored, including...techniques and geophysical and satellite data bases to make relevant information rapidly available to the...AMENDED FY 1988/1989...semantic model of available functions and data...AMENDED FY 1988/1989 BIENNIAL BUDGET RDT&E DESCRIPTIVE SUMMARY Program Element: fQ1Q0j, Title

  19. GLACiAR: GaLAxy survey Completeness AlgoRithm

    NASA Astrophysics Data System (ADS)

    Carrasco, Daniela; Trenti, Michele; Mutch, Simon; Oesch, Pascal

    2018-05-01

    GLACiAR (GaLAxy survey Completeness AlgoRithm) estimates the completeness and selection functions in galaxy surveys. Tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman Break technique, the code can nevertheless be applied broadly. GLACiAR generates artificial galaxies that follow Sérsic profiles with different indices and with customizable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate.
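
    The injection-recovery idea behind such completeness estimates can be illustrated with a toy sketch: a Gaussian blob stands in for a Sérsic profile and a simple peak threshold stands in for a real source extractor. All parameters here are hypothetical; this is not GLACiAR's pipeline:

      import numpy as np

      rng = np.random.default_rng(0)

      def inject_gaussian_source(image, x, y, flux, sigma=1.5):
          # Add a flux-normalized Gaussian blob at (x, y).
          yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
          image += flux * np.exp(-((xx - x) ** 2 + (yy - y) ** 2)
                                 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

      def completeness(mags, n_trials=200, zeropoint=28.0, noise=1.0, nsigma=5.0):
          fractions = []
          for mag in mags:
              flux = 10 ** (-0.4 * (mag - zeropoint))
              recovered = 0
              for _ in range(n_trials):
                  img = rng.normal(0.0, noise, size=(64, 64))
                  x, y = rng.uniform(10.0, 54.0, size=2)
                  inject_gaussian_source(img, x, y, flux)
                  # Toy detection: any pixel near the injected position above N sigma.
                  patch = img[int(y) - 3:int(y) + 4, int(x) - 3:int(x) + 4]
                  if patch.max() > nsigma * noise:
                      recovered += 1
              fractions.append(recovered / n_trials)
          return fractions

      # Recovered fraction drops from ~1 to ~0 as sources get fainter.
      print(completeness([22.0, 24.0, 26.0]))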

  20. Plasma Theory and Simulation

    DTIC Science & Technology

    1988-06-30

    equation using finite difference methods. The distribution function is represented by a large number of particles. The particles' velocities change as a...Small angle Coulomb collisions The FP equation for describing small angle Coulomb collisions can be solved numerically using finite difference techniques...A finite Fourier transform (FT) is made in z, then we can solve for each k using the following finite difference scheme [5].
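
    As a generic illustration of the finite-difference approach the snippet describes (the report's own scheme is only partially legible), a Fokker-Planck-type velocity-space diffusion term, df/dt = d/dv(D df/dv), the standard model for small-angle Coulomb collisions, can be advanced with an explicit centered scheme:

      import numpy as np

      nv, dv, dt, D = 200, 0.1, 1e-3, 1.0   # grid points, spacing, time step, diffusion coeff.
      v = (np.arange(nv) - nv // 2) * dv
      f = np.exp(-v ** 2)                   # initial velocity distribution function

      for _ in range(1000):
          # Explicit centered second difference; stable since D*dt/dv**2 = 0.1 < 0.5.
          lap = np.empty_like(f)
          lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dv ** 2
          lap[0], lap[-1] = lap[1], lap[-2]  # crude zero-gradient boundaries
          f += dt * D * lap

      print(f.sum() * dv)                   # particle number is approximately conserved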
